\begin{document}
\title{Efficient algorithms for highly compressed data: {T}he Word Problem in {H}igman's group is in {P}} \author{Volker Diekert\inst{1}, J{\"u}rn Laun\inst{1}, Alexander Ushakov\inst{2}} \institute{FMI, Universit\"at Stuttgart, Universit\"atsstr. 38, D-70569 Stuttgart, Germany\and Department of Mathematics, Stevens Institute of Technology, Hoboken, NJ 07030, USA}
\date{\today}
\maketitle
\begin{abstract} Power circuits are data structures which support efficient algorithms for highly compressed integers. Using this new data structure it has been shown recently by Myasnikov, Ushakov and Won that the Word Problem of the one-relator Baums\-lag\xspace group is in P. Before that, the best known upper bound was non-elementary. In the present paper we provide new results for power circuits and we give new applications in algorithmic algebra and algorithmic group theory: 1.~We define a modified reduction procedure on power circuits which runs in quadratic time, thereby improving the known cubic time complexity. The improvement is crucial for our other results. 2.~We improve the complexity of the Word Problem for the Baums\-lag\xspace group to cubic time, thereby providing the first practical algorithm for that problem. 3.~The main result is that the Word Problem of Higman's group is decidable in polynomial time. The situation for Higman's group is more complicated than for the Baums\-lag\xspace group and forced us to advance the theory of power circuit\xspace{}s. \end{abstract}
{\small {\bf Key words:} Data structures, Compression, Algorithmic group theory, Word Problem}
\section*{Introduction}\label{intro}
\emph{Power circuits} were introduced in \cite{muw11pc}. A power circuit is a data structure for integers which supports $+$, $-$, $\leq$, a restricted version of multiplication, and raising to the power of $2$. Thus, by iteration it is possible to represent (huge) values involving the tower function by very small circuits. Another way to say this is that efficient algorithms for power circuit\xspace{}s yield efficient algorithms for arithmetic with integers in highly compressed form. This idea of \ei{efficient algorithms for highly compressed data} is the main underlying theme of the present paper. In this sense our paper is more about compression and data structures than about algorithmic group theory. However, so far the applications are in this area.
Indeed, as a first application of power circuit\xspace{}s, \cite{muw11bg} showed that the Word Problem\xspace of the Baums\-lag\xspace group\footnote{Sometimes called the Baums\-lag\xspace-Gersten group, e.g.{} in \cite{plat04} or in a preliminary version of \cite{muw11bg}.} is solvable in polynomial time. Algorithmic questions have a long history in combinatorial group theory. In 1910 Max Dehn \cite{dehn11} formulated fundamental algorithmic problems for (finitely presented) groups. The most prominent one is the \ei{Word Problem\xspace:} ``Given a finite presentation of some fixed group $G$, decide whether an input word $w$ represents the trivial element $1_G$ in $G$.'' It took more than four decades until Novikov and Boone showed (independently) in the 1950s the existence of a fixed finitely presented group with an undecidable Word Problem\xspace \cite{nov55,boone59}. It is also true that there are finitely presented groups with a decidable Word Problem\xspace but with arbitrarily high complexity \cite[Theorem 1.3]{brs02}. In these examples the difficult instances for the Word Problem\xspace are extremely sparse (because they encode Turing machine computations) and, inherently due to the constructions, these groups never appear in any natural setting.
In contrast, the Baums\-lag\xspace group $G_{(1,2)}$ is given by a single defining relation, see \refsec{wpg}. (It is a non-cyclic one-relator group all of whose finite factor groups are cyclic \cite{baumslag69}.)
It has been a natural (and simplest) candidate for a group with a non-polynomial Word Problem\xspace in the worst case, because the Dehn function\footnote{We do not use any result about Dehn functions here, and we refer the interested reader e.g.{} to Wikipedia (or to the appendix) for a formal definition.} of $G_{(1,2)}$ is non-elementary by a result due to Gersten \cite{gersten91}, see also \cite{plat04}. Moreover, the only general way to solve the word problem in one-relator groups is the Magnus break-down procedure \cite{mag32,LS01}, which computes normal forms. It was developed in the 1930s and there has been essentially no progress since. Its time complexity on $G_{(1,2)}$ is non-elementary, i.e., it cannot be bounded by any fixed tower of exponents.
So, the question of the algorithmic hardness of the Word Problem\xspace in one-relator groups is still wide open. Some researchers conjecture that it is polynomial (even quadratic, see \cite{BMS}), based on observations on generic-case complexity \cite{KMSS1}. Others conjecture that it cannot be polynomial, based on the fast-growing Dehn functions. (Note that the Dehn function gives a lot of information about the group. E.g., if it is linear, then the group is hyperbolic, and the Word Problem\xspace is solvable in linear time. If it is computable, then the Word Problem\xspace is decidable \cite{MadlenerO88}.)
The contributions of the present paper are as follows: In a first part, we give new efficient manipulations of the data structure of \emph{power circuit\xspace{s}}.
Concretely, we define a new reduction procedure called \proc{ExtendTree}\xspace on power circuits. It improves the complexity of the reduction algorithm from cubic to quadratic time. This is our first result. It turns out to be essential, because reduction is a fundamental tool and is frequently applied as a black-box operation. For example, with the help of a better reduction algorithm (and some other ideas) we can, as our second result, improve the complexity of the Word Problem\xspace in $G_{(1,2)}$ significantly from $\mathcal{O}(n^7)$ in \cite{muw11bg} down to $\mathcal{O}(n^3)$, \refthm{wpbg}. This cubic algorithm is the first practical algorithm for that problem which works on all reasonably short instances.
The basic structure in our paper is the domain of rational numbers whose denominators are restricted to powers of two. Thus, we are working in the ring $\mathbb{Z}[1/2]$. We view $\mathbb{Z}[1/2]$ as an Abelian group where multiplication by $1/2$ is an automorphism. This in turn can be embedded into a semi-direct product $\sdpz$, which is the set of pairs $(r,k) \in \mathbb{Z}[1/2] \times \mathbb{Z}$ with the multiplication $(r,k)(s,\ell) = (r + 2^k s, k+ \ell)$. There is a natural partially defined swap operation which interchanges the first and second component. Semi-direct products (or more generally, wreath products) appear in various places as basic mathematical objects, and this makes the algebra $\sdpz$ with swapping interesting in its own right. This algebra has a Word Problem\xspace in $\mathcal{O}(n^4)$ (\refthm{wpsdpz}). The Word Problem\xspace of $G_{(1,2)}$ can be understood as a special case with the better $\mathcal{O}(n^3)$ performance.
Another new application of power circuit\xspace{s} shows that the Word Problem\xspace in Higman's group $H_4$ is decidable in polynomial time. (We could also consider any $H_q$ with $q \geq 4$.) This is our third and main result. Higman \cite{higman51} constructed $H_4$ in 1951 as the first example of a finitely presented
infinite group where all finite quotient groups are trivial. This leads immediately to a finitely generated infinite simple quotient group of $H_4$; and no such group was known before Higman's construction. The group $H_4$ is constructed by a few operations involving amalgamation (see e.g. \cite{serre80}). Hence, a Magnus break-down procedure (for amalgamated products) yields decidability of the Word Problem\xspace. The procedure computes normal forms, but the length of normal forms can be a tower function in the input length. (More accurately, one can show that the Dehn function of $H_4$ grows like a tower function \cite{bridson10}.) Thus, Higman's group has been another natural, but rather complicated candidate for a finitely presented group with an extremely hard Word Problem\xspace. Our paper eliminates $H_4$ as a candidate: We show that the Word Problem\xspace of $H_4$ is in $\mathcal{O}(n^6)$ (\refthm{wph}).
We obtain this result by new techniques for efficient manipulations of multiple markings in a single power circuit\xspace and their ability for huge compression rates. Compression techniques have been applied elsewhere for solving word problems, \cite{Lohrey06siam,LohreyS07,schleimer08,HauboldL09}. But in these papers the authors use straight-line programs whose compression rates are far too small (at best exponential) to cope with Baums\-lag\xspace or Higman groups.
Due to lack of space, a few proofs are deferred to the appendix.
\section{Notation and preliminaries}\label{notation} Algorithms and (decision) problems are classified by their \ei{time complexity} on a random-access machine (RAM). Frequently we use the notion of \ei{amortized analysis} with respect to a \ei{potential function}, see e.g. in \cite[Sect. 17.3]{CLRS09}.
The \ei{tower function} $\tau:\mathbb{N} \to 2^\mathbb{N}$ is defined as usual: $\tau(0) = 1$ and $\tau(i+1) = 2^{\tau(i)}$ for $i \geq 0$. Thus, e.g. $\tau(4) = 2^{2^{2^{2^{1}}}}= 2^{16}$, and $\tau(6)$ written in binary requires more bits than the estimated number of electrons in the universe.
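For illustration, the defining recursion can be checked with a few lines of Python (the function name is ours):

```python
def tau(i: int) -> int:
    """Tower function: tau(0) = 1, tau(i+1) = 2^tau(i)."""
    v = 1
    for _ in range(i):
        v = 2 ** v
    return v

# The first few values, up to tau(4) = 2^16:
assert [tau(n) for n in range(5)] == [1, 2, 4, 16, 65536]
```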
We use standard notation and facts from group theory as the reader can find in the classical textbook \cite{LS01}. In particular, we apply the standard (so-called Magnus break-down) procedure for solving the word problem in HNN-extensions and amalgamated products. All HNN-extensions and amalgamated products in this paper have an explicit finite presentation. Amalgamated products are denoted by $G*_A H$ where $A$ is a subgroup of both $G$ and $H$. The formal definition of $G*_A H$ first creates a disjoint copy $H'$ of $H$. Then one considers the free product $G* H'$ and adds defining relations identifying each $a \in A$ with its copy $a' \in H'$. We refer to \cite{serre80} for the basic facts about Higman groups.
\section{Power circuits}\label{PCs}
This section is based on \cite{muw11pc}, but we also provide new material like our treatment of multiple markings and improved time complexities.\footnote{In order to keep the paper self-contained and as we use a slightly different notation we give full proofs in the appendix.} Let $\GG$ be a set and $\delta$ be a mapping $\delta: \GG \times \GG\to \oneset{-1,0,+1}$. This defines a directed graph $(\GG, \DD)$, where $\GG$ is the set of vertices and the set of directed arcs (or edges) is $\DD=\set{(P,Q)\in \GG \times \GG}{\delta(P,Q)\neq 0}$ (the support of the mapping $\delta$). Throughout we assume that $(\GG, \DD)$ is a \ei{dag} (\ei{directed acyclic graph}). In particular, $\delta(P,P)=0$ for all vertices $P$.
A \ei{marking} is a mapping $M:\GG\to\oneset{-1,0,+1}$. We can also think of a marking as a subset of $\GG$ where each element in $M$ has a sign ($+$ or $-$). (Thus, we also speak about a \ei{signed subset}.) Each node $P\in \GG$ is associated in a natural way with a marking, which is called its $\LL$-marking $\LL_P$ and which is defined as follows: $$\LL_P: \GG\to \oneset{-1,0,+1}, \; Q \mapsto \delta(P,Q)$$ Thus, the marking $\LL_P$ is the signed subset which corresponds to the targets of outgoing arcs from $P$.
We define the \ei{evaluation} $\eps(P)$ of a node (resp. $\eps(M)$ of a marking) bottom-up in the dag by induction: \begin{align*} \eps(P) &= 2^{\eps(\LL_P)} &\text{for a node $P$}, \\ \eps(M) &= \sum_{P}M(P)\eps(P) &\text{for a marking $M$}. \end{align*} Note that leaves evaluate to $1$, the evaluation of a marking is a real number, and the evaluation of a node $P$ is a positive real number. Thus, $\eps(P)$ and $\eps(M)$ are well-defined. We have the following nice formula for nodes: $ \log_2(\eps(P)) = \eps(\LL_P)$. Therefore we can view the marking $\LL_P$ as ``taking the logarithm of $P$''.
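For concreteness, here is a minimal Python sketch of these recursive definitions; the encoding of $\delta$ as a nested dictionary and all names are ours:

```python
def eval_node(P, delta, memo=None):
    # eps(P) = 2^{eps(Lambda_P)}, where Lambda_P(Q) = delta(P, Q)
    if memo is None:
        memo = {}
    if P not in memo:
        memo[P] = 2 ** eval_marking(delta.get(P, {}), delta, memo)
    return memo[P]

def eval_marking(M, delta, memo=None):
    # eps(M) = sum_P M(P) * eps(P)
    return sum(sign * eval_node(Q, delta, memo) for Q, sign in M.items() if sign)

# P0 is a leaf (eps = 1), P1 -> P0 (eps = 2^1 = 2), P2 -> P1 (eps = 2^2 = 4);
# the marking P2 - P0 evaluates to 4 - 1 = 3.
delta = {"P0": {}, "P1": {"P0": 1}, "P2": {"P1": 1}}
assert eval_node("P2", delta) == 4
assert eval_marking({"P2": 1, "P0": -1}, delta) == 3
```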
\begin{definition}\label{def:PC} A \ei{power circuit\xspace} is a pair $\Pi=(\GG,\delta)$ with $\delta: \GG \times \GG\to \oneset{-1,0,+1}$ such that $(\GG, \DD)$ is a dag as above with the additional property that $\eps(M)\in\mathbb{Z}$ for all markings $M$. \end{definition}
We will see later in \refcor{PCtest} that it is possible to check in quadratic time whether or not a dag $(\GG, \DD)$ is a power circuit\xspace. (One checks $\eps(\LL_P)\geq 0$ for all nodes $P$, \reflem{lem:integervalues} in the appendix.)
\begin{example}\label{binarybasis} We can represent every integer in the range $[-n,n]$ as the evaluation of some marking in a power circuit\xspace with node set $\oneset{P_0 \lds P_\ell}$ such that $\eps(P_i) =2^{i}$ for $0 \leq i \leq \ell$ and $\ell = \floor{\log_2 n}$. Thus, we can convert the binary notation of an integer $n$ into a power circuit\xspace with $\mathcal{O}(\log \abs n)$ vertices and $\mathcal{O}((\log \abs n)\log\log \abs n)$ arcs. \end{example}
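The construction of the example can be sketched as follows (a Python sketch for non-negative $n$; the dictionary encoding of the marking is ours):

```python
def binary_marking(n: int) -> dict:
    # Marking on chain nodes P_0, ..., P_l with eps(P_i) = 2^i,
    # following the binary digits of n >= 0.
    marking, i = {}, 0
    while n:
        if n & 1:
            marking[i] = 1  # node P_i gets sign +1
        n >>= 1
        i += 1
    return marking

M = binary_marking(137)               # 137 = 2^7 + 2^3 + 2^0
assert M == {0: 1, 3: 1, 7: 1}
assert sum(sign * 2 ** i for i, sign in M.items()) == 137
```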
\begin{example}\label{powtow} A power circuit\xspace can realize tower functions, since a line of $n+1$ nodes allows us to represent $\tau(n)$ as the evaluation of the last node. \end{example}
Sometimes it is convenient to think of a marking $M$ as a formal sum $M = \sum_{P} M(P)P$. In particular, $-M$ denotes a marking with $\eps(-M) = -\eps(M)$. For a marking $M$ we denote by $\sigma(M)$ its \ei{support}, i.e., $$\sigma(M)= \set{P\in \GG}{M(P)\neq 0}\subseteq \GG.$$
We say that $M$ is \ei{compact} if $\eps(P) \neq \eps(Q)$ and $\eps(Q) \neq 2 \eps(P)$ for all $P,Q \in \sigma(M)$ with $P\neq Q$. If $M$ is compact, then we have $\eps(M)=0$ if and only if\xspace $\sigma(M)=\emptyset$, and we have $\eps(M)>0$ if and only if\xspace $M(P)$ is positive for the node $P$ having the maximal value in $\sigma(M)$.
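The sign criterion can be checked by brute force on a small instance (a Python sketch; the concrete node values are our choice):

```python
from itertools import product

# Node values 1, 4, 16: pairwise distinct and none is twice another,
# so every sign pattern over them yields a compact marking.
values = [1, 4, 16]

def sign_criterion_holds():
    for signs in product([-1, 0, 1], repeat=len(values)):
        total = sum(s * v for s, v in zip(signs, values))
        marked = [s for s in signs if s]   # signs of marked nodes, by ascending value
        if not marked:
            ok = (total == 0)
        else:
            # eps(M) > 0 iff the largest marked node carries sign +1
            ok = (total > 0) == (marked[-1] == 1)
        if not ok:
            return False
    return True

assert sign_criterion_holds()
```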
The insertion of a new node $\proc{clone}\xspace(P)$ without incoming arcs and with $\LL_{\proc{clone}\xspace(P)}=\LL_P$ is called \ei{cloning of a node} $P$. It is extended to markings, where $\proc{clone}\xspace(M)$ is obtained by cloning all nodes in $\sigma(M)$ and defining $M(\proc{clone}\xspace(P))=M(P)$ for $P\in\sigma(M)$ and $M(\proc{clone}\xspace(P))=0$ otherwise. We say that $M$ is a \ei{source}, if no node in $\sigma(M)$ has any incoming arcs. Note that $\proc{clone}\xspace(M)$ is always a source.
If $M = \sum_{P} M(P)P$ and $K = \sum_{P} K(P)P$ are markings, then $M +K = \sum_{P} (M(P)+K(P))P$ is a formal sum where coefficients $-2$ and $2$ may appear. For $M(P)+K(P)= \pm 2$, let $P'=\proc{clone}\xspace(P)$. We define a marking $(M+K)'$ by putting $(M+K)'(P)= (M+K)'(P')= \pm 1$. In this way we can realize addition (and subtraction) in a power circuit\xspace by cloning at most $\abs{\sigma(M) \cap \sigma(K)}$ nodes.
Next, consider markings $U$ and $X$ with $\eps(U)=u$ and $\eps(X)= x$ such that $u2^x \in \mathbb{Z}$ (e.g. due to $x \geq 0$). We obtain a marking $V$ with $\eps(V)=u2^x$ and $\abs{\sigma(V)}= \abs{\sigma(U)}$ as follows. First, let $V=\proc{clone}\xspace(U)$ and $X'=\proc{clone}\xspace(X)$. Next, introduce additional arcs between all $P'\in\sigma(V)$ and $Q'\in X'$ with $\delta(P',Q')=X'(Q')$. Note that the cloning of $X$ avoids double arcs from $V$ to $X$. The cloning of $U$ is not necessary, if $U$ happens to be a source.
We now introduce an alternative representation for {power circuit\xspace}s which allows us to compare markings efficiently. The process of transforming a power circuit\xspace $\Pi$ into this so-called tree representation is referred to as \ei{reduction of a power circuit\xspace}.
\begin{definition}\label{def:treerep} A \ei{tree representation} of a power circuit\xspace $\Pi=(\GG,\delta)$ consists of \begin{enumerate}[i)] \item $\GG$ as a list $[P_1\lds P_n]$ such that $\eps(P_i)<\eps(P_{i+1})$ for all $1\le i<n$, \item a bit vector $b(1)\lds b(n-1)$ where $b(i)=1$ if and only if\xspace $2\eps(P_i)=\eps(P_{i+1})$, and \item\label{def:treerep:tree} a ternary tree of height $n$, where each node has at most three outgoing edges, labeled by $+1$, $0$, and $-1$. All leaves are at level $n$, and each leaf represents the marking $M:\GG\to\oneset{-1,0,+1}$ given by the labels of the unique path of length $n$ from the leaf to the root of the tree. These markings must be compact. Furthermore, all markings $\LL_P$ for $P\in\GG$ are represented by leaves. Finally, for each level in the tree $T$, we keep a list of the nodes in that level. \end{enumerate} \end{definition}
If a path from some leaf to the root is labeled $(0,+1,-1,0,+1)$, then that leaf represents the marking $P_2-P_3+P_5$ and we know (due to compactness) $\eps(P_2)<2\eps(P_3)<4\eps(P_5)$. The amount of memory for storing a tree representation is bounded by $\mathcal{O}(\abs{\GG}\cdot(\text{number of leaves}))$. Part \ref{def:treerep:tree}) of the definition is only needed inside the procedure \proc{ExtendTree}\xspace, which is explained in the appendix. For simplicity the reader might want to ignore it in a first reading and just think of a tree representation as the power circuit\xspace graph plus an ordering of the nodes and a bit vector keeping track of doubles.
\begin{proposition}[\cite{muw11pc}]\label{testintree} There is a $\mathcal{O}(\abs{\GG})$ time algorithm which on input a tree representation $\Delta$ of a power circuit\xspace and two markings $K$ and $M$ (given as leaves) compares $\eps(K)$ and $\eps(M)$. It outputs whether the two values are equal and if not, which one of them is larger. In the latter case it also tells whether their difference is $1$ or $\ge 2$. \end{proposition}
\begin{proof} Start at the root of the tree and go down the paths corresponding to $K$ and $M$ in parallel. The first pair of different labels on these paths determines the larger value. The check whether $\eps(K)=\eps(M)+1$ is equally easy due to compactness. \qed\end{proof}
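The comparison part of this argument can be sketched as follows (Python; we replace the tree traversal by a scan over sign vectors and omit the difference-one test, so this is only an illustration of the proof idea):

```python
def compare_compact(K, M):
    # K, M: sign vectors over the sorted node list [P_1 < ... < P_n],
    # index 0 = smallest node.  Scanning from the most significant node,
    # the first differing sign decides the order: for compact markings
    # the remaining smaller nodes cannot outweigh that node.
    for k, m in zip(reversed(K), reversed(M)):
        if k != m:
            return 1 if k > m else -1
    return 0

vals = [1, 4, 16]                     # a compact support pattern
def eps(signs):
    return sum(s * v for s, v in zip(signs, vals))

K, M = [1, -1, 1], [-1, 0, 1]         # eps(K) = 13, eps(M) = 15
assert compare_compact(K, M) == -1 and eps(K) < eps(M)
assert compare_compact(M, M) == 0
```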
\begin{definition}\label{def:chain} Let $\Pi=(\GG,\delta)$ be a power circuit\xspace. A chain (of length $r$) in $\Pi$ is a sequence of nodes $(P_0,P_1\lds P_r)$ where $\eps(P_i)=2^i\eps(P_0)$ ($0\le i\le r$). A chain is maximal if it is not part of a longer chain. The number of maximal chains in $\Pi$ is denoted $\ensuremath{\mathrm{c}}(\Pi)$. We define the \ei{potential} of $\Pi$ to be $\mathop{\mathrm{pot}}(\Pi)=\ensuremath{\mathrm{c}}(\Pi)\cdot\abs{\GG}$. \end{definition}
The following statement uses amortized time w.r.t.{} the potential function $\mathop{\mathrm{pot}}(\Pi)$. Note that the potential $\mathop{\mathrm{pot}}(\Pi)$ remains bounded by $\abs{\GG}^2$ since $\ensuremath{\mathrm{c}}(\Pi)\le\abs{\GG}$.
\begin{theorem}\label{extendtree} The following procedure $\proc{ExtendTree}\xspace$ runs in amortized time $\mathcal{O}((\abs{\GG}+\abs{U})\cdot\abs{U})$:
Input: A dag $\Pi=(\GG\dot\cup U,\delta)$, where $\GG$ and $U$ are disjoint with no arcs pointing from $\GG$ to $U$ and such that $(\GG,\delta\vert_{\GG\times\GG})$ is a power circuit\xspace in tree representation. The potential is defined by the potential of its power circuit\xspace-part $\mathop{\mathrm{pot}}((\GG,\delta\vert_{\GG\times\GG})).$ The output of the procedure is "no", if $\Pi$ is not a power circuit\xspace (because $\eps(P)\not\in\mathbb{Z}$ for some node $P$). In the other case, the output is a tree representation of a power circuit\xspace $\Pi'=(\GG',\delta')$ where: \begin{enumerate}[i)] \item\label{et:sub} $\GG \subseteq \GG'$ and $\delta\vert_{\GG\times\GG}= \delta'\vert_{\GG\times\GG}$. \item\label{et:size} $\abs{\GG'}\le\abs{\GG}+3\abs{U}+(\ensuremath{\mathrm{c}}(\Pi)-\ensuremath{\mathrm{c}}(\Pi'))$ \item\label{et:map} For all $Q\in U$ there exists a node $Q'\in\GG'$ with $\eps(Q)=\eps(Q')$. \item\label{et:mark} For every marking $M$ in $\Pi$ there exists a marking $M'$ in $\Pi'$ with $\eps(M')=\eps(M)$ and $\abs{\sigma(M')}\leq\abs{\sigma(M)}$. \end{enumerate} For $\sigma(M)\subseteq\GG$ we can choose $M=M'$ by \ref{et:sub}). If some further markings $M_1,M_2\lds M_m$ are part of the input (those where $\sigma(M)\cap U\neq \emptyset$), we need additional amortized time $\mathcal{O}(\abs{\sigma(M_1)}+\cdots+\abs{\sigma(M_m)}+m\cdot(\abs{\GG}+\abs{U}))$ to find the corresponding $M_1',M_2'\lds M_m'$. \end{theorem}
\begin{corollary}\label{maketree} There is a $\mathcal{O}(\abs{\GG}^2)$ time procedure \proc{MakeTree}\xspace that given a (graph representation) of a power circuit\xspace $\Pi=(\GG,\delta)$ computes a tree representation. The number of nodes at most triples. \end{corollary}
\begin{proof} This is the special case of \refthm{extendtree} in which the tree-represented part $\GG$ is empty and $U$ consists of all nodes of $\Pi$. \qed\end{proof}
\begin{corollary}\label{PCtest} The test whether a dag $(\GG,\delta)$ defines a power circuit\xspace can be done in $\mathcal{O}(\abs{\GG}^2)$. \end{corollary}
The efficiency of \proc{MakeTree}\xspace is crucial for all our results. In particular, \refcor{maketree} improves the cubic time complexity of \cite{muw11pc} for reduction to quadratic.
\section{Arithmetic in the semi-direct product $\mathbb{Z}[1/2] \rtimes \mathbb{Z}$}\label{semi}
The basic data structure for this paper deals with the semi-direct product $\mathbb{Z}[1/2] \rtimes \mathbb{Z}$. Here $\mathbb{Z}[1/2]$ denotes the ring of rational numbers with denominators in $2^\mathbb{N}$. Thus, an element in $\mathbb{Z}[1/2]$ is a rational number $r$ which can be written as $r = u2^x$ with $u,x \in \mathbb{Z}$. We view $\mathbb{Z}[1/2]$ as an abelian group with addition. Multiplication by $2$ defines an automorphism of $\mathbb{Z}[1/2]$, and hence the semi-direct product $\mathbb{Z}[1/2] \rtimes \mathbb{Z}$ becomes a (non-commutative) group where elements are pairs $(r,m) \in \mathbb{Z}[1/2] \times \mathbb{Z}$ and with the following explicit formula for multiplication: $$ (r,m) \cdot (s,n) = (r + 2^m s, m+n)$$ The semi-direct product $\mathbb{Z}[1/2] \rtimes \mathbb{Z}$ is also isomorphic to a group with two generators $a$ and $t$ and the defining relation $ta t^{-1}= a^2$. This group is known as the Baumslag-Solitar group $\mathrm{\bf{BS}}(1,2)$. The isomorphism from $\mathrm{\bf{BS}}(1,2)$ to $\mathbb{Z}[1/2] \rtimes \mathbb{Z}$ maps $a$ to $(1,0)$ and $t$ to $(0,1)$. This is a homomorphism\xspace due to $(0,1)(1,0)(0,-1)= (2,0)$. It is straightforward to see that it is actually bijective.
We have $(r,m)^{-1} = (-r2^{-m}, -m)$ in $\mathbb{Z}[1/2] \rtimes \mathbb{Z}$, and a sequence of $s$ group operations may lead to exponentially large or exponentially small values in the first component. Binary representation can cope with these values, so there is no real need for power circuit\xspace{}s when dealing with the group operation alone.
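The multiplication and inversion formulas, together with the defining relation of $\mathrm{\bf{BS}}(1,2)$ under the stated isomorphism, can be verified directly (a Python sketch using exact arithmetic; all names are ours):

```python
from fractions import Fraction

def mul(p, q):
    # (r, m)(s, n) = (r + 2^m s, m + n)
    (r, m), (s, n) = p, q
    return (r + Fraction(2) ** m * s, m + n)

def inv(p):
    # (r, m)^{-1} = (-r 2^{-m}, -m)
    r, m = p
    return (-r * Fraction(2) ** (-m), -m)

a, t = (Fraction(1), 0), (Fraction(0), 1)   # images of a and t
assert mul(mul(t, a), inv(t)) == mul(a, a)  # t a t^{-1} = a^2
assert mul(a, inv(a)) == (0, 0)             # inversion formula
```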
We equip $\mathbb{Z}[1/2] \rtimes \mathbb{Z}$ with a partially defined \ei{swap operation}. For $(r,m) \in \mathbb{Z} \times \mathbb{Z} \subseteq \mathbb{Z}[1/2] \rtimes \mathbb{Z}$ we define $\s(r,m) = (m,r)$. This looks innocent, but note that a sequence of $2^{\mathcal{O}(n)}$ defined operations starting with $(1,0)$ may yield a pair $(0,\tau(n))$ where $\tau$ is the tower function. Indeed $\s(1,0) = (0,1) = (0,\tau(0))$ and \begin{equation}\label{swaptower} \s\bigl((0,\tau(n))(1,0)(0,-\tau(n))\bigr) = \s(\tau(n+1),0) = (0,\tau(n+1)). \end{equation}
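This tower growth can be replayed directly with plain integers (a Python sketch; the values explode, so we only iterate up to $\tau(4)=65536$):

```python
def mul(p, q):
    (r, k), (s, l) = p, q
    return (r + 2 ** k * s, k + l)   # below, the exponent k is always >= 0 or s = 0

def swap(p):
    r, m = p
    return (m, r)                    # defined on Z x Z only

x = swap((1, 0))                     # (0, tau(0)) = (0, 1)
for n in range(4):
    t = x[1]
    y = mul(mul((0, t), (1, 0)), (0, -t))   # = (2^tau(n), 0) = (tau(n+1), 0)
    x = swap(y)                             # = (0, tau(n+1))
assert x == (0, 65536)               # tau(4)
```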
However, we will show in section \ref{app}:
\begin{theorem}\label{wpsdpz} The Word Problem\xspace of the algebra $\sdpz$ with swapping is decidable in $\mathcal{O}(n^4)$. \end{theorem}
We use triples to denote elements in $\mathbb{Z}[1/2] \rtimes \mathbb{Z}$. A triple $[u,x,k]$ with $u,x,k\in \mathbb{Z}$ and $x \leq 0 \leq k$ denotes the pair $(u2^x,k+x)\in \sdpz$. For each element in $\sdpz$ there are infinitely many corresponding triples. Using the generators $a$ and $t$ of $\mathrm{\bf{BS}}(1,2)$ one can write: \begin{align*} [u,x,k] = (u2^x,k+x) &= (0,x)(u,k) \in \mathbb{Z}[1/2]\rtimes\mathbb{Z}\\ & = t^x a^u t^k \in \mathrm{\bf{BS}}(1,2)\text{ and}\\ [u,x,k] \cdot [v,y,\ell] &= [u 2^{-y} + v2^{k}, x+y, k + \ell] \end{align*}
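The triple multiplication formula can be checked against the pair multiplication (a Python sketch with exact arithmetic; all names are ours):

```python
from fractions import Fraction

def pair(u, x, k):
    # The triple [u, x, k] with x <= 0 <= k denotes the pair (u 2^x, k + x).
    return (u * Fraction(2) ** x, k + x)

def pair_mul(p, q):
    (r, m), (s, n) = p, q
    return (r + Fraction(2) ** m * s, m + n)

def triple_mul(t1, t2):
    # [u,x,k][v,y,l] = [u 2^{-y} + v 2^k, x+y, k+l]; here -y and k are >= 0
    (u, x, k), (v, y, l) = t1, t2
    return (u * 2 ** (-y) + v * 2 ** k, x + y, k + l)

t1, t2 = (5, -2, 3), (-7, -1, 4)
assert pair(*triple_mul(t1, t2)) == pair_mul(pair(*t1), pair(*t2))
```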
In the following we use power circuit\xspace{}s with \ei{triple markings} for elements in $\mathbb{Z}[1/2] \rtimes \mathbb{Z}$. We consider $T = [U,X,K]$, where $U,X,K$ are markings in a power circuit\xspace with $\eps(U) = u$ and $\eps(X) =x \leq 0 \leq \eps(K)= k$; and we define $\eps(T)\in \mathbb{Z}[1/2] \rtimes \mathbb{Z}$ to be the element $\eps(T) = [u,x,k] = (u2^x,x+k)$.
\section{Solving the Word Problem\xspace in the Baums\-lag\xspace group}\label{wpg}
The Baums\-lag\xspace group $G_{(1,2)}$ is a one-relator group with two generators $a$ and $b$ and the defining relation $a^{a^b}=a^2$. (The notation $g^h$ means conjugation, here $g^h =h g h^{-1}$. Hence $a^{a^b}= b a b^{-1} a b a^{-1} b^{-1}$.) The group $G_{(1,2)}$ can be written as an HNN extension of $\mathrm{\bf{BS}}(1,2)\simeq\mathbb{Z}[1/2]\rtimes\mathbb{Z}$ with \ei{stable letter} $b$; and $\mathrm{\bf{BS}}(1,2)$ is an HNN extension of $\mathbb{Z}\simeq \gen{a}$ with \ei{stable letter} $t$: \begin{align*} \langle a,b\mid a^{a^b}=a^2\rangle &\simeq\langle a,t,b\mid a^t=a^2,a^b=t\rangle\cr &\simeq\mathrm{HNN}(\langle a,t\mid a^t=a^2\rangle,b,\langle a\rangle\simeq\langle t\rangle)\\ &\simeq\mathrm{HNN}\left(\mathrm{HNN}(\langle a\rangle,t,\langle a\rangle\simeq\langle a^2\rangle),b,\langle a\rangle\simeq\langle t\rangle\right) \end{align*}
Before the work of Myasnikov, Ushakov and Won \cite{muw11bg}, $G_{(1,2)}$ had been a possible candidate for a one-relator group with an extremely hard (non-elementary) word problem in the worst case by the result of Gersten \cite{gersten91}. (Indeed, the tower function is visible as follows: Let $T(0) = t$ and $T(n+1) = b T(n) a T(n)^{-1}b ^{-1}$. Then $T(n) = t^{\tau(n)}$ by a translation of \refeq{swaptower}.) The purpose of this section is to improve the $\mathcal{O}(n^7)$ time estimate of \cite{muw11bg} to cubic time. \refthm{wpbg} also yields the first practical algorithm to solve the Word Problem\xspace in the Baums\-lag\xspace group in a worst-case scenario\footnote{It is easy to design simple algorithms which perform extremely well on random inputs. But all these algorithms fail on certain short instances, e.g. in showing $tT(6) = T(6)t$.}.
\begin{theorem}\label{wpbg} The Word Problem\xspace of the Baums\-lag\xspace group $G_{(1,2)}$ is decidable in time $\mathcal{O}(n^3)$. \end{theorem}
\begin{proof} We assume that the input is already in compressed form given by a sequence of letters $b^{\pm 1}$ and pairwise disjoint power circuit\xspace{}s each of them with a triple marking $[U,X,K]$ representing an element in $\sdpz$, which in turn encodes a word over $a^{\pm 1}$'s and $t^{\pm 1}$'s.
We use the following invariants: \begin{enumerate}[i)] \item $U,X,K$ have pairwise disjoint supports. \item $U$ is a source. \item All incoming arcs to $X\cup K$ have their origin in $U$. \item Arcs from $U$ to $X$ have the opposite sign of the corresponding node-sign in $X$. \end{enumerate}
These are clearly satisfied in case we start with a sequence of $a^{\pm 1}$'s, $t^{\pm 1}$'s, and $b^{\pm 1}$'s. The formula $[u,x,k] \cdot [v,y,\ell] = [u 2^{-y} + v2^{k}, x+y, k + \ell]$ allows us to multiply elements in $\sdpz$ without destroying the invariants or increasing the total number of nodes in the power circuit\xspace{}s (the invariants make sure that cloning is not necessary). The total number of multiplications is bounded by $n$. Taking into account that there are at most $n^2$ arcs, we are within the time bound $\mathcal{O}(n^3)$.
Now we perform from left-to-right Britton re\-duc\-tion\xspace{}s, see \cite{LS01}. In terms of group generators this means to replace factors $b a^{s} b^{-1}$ by $t^s$ and $b^{-1} t^{s} b$ by $a^s$. Thus, if we see a subsequence $b [u,x,k] b^{-1}$, then we must check if $x+k=0$ and after that if $u2^x\in\mathbb{Z}$. If we see a subsequence $b^{-1} [u,x,k] b$, then we must check $u=0$. In the positive case we swap, in the other case we do nothing. Let us give the details: For a test we compute a tree representation of the circuit using \proc{MakeTree}\xspace which takes time $\mathcal{O}(n^2)$. After each test for a Britton re\-duc\-tion\xspace, the tree representation is deleted. There are two possibilities for necessary tests. \begin{enumerate}[1.)] \item $u = 0$. If yes, remove in the original power circuit\xspace the source $U$, this makes $X \cup K$ a source; replace $[u,x,k]$ by $[x+k, 0, 0]$. The invariants are satisfied. \item $x+k = 0$. If yes, check whether $u2^x \in \mathbb{Z}$. If yes, replace $[u,x,k]$ in the original power circuit\xspace by either $[0,u2^x,0]$ or $[0,0,u2^x]$ depending on whether $u2^x$ is negative or positive. We get $u2^x$ without increasing the number of nodes, since arcs from $U$ to $X$ have the opposite signs of the node-signs in $X$. Thus, if $E$ has been the set of arcs before the test, it is switched to $U \times X \setminus E$ after the test. The new marking for $u2^x$ is a source and does not introduce any cycle, because its support is still the support of the source $U$. \end{enumerate} It is easy to see that computing a Britton re\-duc\-tion\xspace on an input sequence of size $n$, we need at most $2n$ tests and at most $n$ of them are successful. Hence we are still within the time bound $\mathcal{O}(n^3)$.
At the end we have computed in time $\mathcal{O}(n^3)$ a Britton-re\-du\-ced\xspace normal form where inner parts (i.e. the ones not involving $b^{\pm 1}$'s) are given as disjoint power circuit\xspace{}s. The result follows straightforwardly. \qed \end{proof}
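The Britton reduction scheme of this proof can be prototyped with exact arithmetic in $\mathbb{Z}[1/2]\rtimes\mathbb{Z}$ instead of power circuits (a Python sketch; without compression the intermediate numbers may grow exponentially, so there is no polynomial bound here, and the token encoding is ours):

```python
from fractions import Fraction

def mul(p, q):
    (r, m), (s, n) = p, q
    return (r + Fraction(2) ** m * s, m + n)

def britton_reduce(tokens):
    # tokens alternate between ('b', e) with e = +1/-1 and ('g', (r, m))
    # with (r, m) in Z[1/2] x Z; consecutive b-letters must be separated
    # by an explicit identity element ('g', (0, 0)).
    stack = []
    for tok in tokens:
        stack.append(tok)
        while True:
            # merge adjacent BS(1,2)-elements
            if len(stack) >= 2 and stack[-1][0] == 'g' == stack[-2][0]:
                stack[-2:] = [('g', mul(stack[-2][1], stack[-1][1]))]
                continue
            # pinches: b a^s b^-1 -> t^s and b^-1 t^s b -> a^s
            if (len(stack) >= 3 and stack[-1][0] == 'b' == stack[-3][0]
                    and stack[-3][1] == -stack[-1][1] and stack[-2][0] == 'g'):
                r, m = stack[-2][1]
                if stack[-3][1] == 1 and m == 0 and Fraction(r).denominator == 1:
                    stack[-3:] = [('g', (0, int(r)))]   # b a^s b^-1 = t^s
                    continue
                if stack[-3][1] == -1 and r == 0:
                    stack[-3:] = [('g', (m, 0))]        # b^-1 t^s b = a^s
                    continue
            break
    return stack

# The relator b a b^-1 a b a^-1 b^-1 a^-2 reduces to the identity:
word = [('b', 1), ('g', (1, 0)), ('b', -1), ('g', (1, 0)),
        ('b', 1), ('g', (-1, 0)), ('b', -1), ('g', (-2, 0))]
assert britton_reduce(word) == [('g', (0, 0))]
```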
\section{Higman groups}
The Higman group $H_q$ has a finite presentation with generators $a_1 \lds a_q$ and defining relations $a_p a_{p-1} a_p^{-1} = a_{p-1}^2$ for all $p \in \mathbb{Z}/ q \mathbb{Z}$. From now on we interpret indices $p$ for generators $a_p$ as elements of $\mathbb{Z}/q\mathbb{Z}$. In particular, $a_q = a_0$ and one of the defining relations says $a_1 a_{q} a_1^{-1} = a_{q}^2$. It is known \cite{serre80} that $H_q$ is trivial for $q \leq 3$ and infinite for $q \geq 4$. Hence, in the following we assume $q \geq 4$. The group $H_4$ was the first example of an infinite finitely presented group where all finite quotient groups are trivial. It has been another potential natural candidate for a group with an extremely hard (non-elementary) word problem in the worst case. Indeed, define: \begin{align*} w(p,0) &= a_p &\text{ for } p \in \mathbb{Z} / q \mathbb{Z}\\ w(p-1,i+1) &= w(p,i)a_{p-1} w(p,i)^{-1}&\text{ for } i \in \mathbb{N}\text{ and } p \in \mathbb{Z} / q \mathbb{Z} \end{align*} By induction, $w(p,n) = a_p^{\tau(n)}\in H_q$, where $\tau(n)$ is the $n$-th value of the tower function, but the length of the words $w(p,n)$ is only $2^{n+1} -1$. Hence there is a ``tower-sized gap'' between input length and length of a canonical normal form.\footnote{This can be made more precise (and rigorous) by saying that the \ei{Dehn function} of $H_4$ grows like a tower function (\cite{bridson10}).}
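The gap is visible in the two recurrences: the word length obeys $L(n+1)=2L(n)+1$, while conjugating $a_{p-1}$ by $a_p^{e}$ doubles $e$ times, so the exponent obeys $E(n+1)=2^{E(n)}$. A short Python sketch (names ours):

```python
def length(n: int) -> int:
    # |w(p, 0)| = 1 and |w(p-1, i+1)| = 2 |w(p, i)| + 1
    return 1 if n == 0 else 2 * length(n - 1) + 1

def exponent(n: int) -> int:
    # Conjugation by a_p^e squares a_{p-1}^e: E(0) = 1, E(i+1) = 2^{E(i)}
    return 1 if n == 0 else 2 ** exponent(n - 1)

assert [length(n) for n in range(5)] == [1, 3, 7, 15, 31]        # 2^{n+1} - 1
assert [exponent(n) for n in range(5)] == [1, 2, 4, 16, 65536]   # tau(n)
```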
For $i,j\in \mathbb{N}$ with $i \leq j$ we define the group $G_{i \lds j}$ by the generators $a_i, a_{i+1},$ $ \ldots, a_j \in \oneset{a_1 \lds a_q}$, and defining relations $a_p a_{p-1} a_p^{-1} = a_{p-1}^2$ for all $i<p\le j$. Note that each $G_{i} \simeq \mathbb{Z}$ is the infinite cyclic group. The group $G_{1 \cdots q}$ is not $H_q$ because the relation $a_1 a_{q} a_1^{-1} = a_{q}^2$ is missing, but $H_q$ is a (proper) quotient of $G_{1 \cdots q}$. The groups $G_{i, i+1}$ are, by the very definition, isomorphic to the Baumslag-Solitar group $\mathrm{\bf{BS}}(1,2)$, hence $G_{i, i+1}\simeq \mathbb{Z}[1/2] \rtimes \mathbb{Z}$. It is also clear that $G_{i \lds j+1}\simeq G_{i \lds j}\ast_{G_j}G_{j,j+1} $ for $j-i < q$. Thus, $G_{123} \simeq G_{12}\ast_{G_2}G_{2,3}$ and $G_{341} \simeq G_{34}\ast_{G_4}G_{41}$.
For simplicity we deal with $q=4$ only. The free group $F_{13}$, generated by $a_1$ and $a_3$, is a subgroup of $G_{123}$ as well as of $G_{341}$, see e.g. \cite{serre80}. Thus we can build the amalgamated product $G_{123}\ast_{F_{13}}G_{341}$ and a straightforward calculation shows $$H_4 \simeq G_{123}\ast_{F_{13}}G_{341}.$$ This isomorphism yields a direct proof that $H_4$ is an infinite group, see \cite{serre80}. In what follows we use well-known facts about amalgamated products, see \cite{LS01,serre80,ddm10}. The idea is to calculate an alternating sequence of group elements from $G_{123}$ and $G_{341}$. The sequence can be shortened only if some factor lies in the subgroup $F_{13}$. In this case we swap the factor from $G_{123}$ to $G_{341}$ and vice versa. By abuse of language we call this procedure again a \ei{Britton re\-duc\-tion\xspace.} (This is perhaps not standard terminology in combinatorial group theory, but it conveniently unifies the same phenomenon in amalgamated products and HNN-extensions; and the notion of Britton re\-duc\-tion\xspace generalizes nicely to fundamental groups of graphs of groups.) Elements in the groups $G_{i,i+1}$ are represented by triple markings $T=[U,X,K]$ in some power circuit\xspace{}. In order to remember that we evaluate $T$ in the group $G_{i, i+1}$, we give each $T$ a \ei{type} $(i,i+1)$, which is denoted as a subscript. For $\eps(T) = [u,x,k]$ we obtain: \begin{align*} \eps(T_{(i, i+1)}) &= a_{i+1}^x a_i^{u}a_{i+1}^{k} &\in G_{i, i+1}\\
&=a_i^{u2^x}a_{i+1}^{x+k} &\text{ if $u2^x \in \mathbb{Z}$} \end{align*}
The following basic operations are defined for all indices $i$, $i+1$, $i+2$ and $i\in \mathbb{Z}/4 \mathbb{Z}$, but for better readability we just use indices 1, 2, and 3. \begin{itemize} \item Multiplication: \begin{equation}\label{eqmul} [u,x,k]_{(1,2)} \cdot [v,y,\ell]_{(1,2)} = [u 2^{-y} + v2^{k}, x+y, k + \ell]_{(1,2)}. \end{equation} \item Swapping from $(1,2)$ to $(2,3)$: \begin{equation}\label{eqswap} [0,x,k]_{(1,2)} = [x+k,0,0]_{(2,3)}. \end{equation} \item Swapping from $(2,3)$ to $(1,2)$: \begin{equation}\label{eqpswap} [z,0,0]_{(2,3)} = [0,0,z]_{(1,2)} \text{ for } z \geq 0. \end{equation} \begin{equation}\label{eqnswap} [z,0,0]_{(2,3)} = [0,z,0]_{(1,2)} \text{ for } z < 0. \end{equation} \item Splitting: \begin{equation}\label{eq12split} [u,x,k]_{(1,2)} = [u 2^x,0,0]_{(1,2)}\cdot [0,x,k]_{(1,2)} \text{ for } u 2^x \in \mathbb{Z}. \end{equation} \begin{equation}\label{eq23split} [u,x,k]_{(2,3)} = [0,x,k]_{(2,3)}\cdot [u 2^{-k},0,0]_{(2,3)} \text{ for } u 2^{-k} \in \mathbb{Z}. \end{equation}
\end{itemize}
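As a sanity check (ours, not part of the paper's algorithms), the identities above can be verified by evaluating triples in the model $\mathrm{\bf{BS}}(1,2)\simeq \mathbb{Z}[1/2]\rtimes\mathbb{Z}$ over exact rationals. All names in the following Python sketch are ours:

```python
from fractions import Fraction

# Elements of Z[1/2] x| Z are pairs (r, m) with (r, m)(s, n) = (r + 2^m s, m + n);
# then a_{i+1} a_i a_{i+1}^{-1} = a_i^2 holds for a_i = (1, 0), a_{i+1} = (0, 1).
def mul(p, q):
    (r, m), (s, n) = p, q
    return (r + Fraction(2) ** m * s, m + n)

# eps([u, x, k]) = a_{i+1}^x a_i^u a_{i+1}^k evaluated in the model.
def ev(u, x, k):
    return mul(mul((Fraction(0), x), (Fraction(u), 0)), (Fraction(0), k))
```

For example, \eqref{eqmul} and \eqref{eq12split} become checkable equalities of pairs.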
From now on we work with a single power circuit\xspace $\Pi$ together with a sequence $T_j$ ($j \in J$) of triple markings of various types. This is given as a tuple $\mathcal{T} = (\GG, \delta; (T_j)_{j \in J})$. We allow splitting operations only in combination with a multiplication, thus we never increase the number of triple marking\xspace{}s inside $\mathcal{T}$. A tuple $\mathcal{T} = (\GG, \delta; (T_j)_{j \in J})$, where $(\GG,\delta)$ is in tree representation is called a \ei{main data structure\xspace}. We keep $\mathcal{T}$ as a main data structure\xspace by doing addition and multiplication by powers of $2$ using clones and calling \proc{ExtendTree}\xspace on these after each basic operation.
\begin{definition}\label{def:egin} The \ei{weight} $\omega(T)$ of a triple marking\xspace $T = [U,X,K]$ is defined as $$\omega(T) = \abs{\sigma(U)} + \abs{\sigma(X)} + \abs{\sigma(K)}.$$
The \ei{weight} $\omega(\mathcal{T})$ of a main data structure\xspace $\mathcal{T}$ is defined as $\omega(\mathcal{T})= \sum_{j \in J} \omega({T_j}).$
Its \ei{size}\footnote{ The definition is justified, since
we ensure $\abs{J} + \omega(\mathcal{T})\leq \Abs{\mathcal{T}}$ whenever arguing about $\Abs{\mathcal{T}}$.} $\Abs{\mathcal{T}}$ is defined by $\Abs{\mathcal{T}} = \abs \Gamma$. \end{definition}
\begin{proposition}\label{gunnar} Let $\mathcal{T} = (\GG, \delta; (T_j)_{j \in J})$ be a main data structure\xspace of size at most $m$, weight at most $w$ (and with $\abs{J} +w\leq m$). The following assertions hold. \begin{enumerate}[i)] \item No basic operation increases the weight of $\mathcal{T}$. \item Each basic operation increases the size $\Abs{\mathcal{T}}$ by $\mathcal{O}(w)$. \item Each basic operation takes amortized time $\mathcal{O}(mw)$. \item\label{gunnar:total} A sequence of $s$ basic operations takes time $\mathcal{O}(smw+m^2)$ and the size of $\mathcal{T}$ remains bounded by $\mathcal{O}(m+sw)$. \end{enumerate} \end{proposition}
\begin{proof} Applying a basic operation means replacing the left-hand side of the equation by the right-hand side, thus forgetting any markings of the replaced triple(s). We can do the necessary tests, because we have a tree representation. For an operation we clone the involved markings, but this does not increase the weight. Note that there is time enough to create the clones with all their outgoing arcs. This yields the increase in the size by $\mathcal{O}(w)$. With the new clones we can perform the operations by using the algorithms described in section \ref{PCs} on the graph representation of the circuit. We regain the main data structure\xspace by calling \proc{ExtendTree}\xspace which integrates the modified clones into the tree representation.
In order to get \ref{gunnar:total}) we observe that the initial number of maximal chains is $m$ and there are at most $\mathcal{O}(w)$ new ones created in each basic operation. Hence the total increase in size is $\mathcal{O}(sw)$ and the difference in potential is at most $m(m+sw)$. The time bound follows. \qed\end{proof}
\subsection{Solving the Word Problem\xspace in Higman's group}\label{wph4}
\begin{theorem}\label{wph} The Word Problem\xspace of $H_4$ can be solved in time $\mathcal{O}(n^6)$. \end{theorem}
The rest of this section is devoted to the proof of \refthm{wph}. For solving the word problem in the Higman group $H_4$ the traditional input is a word over generators $a_{p}^{\pm 1}$. We solve a slightly more general problem by assuming that the input consists of a single power circuit\xspace $\Pi = (\Gamma, \delta)$ together with a sequence of $s$ triple markings of various types. Each triple marking\xspace $[U,X,K]_{(p,p+1)}$ corresponds to $a_{p+1}^{\eps(X)}a_{p}^{\eps(U)}a_{p+1}^{\eps(K)} \in H_4$.
Let us fix $w$ to be the total weight of $\mathcal{T}=(\GG, \delta; (T_j)_{1 \leq j \leq s})$. For simplicity we assume $s \leq w$ and that $w$ and the sizes of clones are bounded by $\Abs{\mathcal{T}}=\abs{\GG}$. (This is actually not necessary, but it simplifies some bookkeeping.) Having $s \leq w \leq n \in \mathcal{O}(w)$, we can think of $n = \abs\GG$ as our input size. We transform the input $\mathcal{T} = (\GG, \delta; (T_j)_{1 \leq j \leq s})$ into a main data structure\xspace by a call of \proc{MakeTree}\xspace.
During the procedure $\abs\GG$ increases, but the number of triple marking\xspace{}s remains bounded by $s$ and the weight remains bounded by $w$.
In order to achieve our main result we show how to solve the word problem with $\mathcal{O}(s^2)$ basic operations on the main data structure\xspace $\mathcal{T}$. Assume we have shown this. Then, by \refprop{gunnar}, the final size will be bounded by $m \in \mathcal{O}(s^2w)$; and the time for all basic operations is therefore $\mathcal{O}(s^4w^2)\subseteq\mathcal{O}(n^6)$.
We collect sequences of triple marking\xspace{}s of type $(1,2)$ and $(2,3)$ in \ei{intervals} $\mathcal{L}$, which in turn receive type $(1,2,3)$; and we collect triple marking\xspace{}s of type $(3,4)$ and $(4,1)$ in intervals of type $(3,4,1)$. Each interval has (as a sequence of triple marking\xspace{}s) a semantics $\varepsilon(\mathcal{L})$ which is a group element either in $G_{123}$ or in $G_{341}$ depending on the type of $\mathcal{L}$. Thus, it makes sense to ask whether $\varepsilon(\mathcal{L}) \in F_{13}$. These tests are crucial and dominate the runtime of the algorithm.
Now the sequence $(T_j)_{1 \leq j \leq s}$ of triple markings appears as a sequence of intervals: $$(\mathcal{L}_1 \lds \mathcal{L}_{f}; \mathcal{L}_{f+1} \lds \mathcal{L}_t).$$ We introduce a separator ";" dividing the list in two parts.
The following invariants are kept up: \begin{enumerate}[i)] \item All $\mathcal{L}_1 \lds \mathcal{L}_{f}$ satisfy $\varepsilon(\mathcal{L}_i)\notin F_{13}$. In particular, these intervals are not empty and they represent non-trivial group elements in $(G_{123}\cup G_{341}) \setminus F_{13}$. \item The types of intervals left of the separator are alternating. \end{enumerate}
In the beginning each interval consists of exactly one triple marking\xspace, thus $f=0$ and $t=s$. The algorithm will stop either with $1 \leq f = t$ or with $f=0$ and $t=1$.
Now we describe how to move forward: Assume first $f=0$. (Thus, $t>1$.) If $\varepsilon(\mathcal{L}_{1}) \notin F_{13}$, then move the separator to the right, i.e. we obtain $f=1$. If $\varepsilon(\mathcal{L}_{1}) \in F_{13}$, then, after possibly swapping $\mathcal{L}_{1}$, we join the intervals $\mathcal{L}_{1}$ and $\mathcal{L}_{2}$ into one new interval. In this case we still have $f=0$, but $t$ decreases by 1.
From now on we may assume that $0<f < t$. If $\mathcal{L}_{f}$ and $\mathcal{L}_{f+1}$ have the same type, then append $\mathcal{L}_{f+1}$ to $\mathcal{L}_{f}$, and move the separator to the left of $\mathcal{L}_f$. Thus, the values $f$ and $t$ decrease by 1.
If $\mathcal{L}_{f}$ and $\mathcal{L}_{f+1}$ have different types, then we test whether or not $\varepsilon(\mathcal{L}_{f+1}) \in F_{13}$. If $\varepsilon(\mathcal{L}_{f+1}) \notin F_{13}$, then move the separator to the right, i.e. $t-f$ decreases by $1$. If $\varepsilon(\mathcal{L}_{f+1}) \in F_{13}$, then we swap $\mathcal{L}_{f+1}$ and join the intervals $\mathcal{L}_{f}$ and $\mathcal{L}_{f+1}$ into one new interval. Since we do not know whether the new interval belongs to $F_{13}$, we put the separator in front of it, decreasing both $f$ and $t$ by 1.
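The moves above are pure control flow. The following Python sketch (ours; a toy model, not the paper's data structures) runs the separator loop on the amalgam $\mathbb{Z}\ast\mathbb{Z}$ over the trivial subgroup: an interval is a pair (type, value), the test "$\varepsilon(\mathcal{L})\in F_{13}$?" degenerates to value $=0$, swapping is a no-op on the trivial element, and joining adds values within a factor:

```python
# Separator loop on a toy amalgam Z * Z (amalgamated over the trivial
# subgroup).  An interval is (type, value); membership in the amalgamated
# subgroup degenerates to value == 0, and joining two intervals of equal
# type adds their values.
def reduce_word(intervals):
    f, L = 0, list(intervals)
    while True:
        t = len(L)
        if f == t or (f == 0 and t == 1):
            return f, L                      # terminal configurations
        if f == 0:                           # here t > 1
            if L[0][1] != 0:
                f = 1                        # move separator right
            else:
                L[0:2] = [L[1]]              # swap trivial L[0], join with L[1]
            continue
        (ty_f, v_f), (ty_n, v_n) = L[f - 1], L[f]
        if ty_f == ty_n:                     # same type: append and re-test
            L[f - 1:f + 1] = [(ty_f, v_f + v_n)]
            f -= 1
        elif v_n != 0:                       # not in the subgroup
            f += 1                           # move separator right
        else:                                # swap trivial element, join, re-test
            L[f - 1:f + 1] = [(ty_f, v_f)]
            f -= 1
```

The loop terminates either with $1\le f=t$ (a non-trivial Britton-reduced sequence) or with $f=0$, $t=1$, in which case one final triviality test on the remaining interval decides the word problem, exactly as described above.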
We have to give an interpretation of the output of this algorithm. Consider the case that we terminate with $1 \leq f = t$. Then $\varepsilon(\mathcal{L}_1) \cdots \varepsilon( \mathcal{L}_{t}) \in H_4$ is a Britton-re\-du\-ced\xspace sequence in the amalgamated product. It represents a non-trivial group element, because $t\geq 1$.
In the other case we terminate with $f=0$ and $t=1$. We will make sure that the test "$\varepsilon(\mathcal{L}) \in F_{13}$?" can as a by-product also answer the question whether or not $\varepsilon(\mathcal{L})$ is the trivial group element. If we do so, one more test on $\mathcal{L}_1$ yields the answer we need.
Now, we analyze the time complexity. Termination is clear once we have explained how to implement a test "$\varepsilon(\mathcal{L}) \in F_{13}$?". Actually, it is obvious that the number of these tests is bounded by $2s$. Thus, it is enough to prove the following claim.
\begin{lemma}\label{lem:test13} Every test "$\varepsilon(\mathcal{L}) \in F_{13}$?" can be realized with $\mathcal{O}(s)$ basic operations in the main data structure\xspace $\mathcal{T}$. The test yields either "no" or it says "yes" with the additional information whether or not $\varepsilon(\mathcal{L})$ is the trivial group element. Moreover, in the "yes" case we can also swap the type of $\mathcal{L}$ within the same bound on basic operations. \end{lemma}
\begin{proof} Let us assume that $\mathcal{L}$ is of type $(1,2,3)$, i.e., it contains only triples of types $(1,2)$ and $(2,3)$. Let $s$ be the length of $\mathcal{L}$.
The group $G_{123}$ is an amalgamated product where $F_{13}$ is a free subgroup of rank 2, see \cite{serre80} for a proof. In a first round we create a sequence of triple marking\xspace{}s $$(T_1 \lds T_t)$$ with $t \leq s$ such that for $1 \leq i < t$ the type of $T_i$ is $(1,2)$ if and only if\xspace the type of $T_{i+1}$ is $(2,3)$. We can do so by $s-t$ basic multiplications from left to right without changing the semantics of $g = \varepsilon(T_1) \cdots \varepsilon(T_t)\in G_{123}$.
Next, we make this sequence Britton-re\-du\-ced\xspace. Again, we scan from left to right. If we are at $T=T_i$ with value $[u,x,k]$ we have to check that either $[u,x,k]_{(1,2)} = (0,z)\in \sdpz$ or $[u,x,k]_{(2,3)} = (z,0)\in \sdpz$ for some integer $z\in \mathbb{Z}$.
For the type $(1,2)$ we have $[u,x,k]_{(1,2)}= (0,z)$ if and only if\xspace $u=0$, which in a tree representation means that the support of the marking for $u$ is empty. Hence this test is trivial. If the test is positive, we can replace $[u,x,k]_{(1,2)}$ by $[0,x,k]_{(1,2)}$ and we perform a swap to type $(2,3)$. If $t>1$ we can recursively perform multiplications with its neighbors, thereby decreasing the value $t$.
For the type $(2,3)$ we have $[u,x,k]_{(2,3)}= (z,0)$ if and only if\xspace both $k+x=0$ and $u2^x \in \mathbb{Z}$. These tests are possible in linear time and, if successful, we continue as in the preceding case.
The final steps are more subtle. Let $\eps(T_j) = g_j \in G_{12} \cup G_{23}$. Recall that $(g_1 \lds g_t)$ is already a Britton-re\-du\-ced\xspace sequence. We have $g_1 \cdots g_t \in F_{13}$ if and only if\xspace there is a sequence $(h_0, h_1 \lds h_t)$ with the following properties: \begin{enumerate}[i)] \item $h_0 = h_t = 1$ and $h_j \in G_2$ for all $0\leq j \leq t$. \item $h_{j-1} g_j = g'_j h_{j}$ with $g'_j \in G_{1} \cup G_{3}$ for all $1\leq j \leq t $. \end{enumerate}
Assume that such a sequence $(h_0, h_1 \lds h_t)$ exists. Then we have $g'_j \in G_1$ if and only if\xspace $g_j \in G_{12}$. Moreover, whenever $gh= g'h'\in G_{123}$ with $g,g' \in G_1 \cup G_3$ and $h, h' \in G_2$, then $g = g'$ and $h=h'$. This follows because $ g'^{-1}g =h'h^{-1} \in F_{13}\cap G_2 = \oneset{1}$. Thus, the product $h_{j-1} g_j$ uniquely defines $g'_j \in G_{1} \cup G_{3}$ and $ h_{j}\in G_2$, because $h_0 = 1$ is fixed.
The invariant during a computation from left to right is that $\varepsilon(T_j) = h_{j-1} g_j$. We obtain $\varepsilon(T_j) = g_{j}' h_j$ by a basic splitting. If no splitting is possible we know that $g \notin F_{13}$ and we can stop. If however a splitting is possible, then we have two cases. If $j$ is the last index ($j=t$), then, in addition, we must have $h_j=1$. We can test this. If the test fails, we stop with $g\notin F_{13}$. If we are not at the last index we perform a swap. We split, then swap the right hand factor and multiply it with the next triple marking\xspace, which has the correct type to do so. As our sequence has been Britton-re\-du\-ced\xspace the total number of triple marking\xspace{}s remains constant. There can be no cancelations at this point. Thus, the test gives us the answer to "$\varepsilon(\mathcal{L})\in F_{13}$?" using $\mathcal{O}(s)$ basic operations. In the case $\varepsilon(\mathcal{L}) \in F_{13}$ we still need to know whether $\varepsilon(\mathcal{L}) = 1 \in G_{123}$. For $t> 1$ the answer is "no". It remains to deal with $t=1$. But a test whether $\varepsilon([u,x,k])=1 $ just means to test both $u=0$ and $x+k=0$.
Now, assume we obtain a "yes" answer and we know $\varepsilon(\mathcal{L})\in F_{13}$. We do the swapping of types from left to right by using only the left factor in a splitting. These are at most $s$ additional basic operations, hence the total number of basic operations remains in $\mathcal{O}(s)$. \qed \end{proof}
\section{Conclusion and future research}\label{conclusion} The Word Problem\xspace is a fundamental problem in algorithmic group theory. In some sense "almost all" finitely presented groups are hyperbolic and satisfy a "small cancelation" property, so the Word Problem\xspace is solvable in linear time! For hyperbolic groups there are also efficient parallel algorithms and the Word Problem\xspace is in $\mathbf{NC^2}$, see \cite{cai92stoc}. On the other hand, for many naturally defined groups little is known. Among one-relator groups the Baums\-lag\xspace group $G_{(1,2)}$ was supposed to have the hardest Word Problem\xspace. But we have seen that it can be solved in cubic time. The method generalizes to the higher Baums\-lag\xspace groups $G_{(m,n)}$ in case that $m$ divides $n$, but this requires more "power circuit\xspace machinery" and has not been worked out in full detail yet, see \cite{muw11bg}. The situation for $G_{(2,3)}$ is open and related to questions in number theory. The Higman groups $H_q$ belong to another family of naturally appearing groups where the Word Problem\xspace was expected to be non-polynomial. We have seen that the Word Problem\xspace in $H_4$ is in $\mathcal{O}(n^6)$. It is easy to see that our methods show that the Word Problem\xspace in $H_q$ is always in {\bf P}, but to date the exact time complexity has not been analyzed for $q>4$.
Baums\-lag\xspace and Higman groups are built up via simple HNN extensions and amalgamated products. Many algorithmic problems are open for such constructions; for advances on the theories of HNN-extensions and amalgamated products we refer to \cite{LohSen06}.
Another interesting open problem concerns the Word Problem\xspace in \ei{Hydra} groups. Doubled hydra groups have Ackermannian Dehn functions \cite{Riley2010arXiv1002.1945D}, but still it is possible that their Word Problem\xspace is solvable in polynomial time.
\addcontentsline{toc}{section}{Bibliography}
\newcommand{\Ju}{Ju}\newcommand{\Ph}{Ph}\newcommand{\Th}{Th}\newcommand{\Ch}{Ch}\newcommand{\Yu}{Yu}\newcommand{\Zh}{Zh}
\section*{Appendix}\label{app}
\subsection*{Reduction of power circuit\xspace{}s}
In this section we give a full proof of \refthm{extendtree}. We start with the observation that due to compactness of the markings in a tree representation, the leaves are automatically ordered by $\eps$-value. The leftmost has the smallest and the rightmost has the largest $\eps$-value. This easy calculation (essentially an argument about binary sums) is left to the reader.
Next, we establish some operations on tree representations.
\begin{lemma}{(Insertion of a new node)}\label{lem:insertnode} Let $\Pi=(\GG, \delta)$ be a power circuit\xspace in tree representation and $M$ a marking in $\Pi$ given as a leaf in the tree. Then we can in amortized time $\mathcal{O}(\abs{\GG})$ insert a new node $P$ into the circuit with $\LL_P=M$. \end{lemma}
\begin{proof} If a node $P$ with $\eps(P)=2^{\eps(M)}$ already exists, we abort immediately. Otherwise the index of the leaf corresponding to the (compact) marking $M$ inside the list of leaves tells us the position of $P$ in the sorting of $\GG$. (Actually, we have to count the number of leaves of type $\LL_Q$ ($Q\in\GG$) that are left of $M$.)
Next, we need to update the bit vector $b$. This is achieved by the procedure described in \refprop{testintree}, which tells us whether $\eps(M)+1=\eps(\LL_Q)$ where $Q$ is the node succeeding $P$ in the ordering of $\GG$.
Finally, we have to "stretch" the tree defined by $\DD$ by inserting a new level corresponding to the new node $P$. All the edges on that level have to be labeled by "0", as no marking uses the newly created node yet. This can be done in linear time using the lists of nodes we keep for each level.
Note that this may increase the potential by $\abs{\GG}$, since both $\abs{\GG}$ and the number of chains might grow by one. This adds $\abs{\GG}$ to the amortized time, which is captured by the $\mathcal{O}$-notation. \qed\end{proof}
\begin{lemma}{(Incrementation of a marking)}\label{lem:incmarking} Given a marking $M$ in a tree representation of $\Pi=(\GG,\delta)$ one can generate in $\mathcal{O}(\abs{\sigma(M)})$ time a marking $M'$ with $\eps(M')=\eps(M)+1$. \end{lemma}
\begin{proof} If $\GG$ is empty, the claim is obvious. Otherwise let $P$ be the unique node in $\GG$ with $\eps(P)=1$. If $M(P)\neq +1$, we increment $M(P)$ by one and are done. Otherwise, we look for a node $P'$ with $\eps(P')=2$. If it doesn't exist (in which case $\GG=\{P\}$ and $\eps(M)=1$), we create it and put $M(P)=0$ and $M(P')=+1$. If $P'$ does exist, then $M(P')=0$ due to compactness. Again, put $M(P)=0$ and $M(P')=+1$.
Note that in the last case the newly created marking is not necessarily compact and therefore cannot be inserted immediately into the tree representation as a leaf. We will deal with this in the next lemma. \qed\end{proof}
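As a toy illustration of the lemma (our sketch; the actual data structure works on nodes of a circuit and never writes out these digit strings explicitly), view a marking on a full chain as a list of digits in $\{-1,0,+1\}$, where index $i$ carries the coefficient of $2^i$:

```python
# digits[i] in {-1, 0, +1} is the coefficient of 2^i; compactness of the
# input guarantees digits[1] == 0 whenever digits[0] == +1, so at most one
# carry occurs.  The result has value + 1 but need not be compact.
def increment(digits):
    d = list(digits) + [0, 0]     # room for the nodes P and P' if missing
    if d[0] != 1:
        d[0] += 1                 # just raise the digit at value 2^0 = 1
    else:
        d[0], d[1] = 0, 1         # carry: d[1] == 0 by compactness
    return d

def value(digits):
    return sum(c * 2 ** i for i, c in enumerate(digits))
```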
\begin{lemma}{(Making a marking compact)}\label{lem:compactify} Let $\Pi=(\GG,\delta)$ be a power circuit\xspace in tree representation and $M$ be a marking in $\Pi$ (not yet a leaf and in particular not compact; e.g. given as a list of signed pointers to nodes). Assume that for each node $P\in\sigma(M)$ the last node $T$ in the longest chain starting in $P$ is not marked by $M$. Then $M$ can be made compact in time $\mathcal{O}(\abs{\sigma(M)})$ (and after that integrated into $\Pi$ as a leaf in time $\mathcal{O}(\abs{\GG})$) without changing the circuit and without increasing the weight of $M$. \end{lemma}
\begin{proof} We look at the nodes $P\in\sigma(M)$ in ascending order (w.r.t. their value). There are essentially two ways for $M$ not to be compact at a point $P$: Assume that $M(P)=+1$ ($-1$ is similar). \begin{enumerate}[1.)] \item $P$ is the first node in a chain of length $2$ which $M$ labels $(+1,-1)$. Replace it by $(-1,0)$. \item $P$ is the first node of a chain $(P=P_1,P_2\lds P_k,P_{k+1})$ labeled $M(P_i)=+1$ ($1\le i\le k$) and $M(P_{k+1})\neq +1$. Note that by assumption $P_{k+1}$ exists. Replace the labels by $(-1,0\lds 0,+1)$. We might need to repeat this (if there is a node $P_{k+2}$ with $\eps(P_{k+2})=2\eps(P_{k+1})$ and $M(P_{k+2})=+1$) but this ultimately stops at $T$. \end{enumerate} \qed\end{proof}
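In digit terms the two rewriting rules drive the representation towards a non-adjacent form (no two neighbouring non-zero digits). For illustration (ours, on small integers only; the lemma itself works by local rewriting and never evaluates the possibly tower-sized value), the same normal form is produced by the classic NAF recoding:

```python
# A compact marking is essentially a non-adjacent form: no two
# neighbouring non-zero digits.  We recompact via standard NAF recoding.
def value(digits):
    return sum(c * 2 ** i for i, c in enumerate(digits))

def compactify(digits):
    n, out = value(digits), []
    while n != 0:
        if n % 2 == 0:
            out.append(0)
        else:
            r = 2 - (n % 4)        # r in {-1, +1}: chosen so that after the
            out.append(r)          # subtraction the next digit becomes 0
            n -= r
        n //= 2
    return out
```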
Often we need to increment a leaf marking by one and make it compact. In order to have the necessary nodes, we introduce the following concept:
\begin{definition}\label{def:joker} Let $\Pi= (\GG, \delta)$ be a power circuit in tree representation. A node $J$ is called a \ei{joker}, if it is the last in a maximal chain starting at the unique node with value $1$ and $J$ is not used in any leaf marking. \end{definition}
\begin{lemma}\label{lem:createjoker} Let $\Pi=(\GG,\delta)$ be a power circuit in tree representation. Then we can in amortized time $\mathcal{O}(\abs{\GG})$ insert a joker into the circuit. \end{lemma}
\begin{proof} Start at the node $P_0$ with $\eps(P_0)=1$. Using the bit vector, find the first "gap" in the chain starting at $P_0$, i.e., the largest $n$ such that the nodes $P_0\lds P_{n-1}$ with $\eps(P_i)=2^i$ exist. Keep the number $n$ in binary notation. Like in \reflem{lem:compactify} compute a compact representation of $n$. Note that we are dealing with ordinary numbers here, not circuits! Use the compact representation of $n$ to create $P_n$ with $\eps(P_n)=2^n$. Check whether there is a node $P_{n+1}$ and adjust the bit vector. If yes, $P_n$ linked two maximal chains, so in amortized analysis we don't have to account for the $\mathcal{O}(\abs{\GG})$ time used so far. Repeat the process until we create a node that is the end of a maximal chain. This is the joker. \qed\end{proof}
Now we are ready to prove \refthm{extendtree}.
\begin{proofof}{\refthm{extendtree}}{ Let $n=\abs{\GG}+\abs{U}$. We may assume $n\ge 1$.
Perform a topological sorting of $U$, i.e. find an enumeration $U =\oneset{Q(0)\lds Q(\abs{U}-1)}$ such that there are no arcs from any $Q(i)$ to $Q(j)$ when $i\le j$. Since $\Pi$ is a dag, a topological ordering of $U$ exists and it can be found in time $\mathcal{O}(n\abs{U})$, see e.g. \cite{CLRS09}. The nodes of $U$ will be moved to $\GG$ in ascending topological order, so that all the time $\GG$ remains a circuit, i.e., there are no arcs from $\GG$ to $U$.
Let $M$ be any one of the markings $M_j$ ($j=1\lds m$) or $\LL_P$ ($P\in U$). While the nodes of $U$ are being moved to $\GG$, there may be times when the support of $M$ is partly in $\GG$ and partly still in $U$. Later it will be completely contained in $\GG$. We will maintain the following invariants: \begin{enumerate}[i)] \item Any marking whose support is completely contained in $\GG$ is represented as a leaf (and thus compact). \item\label{et:topinv} For all other markings $M$ and all nodes $P\in\sigma(M)\cap\GG$ there is a chain starting at $P$ and ending at a node $T\not\in\sigma(M)$ such that there is no node with double the value of $\eps(T)$ (i.e. the chain cannot be prolonged at the top end). \end{enumerate}
We now describe how to do the moving of the topologically smallest node $Q$ of $U$.
We have $\sigma(\LL_Q)\subseteq\GG$, so by the invariant, $\LL_Q$ is represented as a leaf. Hence it is possible to test whether $\eps(\LL_Q)<0$ using \refprop{testintree}. If this is the case, $\Pi$ is not a power circuit\xspace and we stop with the output "no". From now on we assume $\eps(Q)\ge 1$.
\begin{enumerate}[1.)] \item\emph{Insert a new joker}\\ See \reflem{lem:createjoker}. Now we don't have to worry about incrementing compact markings anymore. \item\emph{Find a replacement $P$ for $Q$ in $\GG$}\\ Check whether there is a node $P\in\GG$ with value $\eps(P)=\eps(Q)$. If not, use \reflem{lem:insertnode} to create it taking the marking $\LL_Q$ as $\LL_{P}$. This takes amortized time $\mathcal{O}(\abs{\GG})$. \item\emph{Adapt markings using $Q$:}\\ Using the bit vector, go up the chain in $\GG$ starting at $P$. Prolong the chain at the top by creating a new node $P'$ (use \reflem{lem:incmarking} on the successor marking of the last node of the chain, insert it into the tree via \reflem{lem:compactify}, and use it as a successor marking for creating the new node $P'$ with \reflem{lem:insertnode}). The time needed is $\mathcal{O}(\abs{\GG})$. \\ Go through all markings $M$ ($\LL_{Q'}$ for $Q'\in U$ and $M_j$ for $j=1\lds m$) that have $Q\in\sigma(M)$. Replace $Q$ by $P$ in $M$. If this leads to a double marking of $P$ by $M$, replace them by the next node in the chain. Again, that node might become doubly marked by $M$, so repeat this. This stops at the latest at $P'$ which is new and thus unmarked by $M$. For each of these steps, the support of $M$ decreases by one, so the total time (for all $Q\in U$) is bounded by the sum of the sizes of all supports, i.e., $n\cdot\abs{U}$ for successor markings and $\abs{\sigma(M_1)}+\cdots+\abs{\sigma(M_m)}$ for the markings $M_j$ ($j=1\lds m$). Now $Q$ is not part of any marking anymore and can be deleted. \item\emph{Make markings compact:}\\ If $Q$ was the last node of a marking $M$ to be moved from $U$ to $\GG$, we have to make $M$ compact and create a leaf in the tree. This is done by using \reflem{lem:compactify}. Note that we have the invariant \ref{et:topinv}). 
We need time $\mathcal{O}(\abs{\sigma(M)}+n)$, which over the whole procedure sums up to $\mathcal{O}(n\cdot\abs{U})$ for successor markings and $\mathcal{O}(\abs{\sigma(M_1)}+\cdots+\abs{\sigma(M_m)}+m\cdot n)$ for the markings $M_j$ ($j=1\lds m$). \item\emph{Make room for later compactification:}\\ Start at $P'$ and create a new node $T$ that has value $\eps(T)=2\eps(P')$. Check whether there is a node with value $2\eps(T)$. If yes, the creation of $T$ has linked two maximal chains, thus decreasing the potential by $\abs{\GG}$. This pays for the $\mathcal{O}(\abs{\GG})$ time needed for creating $T$ and the check. Repeat this until we create a $T$ that has no node with double the value of $T$. Note that this takes only amortized time $\mathcal{O}(\abs{\GG})$. \end{enumerate} }\end{proofof}
\subsection*{A more detailed look at power circuit\xspace{}s}
\begin{lemma}\label{lem:integervalues} The following assertions are equivalent: \begin{enumerate}[1.)] \item $\eps(P)\in 2^{\mathbb{N}}$ for all nodes $P$, \item $\eps(\LL_P)\geq 0$ for all nodes $P$, \item $\eps(M)\in \mathbb{Z}$ for all markings $M$. \end{enumerate} \end{lemma}
\begin{proof} Choose some node $P$ without incoming arcs which exists because $(\GG,\delta)$ defines a dag. The assertions are equivalent on $\GG\setminus\oneset{P}$ by induction. The result now follows easily. \qed\end{proof}
\begin{proofof}{\refcor{PCtest}}{ The procedure \proc{MakeTree}\xspace uses \proc{ExtendTree}\xspace which detects if $\eps(P)\notin 2^\mathbb{N}$. The result follows from \reflem{lem:integervalues}. }\end{proofof}
\subsection*{The Word Problem\xspace of the algebra $\sdpz$ with swapping}
\begin{proofof}{\refthm{wpsdpz}}{ The proof is almost the same as the proof of \refthm{wpbg} which has been given above. Hence we focus on the differences in the proof. The input is given by a sequence of pairwise disjoint power circuit\xspace{}s each of them with a triple marking $[U,X,K]$ representing an element in $\sdpz$. We use only the invariant that $U,X,K$ have pairwise disjoint supports. Swapping of $[u,x,k]$ is possible if $z= u2^x \in \mathbb{Z}$, and the result is either $[x+k,z,0]$ or $[x+k,0,z]$ depending on the sign of $z$. In order to realize a marking for $z$ we clone $U$. This increases the size by $\abs{\sigma(U)}$. At the end the size of the power circuit\xspace is quadratic in the input. This yields $\mathcal{O}(n^4)$ time. }\end{proofof}
\subsection*{Dehn functions}\label{sec:dehn} No result about Dehn functions is used in our paper. However, for convenience of the interested reader we recall the definition of a Dehn function as given by Wikipedia. Let $G$ be given by a finite generating set $X$ with a finite defining set of relations $R$. Let $F(X)$ be the free group with basis $X$ and let $w \in F(X)$ be a relation in $G$, that is, a freely-reduced word such that $w = 1$ in $G$. Note that this is equivalent to saying that $w$ belongs to the normal closure of $R$ in $F(X)$. Hence we can write $w$ as a product in $F(X)$ of $m$ conjugates $xr x^{-1}$ with $r \in R^{\pm 1}$ and $x \in F(X)$. The \ei{area} of $w$, denoted $\mathrm{Area}(w)$, is the smallest $m\geq 0$ such that there exists such a representation for $w$ as the product in $F(X)$ of $m$ conjugates of elements of $R^{\pm 1}$. Then the Dehn function of a finite presentation $G = \gen{X\mid R}$ is defined as $$\mathrm{Dehn}(n) = \max\set{\mathrm{Area}(w)}{ w= 1 \in G, \; \abs w \leq n, \text{ and $w$ is freely-reduced}}.$$ The Dehn functions arising from two different finite presentations of the same group are equivalent with respect to \ei{domination}. Consequently, for a finitely presented group the growth type of its Dehn function does not depend on the choice of a finite presentation for that group.
A function $f : \mathbb{N} \to \mathbb{N}$ is dominated by $g: \mathbb{N} \to \mathbb{N}$, if there exists $c \geq 1$ such that $f(n) \leq c g(cn +c ) + cn + c$ for all $n \in \mathbb{N}$.
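For instance (a standard example, not specific to this paper): in $\mathbb{Z}^2 = \gen{a,b \mid aba^{-1}b^{-1}}$ the words $w_n = a^n b^n a^{-n} b^{-n}$ are relations of length $4n$ with $\mathrm{Area}(w_n) = n^2$; this witnesses the well-known fact that the Dehn function of $\mathbb{Z}^2$ is equivalent to $n \mapsto n^2$.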
\end{document} | arXiv |
Yong-Li Xu, Yin-Lu Han, Hai-Ying Liang, Zhen-Dong Wu, Hai-Rui Guo and Chong-Hai Cai. Applicability of 9Be global optical potential to description of 8,10,11B elastic scattering[J]. Chinese Physics C, 2020, 44(3): 034101. doi: 10.1088/1674-1137/44/3/034101
Yong-Li Xu 1, Yin-Lu Han 2, Hai-Ying Liang 2, Zhen-Dong Wu 2, Hai-Rui Guo 3, Chong-Hai Cai 4
Corresponding author: Yong-Li Xu, [email protected]
Corresponding author: Yin-Lu Han, [email protected]
1. College of Physics and Electronic Science, Shanxi Datong University, Datong 037009, China
2. Key Laboratory of Nuclear Data, China Institute of Atomic Energy, Beijing 102413, China
3. Institute of Applied Physics and Computational Mathematics, Beijing 100094, China
4. Department of Physics, Nankai University, Tianjin 300071, China
1. Introduction
In the last few years, reactions involving the 8,10,11B isotopes have attracted increasing experimental and theoretical attention. The optical model potential (OMP) plays an important role in the investigation of these reactions. Theoretical studies have already been performed on this subject using both phenomenological and microscopic approaches. In this study, the phenomenological OMP is discussed with the aim of describing elastic scattering. Since a global phenomenological OMP is obtained by fitting large quantities of experimental data in a certain range of energy and mass, the basic elastic scattering observables can be reliably predicted with it in regions where no experimental data exist [1]. Thus far, the experimental data on elastic scattering involving 8,10,11B projectiles are relatively scarce, because the radioactive beams are not produced at sufficiently high intensities [2]. Therefore, it is difficult to obtain a reliable global OMP for these projectiles on the basis of the existing experimental data alone.
In our previous work, the elastic scattering observables for the 8,10,11B isotopes were predicted using the global phenomenological OMP of the 7Li projectile. Reasonable agreement was obtained between the predictions and the corresponding experimental data for 8,10B. However, the 7Li global OMP could not describe the backward-angle region well for 11B, and the radius parameter of the real part potential was adjusted to improve the fit for 11B on the basis of the 7Li global OMP [3]. Recently, a global phenomenological OMP of 9Be was obtained by simultaneously fitting the experimental data of elastic-scattering angular distributions and total reaction cross-sections below 200 MeV for target masses from 24 to 209 [4]. Moreover, the stable weakly bound projectile 9Be is adjacent to the 8,10,11B isotopes. Within this context, we apply the obtained 9Be global phenomenological OMP to a systematic study of the elastic scattering of 8,10,11B isotopes impinging on different targets, which allows further study of the reaction and structure properties of the 8,10,11B projectiles.
This paper is organized as follows. In Sec. 2, the phenomenological OMP formula and the methods used in this work are described for the elastic scattering of 8,10,11B projectiles. The elastic scattering observables for reactions induced by 8,10,11B are predicted using the 9Be global OMP and discussed by comparison with the existing experimental data. Finally, the main conclusions of this work are summarized in Sec. 3.
2. Optical model calculations and discussion
As previously outlined [4], the optical model potential of the Woods-Saxon type,
$\begin{split} V(r,E) =& V_{R}(E)f(r,R_{R},a_{R})+{\rm i}W_{V}(E)f(r,R_{V},a_{V}) \\& +{\rm i}(-4W_{S}(E)a_{S})\frac{\rm d}{{\rm d}r}f(r,R_{S},a_{S}), \end{split}$
and Coulomb potential of a uniform charged sphere with radius $ R_{C} $ were used in OM calculations. $ V_{R}(E) $, $ W_{S}(E) $, and $ W_{V}(E) $ are the energy-dependent potential depths, and they are respectively expressed as
$ V_{R}(E) = V_{0}+V_{1}E+V_{2}E^{2}, $
$ W_{S}(E) = {\rm max}\{0,W_{0}+W_{1}E\}, $
$ W_{V}(E) = {\rm max}\{0,U_{0}+U_{1}E\}. $
The radial functions are given by
$ f(r,R_{i},a_{i}) = \{1+\exp[(r-R_{i})/a_{i}]\}^{-1}, $
$ R_{i} = r_{i}A^{\frac{1}{3}}, \; \; \; \; \; \; i = R, S, V, C, $
where $ A $ denotes the target mass number. $ r_{R} $, $ r_{S} $, $ r_{V} $, and $ r_{C} $ are the radius parameters of the real, surface imaginary, volume imaginary, and Coulomb potentials, respectively. $ a_{R} $, $ a_{S} $, and $ a_{V} $ are the corresponding diffuseness parameters. The radius parameter of the real potential is expressed as
$ r_{R} = r_{R_{0}}+r_{R_{1}}A^{\frac{1}{3}}. $
We obtained a set of 9Be global OMP parameters on the basis of the experimental data of elastic-scattering angular distributions and total reaction cross-sections in the mass number range from 24 to 209 below 100 MeV [4]. The parameters of the global OMP are listed in Table 1.
parameter | value | unit
$ V_{0} $ | 268.0671 | MeV
$ V_{1} $ | −0.180 |
$ V_{2} $ | −0.0009 |
$ W_{0} $ | 52.149 | MeV
$ W_{1} $ | −0.125 |
$ U_{0} $ | 2.965 | MeV
$ U_{1} $ | 0.286 |
$ r_{R_{0}} $ | 1.200 | fm
$ r_{R_{1}} $ | 0.0273 | fm
$ r_{S} $ | 1.200 | fm
$ r_{V} $ | 1.640 | fm
$ r_{C} $ | 1.556 | fm
$ a_{R} $ | 0.726 | fm
$ a_{S} $ | 0.843 | fm
$ a_{V} $ | 0.600 | fm
Table 1. Global phenomenological OMP parameters for 9Be.
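The depths and form factor defined above can be evaluated directly from the Table 1 values. The following is a minimal numeric sketch (Python is an arbitrary choice here, and the target mass A = 58 is an illustrative assumption, not taken from the paper):

```python
import math

# Table 1 values for the 9Be global OMP (copied from the paper)
V0, V1, V2 = 268.0671, -0.180, -0.0009   # real-depth coefficients (V0 in MeV)
W0, W1 = 52.149, -0.125                  # surface imaginary coefficients (W0 in MeV)
U0, U1 = 2.965, 0.286                    # volume imaginary coefficients (U0 in MeV)
rR0, rR1, aR = 1.200, 0.0273, 0.726      # real radius/diffuseness parameters (fm)

def V_R(E):                              # energy-dependent real depth
    return V0 + V1 * E + V2 * E**2

def W_S(E):                              # surface imaginary depth, cut off at zero
    return max(0.0, W0 + W1 * E)

def W_V(E):                              # volume imaginary depth, cut off at zero
    return max(0.0, U0 + U1 * E)

def f(r, R, a):                          # Woods-Saxon form factor
    return 1.0 / (1.0 + math.exp((r - R) / a))

A = 58                                   # illustrative target (58Ni); fit range is 24 <= A <= 209
r_R = rR0 + rR1 * A ** (1.0 / 3.0)       # mass-dependent real radius parameter
R_R = r_R * A ** (1.0 / 3.0)             # real potential radius R_R = r_R * A^(1/3)

print(V_R(30.0), W_S(30.0), f(R_R, R_R, aR))
```

Note that at r = R_R the Woods-Saxon form factor equals exactly 1/2, which is why R_R is the half-depth radius.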
In what follows, we apply the obtained 9Be global OMP to predict the elastic scattering observables for 8,10,11B projectiles and compare the predictions with the available experimental data.
2.1. Elastic scattering of 8B
The radioactive nucleus 8B is a light nucleus far from the β-stability valley and is widely discussed as a candidate proton drip line nucleus with a proton halo [5]. Its proton separation energy is only 137 keV. Thus far, various reports in the literature have studied its properties and their influence on different reaction mechanisms, because of the relevance of 8B to astrophysics, nuclear structure, and reaction theories [6]. However, elastic scattering data with this projectile remain scarce because of the extreme difficulty of obtaining reasonably intense beams [7]. To date, the elastic-scattering angular distributions and total reaction cross-sections of 8B have been measured for 12C, 27Al, 28Si, 58Ni, and 208Pb targets [6, 8-16]. These observables are predicted using the obtained 9Be global OMP and compared with those predicted using the 7Li global OMP [3]. Further, since 8Li is the mirror nucleus of 8B, they are also predicted using our 8Li global OMP [17]. These predictions are compared with the existing experimental data.
Figure 1 presents the comparison of the elastic-scattering angular distributions with the experimental data [6] for the 8B + 27Al system at incident energies of 15.3 MeV and 21.7 MeV. The figure shows that all of these global OMPs reasonably reproduce the corresponding experimental data, and the predictions are relatively close to each other. The elastic-scattering angular distributions for the 8B + 58Ni system are predicted and compared with the experimental data [8] from 20.7 to 29.3 MeV, as shown in Fig. 2. Although there is some divergence among the results predicted using the different global OMPs, all of them reasonably reproduce the experimental data within the error range.
Figure 1. Comparisons of 8B elastic-scattering angular distributions calculated using 9Be, 7Li and 8Li global OMPs with corresponding experimental data for 27Al.
Figure 2. Same as Fig. 1, but for 58Ni.
For the 8B + 208Pb system, elastic-scattering angular distributions are also measured at incident energies of 50.0, 170.3, and 178.0 MeV [9-11]. They are further predicted using different global OMPs. The results are in good agreement with these existing experimental data [10, 11] at 170.3 MeV and 178.0 MeV. As for the incident energy of 50.0 MeV, the global OMP of 8Li can provide a more satisfactory description of the experimental data [9] compared with the global OMPs of 9Be and 7Li at backward angles. These results are shown in Fig. 3.
Figure 3. Same as Fig. 1, but for 208Pb.
Further, the elastic-scattering angular distributions are also predicted using the different global OMPs for the lighter target 12C. Compared with the experimental data [12, 13], these predictions seem inaccurate, with some divergence at the extreme values. The result is shown in Fig. 4. Since reactions on lighter targets (A < 24) induced by the weakly bound nuclei 9Be, 7Li, and 8Li were not included in the adjustment of the global OMP parameters, such reactions should be studied using local OMPs.
Figure 4. Same as Fig. 1, but for 12C.
To date, the total reaction cross-sections have only been measured for the 8B + 28Si system [14-16], and most of the measurements are above 200 MeV. The comparison between the predictions and the data from different experiments below 250 MeV is shown in Fig. 5, which exhibits some divergences between them.
Figure 5. Comparisons of 8B total reaction cross-sections calculated using 9Be, 7Li and 8Li global OMPs with corresponding experimental data for 28Si.
From the above comparisons, the theoretical results predicted using the 9Be and 7Li global OMPs provide a reasonable description of the reactions induced by 8B within the allowed error range, although the predictions of the 8Li global OMP appear more consistent with the few existing experimental data.
2.2. Elastic scattering of 10B
In the case of the 10B projectile, the elastic-scattering angular distributions and total reaction cross-sections are predicted using the global OMPs of 9Be and 7Li.
Figures 6 and 7 present the comparisons of elastic-scattering angular distributions between theoretical predictions and experimental data [18] for 27Al and 28,30Si targets at the bombarding energies from 33.7 MeV to 50 MeV. Figure 6 shows slight oscillations in the angular distributions appearing in the angular range from 50° to 80°, while agreement between the predictions of 9Be global OMP and experimental data is rather good.
Figure 6. Comparisons of 10B elastic-scattering angular distributions calculated using 9Be and 7Li global OMPs with corresponding experimental data for 27Al.
Figure 7. Same as Fig. 6, but for 28,30Si.
For the target 58Ni, the angular distributions are predicted using the 9Be and 7Li global OMPs at incident energies from 19.0 MeV to 35.0 MeV. In comparison with the experimental data [19], the predictions provide a good description of these data, as shown in Fig. 8. The elastic-scattering angular distributions for 10B on 120Sn were measured at bombarding energies of 31.5, 33.5, 35.0, and 37.5 MeV [20]. The 9Be and 7Li global OMPs were used to describe the experimental data; the 9Be global OMP provides the better description. The result is displayed in Fig. 9.
Figure 9. Same as Fig. 6, but for 120Sn.
Figure 10 presents the theoretical results of angular distributions along with experimental measurements [21-23] for different targets. The theoretical results predicted using global OMPs of 9Be and 7Li give a satisfactory description for 40Ca and 208Pb. There are some discrepancies between them for lighter targets 16O and 20Ne, while the results of 9Be global OMP are more consistent with the corresponding measurements.
Figure 10. Same as Fig. 6, but for 16O, 20Ne, 40Ca, and 208Pb.
For the other lighter targets, the elastic-scattering angular distributions were also measured in different experiments. These reactions are further predicted using the different global OMPs. Figure 11 presents the comparisons for 9Be. The discrepancy observed in Fig. 11 between theory and experiment [24, 25] indicates that the addition of coupled-channels effects is needed in the backward-angle region for some lighter targets.
Figure 11. Same as Fig. 6, but for 9Be.
For the total reaction cross-sections of 10B, experimental data exist only for natural Si. We compare the predicted total reaction cross-sections with the experimental data [16, 26, 27] for 28Si; the result is shown in Fig. 12. The results of the 9Be global OMP agree with all of the experimental data within the error range. For 208Pb, the total reaction cross-section data [23] were derived in terms of the optical model by analyzing the elastic scattering data at different incident energies. The predictions of the 9Be and 7Li global OMPs are also compared with these data, as shown in Fig. 13. All of them are in good agreement.
Figure 12. Comparison of 10B total reaction cross-sections calculated using 9Be and 7Li global OMPs with corresponding experimental data for 28Si.
Figure 13. Same as Fig. 12, but for 208Pb.
2.3. Elastic scattering of 11B
For the 11B projectile, we previously obtained a global OMP [3] by adjusting the radius parameter of the real part potential on the basis of the 7Li global OMP. Although the predictions gave a reasonable description of the 11B elastic scattering for most targets, that OMP could not satisfactorily reproduce the experimental data in the backward-angle region for a few targets. In this section, the elastic scattering observables are predicted using the 9Be and 11B global OMPs. The predictions are further compared with the existing experimental data.
Figure 14 presents the comparisons of the elastic-scattering angular distributions between the theoretical predictions and the experimental data [18] for 28,30Si targets at bombarding energies from 33.7 MeV to 50 MeV. The fits are generally reasonable, with no apparent systematic or major discrepancies from the data. However, the results predicted using the 9Be global OMP are more consistent with the experimental data.
Figure 14. Comparisons of 11B elastic-scattering angular distributions calculated using 9Be and 11B global OMPs with corresponding experimental data for 28,30Si.
The elastic-scattering angular distributions for 58Ni were measured at incident energies from 19.0 MeV to 35.0 MeV [28]. The comparison between the predictions and the experimental data is shown in Fig. 15. The figure shows that the predictions using the 9Be and 11B global OMPs are almost consistent and in good agreement with the data, except at 35.0 MeV, where the prediction of the 11B global OMP is more consistent with the experimental data. Moreover, the elastic angular distributions for 58Ni were measured at the same incident angle with different incident energies [29]. Comparisons of the elastic-scattering angular distributions predicted from the 9Be and 11B global OMPs show that they are nearly identical and reproduce the data, as shown in Fig. 16.
Figure 15. Same as Fig. 14, but for 58Ni.
Figure 16. Calculated elastic-scattering angular distributions in Rutherford ratio at same incident angles compared with experimental data for 58Ni target.
For 40Ca and 208Pb, the elastic angular distributions were measured at incident energies of 51.5 MeV and 69.0 MeV [21, 30]. Figure 17 presents the comparisons between the predictions and the experimental data. The predictions of the 9Be and 11B global OMPs are in good agreement with the experimental data. For 209Bi, the elastic-scattering angular distributions calculated using the 11B global OMP were larger than the experimental data [30, 31] at backward angles. Hence, the radius of the real part of the 11B global phenomenological OMP was increased by 0.15 to improve the fit to the data [3]. Figure 18 presents the theoretical results predicted using the 9Be and 11B global OMPs together with the experimental data. From the figure, one can see that the calculations with the 9Be global OMP are in excellent agreement with the experimental data at all energies. Compared with the 11B global OMP, the 9Be global OMP gives a better prediction for the 11B + 209Bi reaction. One reason may be that the 11B global OMP was obtained by only adjusting the radius parameter of the real part potential on the basis of the 7Li global OMP [3], since the existing elastic scattering data are scarce for reactions induced by the 11B projectile. Moreover, there may be an influence of target shell effects for the doubly closed shell 208Pb nucleus and the one-proton-outside-closed-shell 209Bi nucleus. Meanwhile, the radius parameter of the real potential of the 9Be global OMP is defined by $ r_{R_{0}}+r_{R_{1}}A^{\frac{1}{3}} $, as compared to $ r_{R_{0}} $ for the 11B global OMP, which may compensate for the influence of target shell effects.
Figure 17. Same as Fig. 14, but for 40Ca and 208Pb.
Figure 18. Same as Fig. 14, but for 209Bi.
Similarly, the elastic-scattering angular distributions for some lighter targets are predicted using the 9Be and 11B global OMPs. Figure 19 presents the comparisons for 12,13C. The discrepancy observed in Fig. 19 between theory and experiment [32-34] shows that the addition of coupled-channels effects is possibly required in the backward-angle region for some lighter targets.
Figure 19. Same as Fig. 14, but for 12,13C.
There are no direct measurements of the total reaction cross-sections for reactions induced by 11B. The existing total reaction cross-section data for 209Bi were extracted from the experimental elastic scattering data [31]. The predicted total reaction cross-sections are compared with these data for 209Bi in Fig. 20. The predictions are in satisfactory agreement with the data extracted from the measured elastic-scattering angular distributions.
Figure 20. Comparisons of 11B total reaction cross-sections calculated using 9Be and 11B global OMPs with corresponding data for 209Bi.
3. Summary
We predicted the elastic scattering observables involving 8,10,11B projectiles by applying the obtained 9Be global OMP, compared these predictions with those of other global OMPs, and analyzed them in detail. The theoretical results predicted using the 9Be global OMP give a satisfactory description of the elastic scattering of the 8,10,11B projectiles for targets from 27Al to 209Bi. For the lighter targets, we made tentative predictions; none of the results predicted using the different global OMPs agree well with the experimental data in the backward-angle region. The reason is that other reaction mechanisms, such as transfer or breakup, have to be considered in the calculations for such light systems. The present work shows that the 9Be global OMP is useful for systematically investigating reactions involving 8,10,11B projectiles.
\begin{definition}[Definition:Linear Differential Operator]
A '''linear differential operator''' is a differential operator $\mathscr L$ with the property that:
:$\map {\mathscr L} {\alpha \phi_1 + \beta \phi_2} = \alpha \map {\mathscr L} {\phi_1} + \beta \map {\mathscr L} {\phi_2}$
Thus if $\phi_1$ and $\phi_2$ are solutions to the differential equation $\map {\mathscr L} {\phi_i} = 0$, then so is any linear combination of $\phi_1$ and $\phi_2$.
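As a concrete illustration (an example added here, not part of the source definition), take $\map {\mathscr L} \phi = \phi'' + \phi$. Both $\phi_1 = \sin$ and $\phi_2 = \cos$ satisfy $\map {\mathscr L} {\phi_i} = 0$, and by the property above:
:$\map {\mathscr L} {\alpha \sin + \beta \cos} = \alpha \map {\mathscr L} {\sin} + \beta \map {\mathscr L} {\cos} = 0$
so $\alpha \sin + \beta \cos$ is again a solution for all $\alpha, \beta$.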
\end{definition} | ProofWiki |
Why are trig functions defined for the unit circle?
Why did we ever need to define the trig functions of angles greater than 90 degrees or less than 0 degrees? What is the use of applying trig functions to such angles?
If we apply the trig functions on a regular right triangle, it makes sense. We can get the ratio of two sides and find out an unknown side if there is a known side (and the other way around).
Let's say that I have a right triangle in which an angle x is 30 degrees and the hypotenuse is 20 cm. I have to find the length of side AY, which is opposite to angle x . Well I can use the function sin(30 degrees), which comes out to be 1/2 . Now 1/2 = AY / 20 . And after solving it we get AY = 10.
Or let's say that I have a right triangle in which I have to find an angle x. The side opposite to x is 10cm and the hypotenuse is 20 cm. Then 10/20 = 1/2. What is the arcsin of 1/2? 30 degrees. Angle x is 30 degrees.
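Both computations above can be checked numerically. A quick sketch (Python's `math` module works in radians, hence the degree conversions):

```python
import math

# Side AY opposite the 30-degree angle, hypotenuse 20 cm:
AY = 20 * math.sin(math.radians(30))     # 20 * (1/2) = 10, up to floating-point rounding

# Recovering the angle from the ratio opposite/hypotenuse = 10/20:
x = math.degrees(math.asin(10 / 20))     # arcsin(1/2) = 30 degrees

print(AY, x)
```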
But what use is it to take the sine of an angle 120 degrees of an obtuse triangle? We are not getting a ratio of the sides or anything if we apply it to a non-right triangle.
trigonometry triangles
discussedtree
$\begingroup$ Note that for example the sine and cosine theorems make sense also for obtuse triangles. $\endgroup$ – Hagen von Eitzen May 7 '15 at 17:03
$\begingroup$ It turns out that if we extend the domain of the trig functions to the entire real line, there are many, many applications for which they are useful. Most of the those applications have nothing to do with triangles or geometry explicitly. (And even if this were not the case, we mathematicians in general like to generalize things and see what happens. In this case, the generalization is extremely useful.) $\endgroup$ – Simon S May 7 '15 at 17:04
$\begingroup$ The theory of trig functions extends far beyond just triangles, you will come to find (should you continue with maths) that they have a lot of uses with mathematics $\endgroup$ – Rammus May 7 '15 at 17:04
$\begingroup$ Find area of a triangle whose one angle is $\angle A=2\pi /3$ and two sides are $b=4$ $c=5$ , Angles are named according opposite to side. $\endgroup$ – Mann May 7 '15 at 17:05
$\begingroup$ @HagenvonEitzen Note that for example the sine and cosine theorems make sense also for obtuse triangles Sine is the ratio of the opposite side and the hypotenuse. Since when did obtuse triangles get a hypotenuse? The thing is, given an angle, there can be no definite ratio of two sides of a non-right triangle, this is because the other two angles can be anything, different possible angles means different possible lengths of sides. $\endgroup$ – discussedtree May 7 '15 at 17:26
If you think of the graph of $\sin(x)$ it's a nice periodic function, the graph is a wave. It's very useful in physics (for example) to have functions that model wave behavior. For this you need to allow the angle to go for multiple cycles. Without such functions things like Fourier analysis would be impossible.
Gregory Grant
Your question actually has a lot of answers.
Because we can! Having unrestricted $\sin$ and $\cos$ functions does no harm to people who only need the restricted versions, so why not?
1. As noted by other answers and comments, they do work fine for obtuse, or reversed, triangles, even if it is possible to manage without them. For a thought experiment, you should also note that by your definition, they are also "useless" outside the interval $[0, \pi/4]$ (I am using radians, so $\pi/4 = 45°$), since you can always use the formula $\sin (\pi/2-x) = \cos(x)$ to reduce to this case.
2. The trigonometric functions are not useful only for triangles. For example, imagine a point moving regularly in a circle. The position of the point at time $t$ is given by $(\cos (t), \sin(t))$. Using restricted-trigonometric functions, it would be something with lots of separate cases like $$\def\p{\frac{\pi}{2}}\begin{cases} (\cos t, \sin t)& \text{for $t \in [0, \p[$,}\\ (-\sin (t-\p), \cos(t-\p)) & \text{for $t \in [\p, \pi[$,}\\ (-\cos (t-\pi), -\sin (t-\pi)) & \text{for $t \in [\pi, 3\p[$,}\\ (\sin(t-3\p), -\cos(t-3\p)) & \text{for $t \in [3\p, 2\pi[$,}\\ \dots \end{cases}$$ which is ugly.
3. The trigonometric functions are not useful only for geometry. For example, we have the celebrated Euler formula, which is very very deep: $$ \exp (i\, t) = \cos (t) + i \sin (t). $$ Restricting this one to $[0, \pi/2[$ would be extremely weird.
4. Another analysis example: the trigonometric functions are solutions of differential equations such as $d^2\cos(t)/dt^2 + \cos(t) = 0$ (with initial condition $\cos(0) = 1, \cos'(0) = 0$). As such, they have a natural domain of definition, which is the domain in which the above solution exists, which is the full set $\mathbb R$.
5. Related to 4. (and 3.): these functions are also defined by way of power series, such as $$ \cos t = \sum_{n \geq 0} (-1)^n \frac{t^{2n}}{(2n)!}.$$ These sums are convergent for every $t \in \mathbb R$.
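The Euler formula and the cosine power series above can be checked numerically for an angle far outside $[0, \pi/2]$ (a quick Python sketch; the angle $t = 2.3$ is an arbitrary choice):

```python
import cmath
import math

t = 2.3  # arbitrary angle in radians, well outside [0, pi/2]

# Euler's formula: exp(i t) = cos t + i sin t
euler_lhs = cmath.exp(1j * t)
euler_rhs = complex(math.cos(t), math.sin(t))

# Partial sum of the power series cos t = sum_n (-1)^n t^(2n) / (2n)!
def cos_series(t, terms=25):
    return sum((-1) ** n * t ** (2 * n) / math.factorial(2 * n)
               for n in range(terms))

print(abs(euler_lhs - euler_rhs), cos_series(t) - math.cos(t))
```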
Circonflexe
What is the use of applying trig functions to such angles?
Who said that we should apply them only to angles ? For instance, the concept of number started historically with natural numbers $($positive integers$)$, meant for counting. Soon enough, it spread to rationals, which, for the most part, do not exactly "count" anything. Nevertheless, they do measure. So the $($initial$)$ concept was further enriched, by gaining a whole new meaning. The same applies here to trigonometry: although initially intended for purely geometric purposes, its use later extended to completely unrelated areas $($e.g., to signal processing, among countless others$)$. Hope this helps.
Galois group: 27T45
Label 27T45
Degree $27$
Order $162$
Cyclic no
Abelian no
Solvable yes
Primitive no
$p$-group no
Group: $(He_3.C_3):C_2$
Group action invariants
Degree $n$: $27$
Transitive number $t$: $45$
Parity: $-1$
Primitive: no
Nilpotency class: $-1$ (not nilpotent)
$|\Aut(F/K)|$: $9$
Generators: (1,5,7,2,6,8,3,4,9)(10,21,16,27,13,23,12,20,18,26,15,22,11,19,17,25,14,24), (1,18,21,2,16,19,3,17,20)(4,10,22,5,11,23,6,12,24)(7,14,26,8,15,27,9,13,25)
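As a consistency check of the data above (our own addition, not part of the LMFDB page), the two generators can be closed under composition in plain Python; the resulting set should have exactly the order and transitivity stated on this page. Cycles are converted from 1-based to 0-based indexing.

```python
def perm(cycles, n=27):
    """Permutation on {0, ..., n-1} as a tuple, from 1-based disjoint cycles."""
    p = list(range(n))
    for cyc in cycles:
        for a, b in zip(cyc, cyc[1:] + cyc[:1]):
            p[a - 1] = b - 1
    return tuple(p)

g1 = perm([[1, 5, 7, 2, 6, 8, 3, 4, 9],
           [10, 21, 16, 27, 13, 23, 12, 20, 18,
            26, 15, 22, 11, 19, 17, 25, 14, 24]])
g2 = perm([[1, 18, 21, 2, 16, 19, 3, 17, 20],
           [4, 10, 22, 5, 11, 23, 6, 12, 24],
           [7, 14, 26, 8, 15, 27, 9, 13, 25]])

def closure(gens):
    """The subgroup generated by gens: BFS until no new products appear."""
    ident = tuple(range(len(gens[0])))
    seen, frontier = {ident}, [ident]
    while frontier:
        nxt = []
        for p in frontier:
            for g in gens:
                q = tuple(p[x] for x in g)      # composition: apply g, then p
                if q not in seen:
                    seen.add(q)
                    nxt.append(q)
        frontier = nxt
    return seen

G = closure([g1, g2])
assert len(G) == 162                            # Order 162 = 2 * 3^4

orbit = {0}                                     # orbit of the first point
while True:
    new = {g[x] for x in orbit for g in (g1, g2)} - orbit
    if not new:
        break
    orbit |= new
assert len(orbit) == 27                         # transitive degree-27 action
```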
Low degree resolvents
|G/N|
Galois groups for stem field(s)
$2$: $C_2$
$6$: $S_3$, $C_6$
$18$: $S_3\times C_3$
$54$: $C_3^2 : C_6$
Resolvents shown for degrees $\leq 47$
Degree 3: $C_3$, $S_3$
Degree 9: $S_3\times C_3$
Low degree siblings
Siblings are shown with degree $\leq 47$
A number field with this Galois group has no arithmetically equivalent fields.
Conjugacy classes
Cycle Type Size Order Representative
$ 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1 $ $1$ $1$ $()$
$ 3, 3, 3, 3, 3, 3, 1, 1, 1, 1, 1, 1, 1, 1, 1 $ $6$ $3$ $(10,11,12)(13,14,15)(16,17,18)(19,21,20)(22,24,23)(25,27,26)$
$ 2, 2, 2, 2, 2, 2, 2, 2, 2, 1, 1, 1, 1, 1, 1, 1, 1, 1 $ $9$ $2$ $(10,25)(11,26)(12,27)(13,21)(14,19)(15,20)(16,24)(17,22)(18,23)$
$ 3, 3, 3, 3, 3, 3, 3, 3, 3 $ $1$ $3$ $( 1, 2, 3)( 4, 5, 6)( 7, 8, 9)(10,11,12)(13,14,15)(16,17,18)(19,20,21) (22,23,24)(25,26,27)$
$ 6, 6, 6, 3, 3, 3 $ $9$ $6$ $( 1, 2, 3)( 4, 5, 6)( 7, 8, 9)(10,25,12,27,11,26)(13,21,15,20,14,19) (16,24,18,23,17,22)$
$ 9, 9, 9 $ $3$ $9$ $( 1, 4, 8, 2, 5, 9, 3, 6, 7)(10,13,18,11,14,16,12,15,17)(19,23,25,20,24,26,21, 22,27)$
$ 18, 9 $ $9$ $18$ $( 1, 4, 8, 2, 5, 9, 3, 6, 7)(10,19,17,26,15,23,12,21,16,25,14,22,11,20,18,27, 13,24)$
$ 3, 3, 3, 3, 3, 3, 3, 3, 3 $ $18$ $3$ $( 1,10,25)( 2,11,26)( 3,12,27)( 4,15,20)( 5,13,21)( 6,14,19)( 7,18,23) ( 8,16,24)( 9,17,22)$
$ 9, 9, 9 $ $18$ $9$ $( 1,13,24, 3,15,23, 2,14,22)( 4,17,26, 6,16,25, 5,18,27)( 7,11,20, 9,10,19, 8, 12,21)$
Group invariants
Order: $162=2 \cdot 3^{4}$
Cyclic: no
Abelian: no
Solvable: yes
GAP id: [162, 12]
Character table: not available.
Data computed by GAP, Magma, J. Jones, and A. Bartel.
\begin{document}
\title{Linear Time Runs over General Ordered Alphabets\footnote{This work has been submitted to ICALP 2021.}}
\begin{abstract} A run in a string is a maximal periodic substring. For example, the string \texttt{bananatree} contains the runs $\texttt{anana} = (\texttt{an})^{3/2}$ and $\texttt{ee} = \texttt{e}^2$. There are fewer than $n$ runs in any length-$n$ string, and computing all runs for a string over a linearly-sortable alphabet takes $\orderof{n}$ time (Bannai et al., SODA 2015). Kosolobov conjectured that there also exists a linear time runs algorithm for general ordered alphabets (Inf. Process. Lett. 2016). The conjecture was almost proven by Crochemore et al., who presented an $\orderof{n\alpha(n)}$ time algorithm (where $\alpha(n)$ is the extremely slowly growing inverse Ackermann function). We show how to achieve $\orderof{n}$ time by exploiting combinatorial properties of the Lyndon array, thus proving Kosolobov's conjecture. \end{abstract}
\section{Introduction and Related Work}
A run in a string $S$ is a maximal periodic substring. For example, the string $S = \examplestr{bananatree}$ contains exactly the runs $\examplestr{anana} = (\examplestr{an})^{3/2}$ and $\examplestr{ee} = \examplestr{e}^2$. Identifying such repetitive structures in strings is of great importance for applications like text compression, text indexing and computational biology (for a general overview see \cite{Crochemore2009}). To name just one example, runs in human genes (called maximal tandem repeats) are involved with a number of neurological disorders \cite{Budworth2013}. In 1999, Kolpakov and Kucherov showed that the maximum number $\rho(n)$ of runs in a length-$n$ string is bounded by $\orderof{n}$, and provided a word RAM algorithm that outputs all runs in linear time \cite{Kolpakov1999}. The algorithm is based on the Lempel-Ziv factorization and only achieves $\orderof{n}$ time for \emph{linearly-sortable alphabets}, i.e.\ alphabets that are totally ordered and for which a sequence of $\sigma$ alphabet symbols can be sorted in $\orderof{\sigma}$ time. Since then, it has been an open question whether there exists a linear time runs algorithm for \emph{general ordered alphabets}, i.e.\ totally ordered alphabets for which the order of any two symbols can be determined in constant time. Any such algorithm must not use the Lempel-Ziv factorization, since for general ordered alphabets of size $\sigma$ it cannot be constructed in $o(n \lg \sigma)$ time \cite{Kosolobov2015}.
Kolpakov and Kucherov also conjectured that the maximum number of runs is bounded by $\rho(n) < n$, which started a 15~year-long search for tighter upper bounds of $\rho(n)$. Rytter was the first to give an explicit constant with $\rho(n) < 5n$ \cite{Rytter2006}. After multiple incremental improvements of this bound (e.g.\ \cite{Puglisi2008,Crochemore2008,Crochemore2011}), Bannai et al.\ \cite{Bannai2017} finally proved the conjecture by showing $\rho(n)<n$ for arbitrary alphabets, which was subsequently even surpassed for binary texts \cite{Fischer2015}. (The current best bound for binary alphabets is $\rho(n)<\frac{183}{193}n$ \cite{Holub2017}.)
On the algorithmic side, Bannai et al.\ also provided a new linear time algorithm that computes all the runs \cite{Bannai2017}. While (just like the algorithm by Kolpakov and Kucherov) it only achieves the time bound for linearly-sortable alphabets, it no longer relies on the Lempel-Ziv factorization. Instead, the main effort of the algorithm lies in the computation of $\Theta(n)$ \emph{longest common extensions (LCEs)}; given two indices $i,j \in [1, n]$, their LCE is the length of the longest common prefix of the suffixes $S[i..n]$ and $S[j..n]$. For linearly-sortable alphabets, we can precompute a data structure in $\orderof{n}$ time that answers arbitrary LCE queries in constant time (see e.g.\ \cite{Fischer2006}), thus yielding a linear time runs algorithm. Kosolobov showed that for general ordered alphabets any batch of $\orderof{n}$ LCEs can be computed in $\orderof{n \lg^{2/3} n}$ time, and conjectured the existence of a linear time runs algorithm for general ordered alphabets \cite{Kosolobov2016}. Gawrychowski et al.\ improved this result to $\orderof{n \lg\lg n}$ time \cite{Gawrychowski2016}. Finally, Crochemore et al.\ noted that the required LCEs satisfy a special non-crossing property. They showed how to compute $\orderof{n}$ non-crossing LCEs in $\orderof{n\alpha(n)}$ time, resulting in an $\orderof{n\alpha(n)}$ time algorithm that computes all runs over general ordered alphabets \cite{Crochemore2016} (where $\alpha$ is the inverse Ackermann function).
\subparagraph*{Our Contributions.} We show how to compute the required LCEs in $\orderof{n}$ time and space, resulting in the first linear time runs algorithm for general ordered alphabets, and thus proving Kosolobov's conjecture. Our solution differs from all previous approaches in the sense that it cannot answer a sequence of \emph{arbitrary} non-crossing LCE queries. Instead, our algorithm is specifically designed \emph{exactly for the LCEs required by the runs algorithm}. This allows us to utilize powerful combinatorial properties of the \emph{Lyndon array} (a definition follows in \cref{sec:prelim}) that do not generally hold for arbitrary non-crossing LCE sequences.
Even though the main contribution of our work is the improved asymptotic time bound, it is worth mentioning that our algorithm is also very fast in practice. On modern hardware, computing all runs for a text of length $10^7 (=10\text{MB})$ takes only one second.
\subparagraph*{A Note on the Model.} As mentioned earlier, our algorithm runs in linear time for \emph{general ordered alphabets}, whereas previous algorithms achieve this time bound only when the alphabet is \textit{linearly-sortable}. This is comparable with the distinction between \textit{comparison-based} and \textit{integer} sorting: while in the comparison-model sorting $n$ items requires $\Omega(n\lg n)$ time, integer sorting is faster ($O(n\sqrt{\lg\lg n})$ time \cite{Han2002} and sometimes even linear, e.g.\ when the word width $w$ satisfies $w = \orderof{\lg n}$ and one can use radix sort, or when $w \geq (\lg^{2 + \epsilon} n)$ \cite{Andersson1998}). Whereas it is a major open problem whether integer sorting can always be done in linear time, this paper settles a symmetric open problem for the computation of runs.
The remainder of the paper is structured as follows: First, we introduce the basic notation, definitions, and auxiliary lemmas (\cref{sec:prelim}). Then, we give a simplified description of the runs algorithm by Bannai et al.\ and show how the required LCEs relate to the Lyndon array (\cref{sec:bannairevisit}). Our linear time algorithm to compute the LCEs is described in \cref{sec:lcealgo}. We discuss additional practical aspects and experimental results in \cref{sec:practical}.
\section{Preliminaries} \label{sec:prelim}
Our analysis is performed in the word RAM model (see e.g.\ \cite{Hagerup1998}), where we can perform fundamental operations (logical shifts, basic arithmetic operations etc.) on words of size $w$ bits in constant time. For an input of size $n$ we assume $\ceil{\log_2 n} \leq w$. We write $[i,j] = [i, j + 1) = (i - 1, j] = (i-1, j+1)$ with $i,j \in \mathbb{N}$ to denote the set of integers ${\{x \mid x \in \mathbb{N} \land i \leq x \leq j \}}$.
\subparagraph*{Strings.} Let $\Sigma$ be a finite and totally ordered set. A \emph{string} $S$ of length $\absolute{S} = n$ over the \emph{alphabet} $\Sigma$ is a sequence $S[1]\dots S[n]$ of $n$ \emph{symbols} (also called \emph{characters}) from $\Sigma$. The alphabet is called a \emph{general ordered alphabet} if order testing (i.e.\ evaluating $\sigma_1 < \sigma_2$ for $\sigma_1, \sigma_2 \in \Sigma$) is possible in constant time. For $i, j \in [1, n]$, we use the interval notation ${S[i..j]} = {S[i..j+1)} = {S(i - 1..j]} = {S(i - 1..j+1)}$ to denote the \emph{substring} $S[i]\dots S[j]$. If however $i > j$, then $S[i..j]$ denotes the \emph{empty string} $\epsilon$. The substring $S[i..j]$ is called \emph{proper} if $S[i..j] \neq S$. A \emph{prefix} of $S$ is a substring $S[1..j]$ (including $S[1..0] = \epsilon$), while the \emph{suffix} $S_i$ is the substring $S[i..n]$ (including $S_{n + 1} = \epsilon$). Given two strings $S$ and $T$ of length $n$ and $m$ respectively, their concatenation is defined as $ST = S[1]\dots S[n]T[1]\dots T[m]$. For any positive integer $k$, the $k$-times concatenation of $S$ is denoted by $S^k$. Let $\ell_{\max} = \min(n, m)$. The \emph{longest common prefix (LCP)} of $S$ and $T$ has length $\lcp{S, T} = {\max\{ \ell \mid \ell \in [0, \ell_{\max}] \land S[1..\ell] = T[1..\ell]\}}$, while the \emph{longest common suffix (LCS)} has length $\lcs{S, T} = \max\{ \ell \mid {\ell \in [0, \ell_{\max}]} \land {S_{n - \ell + 1} = T_{m - \ell + 1}}\}$. For a string $S$ of length $n$ and indices $i, j \in [1, n]$, we define the \emph{longest common right-extension (R-LCE) and left-extension (L-LCE)} as $\rlce{i, j} = \lcp{S_i, S_j}$ and $\llce{i, j} = \lcs{S[1..i], S[1..j]}$ respectively.
The total order on $\Sigma$ induces a \emph{lexicographical order} $\prec$ on the strings over $\Sigma$ in the usual way.
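For illustration, here is a naive sketch of the two extension functions in Python (our own illustration, 1-based indices as in the text; the point of the paper is precisely to avoid this worst-case linear scanning, so this only pins down the definitions):

```python
def rlce(S, i, j):
    """lcp(S_i, S_j): match the suffixes starting at i and j (1-based) left to right."""
    n, l = len(S), 0
    while i + l <= n and j + l <= n and S[i + l - 1] == S[j + l - 1]:
        l += 1
    return l

def llce(S, i, j):
    """lcs(S[1..i], S[1..j]): match the prefixes ending at i and j right to left."""
    l = 0
    while i - l >= 1 and j - l >= 1 and S[i - l - 1] == S[j - l - 1]:
        l += 1
    return l

S = "bananatree"
assert rlce(S, 2, 4) == 3   # "ana" = lcp("ananatree", "anatree")
assert llce(S, 4, 6) == 3   # "ana" = lcs("bana", "banana")
```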
Given three suffixes, we can deduce properties of their R\=/LCEs from their lexicographical order:
\begin{lemma}\label{lemma:lex-deduce-lce}
Let $S_i \prec S_j \prec S_k$ be suffixes of a string. Then it holds $\rlce{i, k} \leq \rlce{i,j}$ and $\rlce{i,k} \leq \rlce{j, k}$.
\begin{proof}
Assume $\ell = \rlce{i,j} < \rlce{i, k}$, then $S_i[1..\ell] = S_j[1..\ell] = S_k[1..\ell]$ and $S_j[\ell + 1] \neq S_i[\ell + 1] = S_k[\ell + 1]$. This implies $S_i \prec S_j \Leftrightarrow S_k \prec S_j$, which contradicts $S_i \prec S_j \prec S_k$. The proof of $\rlce{i,k} \leq \rlce{j, k}$ works analogously.
\end{proof} \end{lemma}
\subparagraph*{Repetitions and Runs.} Let $S$ be a string and let $S[i..j]$ be a non-empty substring. We say that $p \in \mathbb{N}^{+}$ is a \emph{period} of $S[i..j]$ if and only if ${\forall x \in [i, j - p]} : {S[x] = S[x + p]}$. If additionally $(j - i + 1) \geq p$, then $S[i..i+p)$ is called \emph{string period} of $S[i..j]$. Furthermore, $p$ is called \emph{shortest period} of $S[i..j]$ if there is no $q \in [1, p)$ that is also a period of $S[i..j]$. Analogously, a string period of $S[i..j]$ is called \emph{shortest string period} if there is no shorter string period of $S[i..j]$. A \emph{run} is a triple $\triple{i, j, p}$ such that $p$ is the shortest period of $S[i..j]$, $(j - i + 1) \geq 2p$ (i.e.\ there are at least two consecutive occurrences of the shortest string period $S[i..i + p)$), and neither $\triple{i - 1, j, p}$ nor $\triple{i, j + 1, p}$ satisfies these properties (assuming $i > 1$ and $j < n$, respectively).
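A brute-force enumerator written directly from this definition may help to pin it down (our own illustrative sketch, not part of the paper's algorithm; the maximality test uses the fact that a period that extends by one character remains the shortest period of the extension):

```python
def shortest_period(w):
    """Smallest p such that w[x] == w[x + p] for all valid x."""
    return next(p for p in range(1, len(w) + 1)
                if all(w[x] == w[x + p] for x in range(len(w) - p)))

def runs_bruteforce(S):
    """All runs (i, j, p) of S as 1-based inclusive triples, straight from the definition."""
    n, R = len(S), set()
    for i in range(n):          # 0-based internally
        for j in range(i + 1, n):
            p = shortest_period(S[i:j + 1])
            if (j - i + 1 >= 2 * p
                    and (i == 0 or S[i - 1] != S[i - 1 + p])        # left-maximal
                    and (j == n - 1 or S[j + 1] != S[j + 1 - p])):  # right-maximal
                R.add((i + 1, j + 1, p))
    return R

# The two runs of "bananatree" mentioned in the abstract:
assert runs_bruteforce("bananatree") == {(2, 6, 2), (9, 10, 1)}
```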
\subparagraph*{Lyndon Words and Nearest Smaller Suffixes.} For a length-$n$ string $S$ and $i \in [1, n]$, the string $S_iS[1..i)$ is called \emph{cyclic shift} of $S$, and \emph{non-trivial cyclic shift} if $i > 1$. A \emph{Lyndon word} is a non-empty string that is lexicographically smaller than any of its non-trivial cyclic shifts, i.e.\ $\forall i \in [2,n] : S \prec S_iS[1..i)$. The Lyndon array of $S$ identifies the longest Lyndon word starting at each position of $S$.
\begin{definition}[Lyndon Array]
Given a string $S$ of length $n$, its Lyndon array $\lambda[1,n]$ is defined by $\forall i \in [1, n] :\lyndarr{i} = \max\{ j - i + 1 \mid j \in [i,n] \land S[i..j] \text{ is a Lyndon word} \}$. \end{definition}
An alternative representation of the Lyndon array is the next-smaller-suffix array.
\begin{definition}[Next Smaller Suffixes]\label{def:xss}
Given a string $S$ of length $n$, its \emph{next-smaller-suffix (NSS) array} is defined by ${\forall i \in [1, n]}: {\nss{i} = \min\{j \mid j = n + 1} \lor {(j \in (i, n] \land S_i \succ S_j)\}}$.
If $\nss{i} \leq n$, then $S_{\nss{i}}$ is called \emph{next smaller suffix of $S_i$}. \end{definition}
\begin{lemma}[Lemma 15 \cite{Franek2016}]\label{lemma:lyndonnss}
The longest Lyndon word starting at any position $i \in [1, n]$ of a length-$n$ string $S$ is exactly the substring $S[i..\nss{i})$, i.e.\ $\forall i \in [1, n] : {\lyndarr{i} = \nss{i} - i}$. \end{lemma}
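The lemma is easy to check experimentally. The following sketch (our own illustration, not from the paper) computes $\textsf{nss}$ naively from \cref{def:xss} and compares $\nss{i} - i$ against the longest Lyndon word found by brute force:

```python
def is_lyndon(w):
    """Non-empty and strictly smaller than all of its non-trivial cyclic shifts."""
    return len(w) > 0 and all(w < w[k:] + w[:k] for k in range(1, len(w)))

def nss_naive(S):
    """nss as 1-based values (n + 1 if no later suffix is smaller)."""
    n = len(S)
    return [next((j for j in range(i + 1, n) if S[i:] > S[j:]), n) + 1
            for i in range(n)]

S = "bananatree"
n, nss = len(S), nss_naive(S)
for i in range(1, n + 1):
    lyn = max(j - i + 1 for j in range(i, n + 1) if is_lyndon(S[i - 1:j]))
    assert lyn == nss[i - 1] - i   # lambda[i] = nss(i) - i, as the lemma states
```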
An important property of next smaller suffixes is that they do not \emph{intersect}:
\begin{lemma}\label{lemma:nonintersecting}
Let $i \in [1, n]$ and $i' \in [i, \nss{i})$. Then it holds $\nss{i'} \leq \nss{i}$.
\begin{proof}
Due to $i' \in [i, \nss{i})$ and \cref{def:xss} it holds $S_{i'} \succ S_{\nss{i}}$.
Assume that the lemma does not hold, then we have $\nss{i} \in (i', \nss{i'})$ and \cref{def:xss} implies $S_{i'} \prec S_{\nss{i}}$.
\end{proof} \end{lemma}
\section{The Runs Algorithm Revisited} \label{sec:bannairevisit}
In this section, we recapitulate the main ideas of the algorithm by Bannai et al. \cite{Bannai2017}, which is the basis of our solution for general ordered alphabets. The key insight is that every run is \emph{rooted} in a longest Lyndon word, allowing us to compute all runs from the Lyndon array.
\begin{definition}\label{def:incdec}
Let $\triple{i, j, p}$ be a run in a string $S$.
We say that $\triple{i, j, p}$ is \emph{(lexicographically) decreasing} if and only if $S_i \succ S_{i + p}$.
Otherwise, $\triple{i, j, p}$ is \emph{(lexicographically) increasing}. \end{definition}
\begin{lemma}\label{lemma:lyndonroots}
Let $\triple{i, j, p}$ be a decreasing run, then there is exactly one index ${i_0 \in [i..i+p)}$ such that $\lyndarr{i_0} = p$.
\begin{proof}
Consider any $i_0 \in [i, i + p)$. By the definition of runs, we have $S[i..i_0) = S[i + p..i_0 + p)$. Since the run is decreasing it follows
$
{S_i \succ S_{i + p}}
\iff {S[i..i_0)S_{i_0} \succ S[i + p..i_0+p)S_{i_0 + p}}
\iff {S_{i_0} \succ S_{i_0 + p}}
$.
This implies $\nss{i_0} \leq i_0 + p$, and due to \cref{lemma:lyndonnss} also $\lyndarr{i_0} \leq p$.
Next, we show that there is at least one index $i_0 \in [i..i+p)$ such that $S[i_0..i_0 + p)$ is a Lyndon word.
Let $\alpha = S[i..i+p)$.
Assume that the described index $i_0$ does not exist, then from $S[i..i + 2p) = \alpha\alpha$ follows that no cyclic shift of $\alpha$ is a Lyndon word.
Let $\beta$ be a lexicographically minimal cyclic shift of $\alpha$, then this shift is not unique (otherwise $\beta$ would be a Lyndon word), and thus there must be a cyclic shift $\beta_k\beta[1..k) = \beta[1..k)\beta_k$ with $k > 1$.
This however implies that $\beta$ is of the form $\beta = \mu^k$ for some string $\mu$ and an integer $k > 1$ (see Lemma 3 in \cite{Lyndon1962}), which contradicts the fact that $\alpha$ is the \emph{shortest} string period of the run.
Finally, let $\alpha_{k}\alpha[1..k)$ with $k \in [1,p]$ be the unique lexicographically smallest cyclic shift of $\alpha$ (and thus a Lyndon word), then it is easy to see that only $i_0 = i + k - 1$ satisfies $\lyndarr{i_0} = p$.
\end{proof} \end{lemma}
\begin{definition}[Root of a Run]
Let $\triple{i, j, p}$ be a decreasing run, and let ${i_0 \in [i..i+p)}$ be the unique index with $\lyndarr{i_0} = p$ (as described in \cref{lemma:lyndonroots}). We say that $\triple{i, j, p}$ is \emph{rooted in} $i_0$. \end{definition}
\begin{figure}
\caption{Lexicographically decreasing run $\triple{5,31,7}$ with $S[5..31] = (\examplestr{abcabab})^{27/7}$. The run has shortest string period $\alpha = \examplestr{abcabab}$, and is rooted in position $8$ (with longest Lyndon word $\beta = S[8..15) = \alpha_4\alpha[1..3] = \examplestr{abababc}$).}
\label{fig:runexample}
\end{figure}
An example of a decreasing run and its root is provided in \cref{fig:runexample}. Note that our notion of a root differs from the \textsf{L}-roots introduced by Crochemore et al.\ \cite{Crochemore2014}. While an \textsf{L}-root is \emph{any} length-$p$ Lyndon word contained in the run, our root is exactly the \emph{leftmost} one.
Given a longest Lyndon word $S[i_0..\nss{i_0})$ of length $p = \nss{i_0} - i_0 = \lyndarr{i_0}$, it is easy to determine whether $i_0$ is the root of a decreasing run. We simply try to extend the periodicity as far as possible to both sides by using the LCE functions. For this purpose, we only need to compute $l = \llce{i_0, \nss{i_0}}$ and $r = \rlce{i_0, \nss{i_0}}$. Let $i = i_0 - l + 1$ and $j = \nss{i_0} + r - 1$, then clearly the substring $S[i..j]$ has smallest period $p$, and we cannot extend the substring to either side without breaking the periodicity. Thus, if $j - i + 1 \geq 2p$ then $\triple{i, j, p}$ is a run. Note that this run is only rooted in $i_0$ if additionally $i_0 \in [i..i + p)$ (or equivalently $l \leq p$) holds. For the index $i_0 = 8$ in \cref{fig:runexample}, we have $l = \llce{8, 15} = 4$ and $r = \rlce{8, 15} = 17$. Therefore, the run starts at position $i = 8 - 4 + 1 = 5$ and ends at position $j = 15 + 17 - 1 = 31$. From $l = 4 \leq 7 = p$ follows that $8$ is actually the root.
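The example of \cref{fig:runexample} can be replayed in code. In the sketch below (our own illustration), the flanking \examplestr{z} characters outside $S[5..31]$ are an arbitrary choice, since the figure does not specify them:

```python
def rlce(S, i, j):  # lcp of suffixes S_i and S_j (1-based), i < j
    l = 0
    while j + l <= len(S) and S[i + l - 1] == S[j + l - 1]:
        l += 1
    return l

def llce(S, i, j):  # longest common suffix of prefixes S[1..i] and S[1..j], i < j
    l = 0
    while i - l >= 1 and S[i - l - 1] == S[j - l - 1]:
        l += 1
    return l

# S[5..31] = (abcabab)^{27/7}; the flanking 'z' characters are arbitrary filler.
S = "zzzz" + ("abcabab" * 4)[:27] + "z"
i0, j0 = 8, 15                        # root and nss(root), as in the figure
p = j0 - i0                           # period 7
assert S[i0 - 1:j0 - 1] == "abababc"  # the longest Lyndon word at the root
l, r = llce(S, i0, j0), rlce(S, i0, j0)
assert (l, r) == (4, 17)
i, j = i0 - l + 1, j0 + r - 1
assert (i, j, p) == (5, 31, 7) and j - i + 1 >= 2 * p and l <= p
```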
Since each decreasing run is rooted in exactly one index, we can find all decreasing runs by checking for each index whether it is the root of a run. This procedure is outlined in \cref{alg:highlevel}. First, we compute the NSS array (line \ref{alg:highlevel:computenss}). Then, we investigate one index $i_0 \in [1, n]$ at a time (line \ref{alg:highlevel:loop}), and consider it as the root of a run with period $p = \nss{i_0} - i_0$ (line \ref{alg:highlevel:period}). If the left-extension covers an entire period (i.e.\ $\llce{i_0, \nss{i_0}} > p$), then we have already investigated the root of the run in an earlier iteration of the for-loop, and no further action is required (line \ref{alg:highlevel:checkllce}). Otherwise, we compute the left and right border of the potential run as described earlier (lines \ref{alg:highlevel:llce}--\ref{alg:highlevel:rlce}). If the resulting interval has length at least $2p$, then we have discovered a run that is rooted in $i_0$ (lines \ref{alg:highlevel:checktwoperiods}--\ref{alg:highlevel:addrun}).
\begin{algorithm} \begin{algorithmic}[1]
\Require{String $S$ of length $n$.}
\Ensure{Set $R$ of all decreasing runs in $S$.}
\State $R \gets \emptyset$
\State compute array \textsf{nss}\label{alg:highlevel:computenss}
\For{$i_0 \in [1, n]$\textbf{ with }$\nss{i_0} \neq n + 1$}\label{alg:highlevel:loop}
\State $p \gets \nss{i_0} - i_0$ \label{alg:highlevel:period}
\If{$\llce{i_0, \nss{i_0}} \leq p$} \label{alg:highlevel:checkllce}
\State $\phantom{j}\mathllap{i} \gets \phantom{\nss{i_0}}\mathllap{i_0\phantom{]}} - \llce{i_0, \nss{i_0}} + 1$ \label{alg:highlevel:llce}
\State $j \gets \nss{i_0} + \rlce{i_0, \nss{i_0}} - 1$\label{alg:highlevel:rlce}
\If{$j - i + 1 \geq 2p$}\label{alg:highlevel:checktwoperiods}
\State $R \gets R \cup \{ \triple{i, j, p} \}$\label{alg:highlevel:addrun}
\EndIf
\EndIf
\EndFor \end{algorithmic} \caption{Compute all decreasing runs.} \label{alg:highlevel} \end{algorithm}
\subparagraph*{Time and space complexity.} The NSS array can be computed in $\orderof{n}$ time and space for general ordered alphabets \cite{Bille2020}. Assume for now that we can answer L-LCE and R-LCE queries in constant time, then clearly the rest of the algorithm also requires $\orderof{n}$ time and space. The correctness of the algorithm follows from \cref{lemma:lyndonroots} and the description. We have shown:
\begin{lemma}\label{lemma:requiredlces}
Let $S$ be a string of length $n$ over a general ordered alphabet, and let $\textnormal{\textsf{nss}}$ be its NSS array. We can compute all decreasing runs of $S$ in $\orderof{n} + t(n)$ time and $\orderof{n} + s(n)$ space, where $t(n)$ and $s(n)$ are the time and space needed to compute $\llce{i, \nss{i}}$ and $\rlce{i, \nss{i}}$ for all $i \in [1, n]$ with $\nss{i}\neq n + 1$. \end{lemma}
In order to also find all \emph{increasing} runs, we only need to rerun the algorithm with reversed alphabet order. This way, previously increasing runs become decreasing.
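Putting everything together, here is a direct Python transcription of \cref{alg:highlevel} with naive NSS and LCE computations (our own quadratic-time illustration; the contribution of the paper is precisely to implement these steps in linear time). The second pass with reversed character order contributes the increasing runs:

```python
def decreasing_runs(S, rev=False):
    """Algorithm 1 with naive NSS and LCE computations; returns 1-based triples (i, j, p).
    With rev=True the character order is reversed, so the increasing runs are found."""
    key = (lambda c: -ord(c)) if rev else ord
    t = [key(c) for c in S]
    n, R = len(t), set()
    for i0 in range(n):                              # 0-based internally
        j0 = next((j for j in range(i0 + 1, n) if t[i0:] > t[j:]), n)
        if j0 == n:                                  # nss(i0) = n + 1: skip
            continue
        p = j0 - i0
        l = 0                                        # l = llce(i0, j0)
        while l <= i0 and t[i0 - l] == t[j0 - l]:
            l += 1
        if l <= p:                                   # otherwise the root was handled earlier
            r = 0                                    # r = rlce(i0, j0)
            while j0 + r < n and t[i0 + r] == t[j0 + r]:
                r += 1
            i, j = i0 - l + 1, j0 + r - 1
            if j - i + 1 >= 2 * p:
                R.add((i + 1, j + 1, p))             # report 1-based
    return R

def all_runs(S):
    return decreasing_runs(S) | decreasing_runs(S, rev=True)

assert all_runs("bananatree") == {(2, 6, 2), (9, 10, 1)}
```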
\section{Algorithm for Computing the LCEs} \label{sec:lcealgo}
In this section, we show how to precompute the LCEs required by \cref{alg:highlevel} in linear time and space. Our approach is asymmetric in the sense that we require different algorithms for L\=/LCEs and R\=/LCEs (whereas previous approaches usually compute L\=/LCEs by applying the R-LCE algorithm to the reverse text). However, for both directions we use similar properties of the Lyndon array that are shown in \cref{lemma:lyndonchainright,lemma:lyndonchainleft} and visualized in \cref{fig:lyndonchain}.
\begin{lemma}\label{lemma:lyndonchainright}
Let $i \in [1, n]$ and $j = \nss{i} \neq n + 1$. If $\rlce{i, j} \geq (j - i)$, then it holds $\rlce{j, j + (j - i)} = \rlce{i, j} - (j - i)$ and $\nss{j} = j + (j - i)$.
\begin{proof}
From $\rlce{i, j} \geq (j - i)$ follows $\rlce{i, j} = (j - i) + \rlce{j, j + (j - i)}$, which is equivalent to $\rlce{j, j + (j - i)} = \rlce{i, j} - (j - i)$.
It remains to be shown that $\nss{j} = j + (j - i)$.
Due to $\nss{i} = j$ it holds $S_i \succ S_j$.
Since ${S_i \succ S_j}$ and $\rlce{i, j} \geq (j - i)$, we have $S_{i + (j - i)} \succ S_{j + (j - i)}$, which implies $\nss{j} \leq j + (j - i)$.
Note that $\nss{i} = j$ and \cref{lemma:lyndonnss} imply that $S[i..j) = S[j..j+(j - i))$ is a Lyndon word. Thus it holds $\lyndarr{j} \geq (j - i)$, or equivalently $\nss{j} \geq j + (j - i)$.
\end{proof} \end{lemma}
\begin{lemma}\label{lemma:lyndonchainleft}
Let $i \in [1, n]$ and $j = \nss{i} \neq n + 1$. If $\llce{i, j} > (j - i)$, then it holds $\llce{i - (j - i), i} = \llce{i, j} - (j - i)$ and $\nss{i - (j - i)} = i$.
\begin{proof}
Analogous to \cref{lemma:lyndonchainright}.
\end{proof} \end{lemma}
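Both chaining properties (\cref{lemma:lyndonchainright,lemma:lyndonchainleft}) can be stress-tested exhaustively on short strings. The sketch below (our own illustration, using 0-based indices with $\textsf{nss}(i) = n$ as sentinel for $n + 1$) verifies them over all binary strings of length $8$:

```python
import itertools

def nss_naive(S):
    # 0-based: smallest j > i with suffix S[j:] lexicographically smaller, else n
    n = len(S)
    return [next((j for j in range(i + 1, n) if S[i:] > S[j:]), n) for i in range(n)]

def rlce(S, i, j):  # lcp of the suffixes starting at i and j
    l = 0
    while j + l < len(S) and S[i + l] == S[j + l]:
        l += 1
    return l

def llce(S, i, j):  # longest common suffix of the prefixes ending at i and j (inclusive)
    l = 0
    while l <= i and S[i - l] == S[j - l]:
        l += 1
    return l

for w in itertools.product("ab", repeat=8):
    S = "".join(w)
    n = len(S)
    nss = nss_naive(S)
    for i in range(n):
        j = nss[i]
        if j == n:
            continue
        d = j - i
        if rlce(S, i, j) >= d:      # chaining lemma (right)
            assert rlce(S, j, j + d) == rlce(S, i, j) - d and nss[j] == j + d
        if llce(S, i, j) > d:       # chaining lemma (left)
            assert llce(S, i - d, i) == llce(S, i, j) - d and nss[i - d] == i
```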
\subsection{Computing the R\=/LCEs} \label{sec:rlce}
First, we will briefly describe our general technique for computing LCEs, and our method of showing the linear time bound. Assume for this purpose that we want to compute $\ell = \rlce{i,j}$ with $i < j$. It is easy to see that we can determine $\ell$ by performing $\ell + 1$ individual character comparisons (by simultaneously scanning the suffixes $S_i$ and $S_j$ from left to right until we find a mismatch). Whenever we use this naive way of computing an LCE, we \emph{charge} one character comparison to each of the indices from the interval $[j, j + \ell)$. This way, we account for $\ell$ character comparisons. Since we want to compute $\orderof{n}$ R-LCE values in $\orderof{n}$ time, we can afford a constant time overhead (i.e.\ a constant number of unaccounted character comparisons) for each LCE computation. Thus, there is no need to charge the $(\ell + 1)$-th comparison to any index. At the time at which we want to compute~$\ell$, we may already know some lower bound $k \leq \ell$. In such cases, we simply skip the first $k$ character comparisons and compute $\ell = k + \rlce{i + k, j + k}$. This requires $\ell - k + 1$ character comparisons, of which we charge $\ell - k$ to the interval $[j + k..j+\ell)$.
\begin{figure}
\caption{An edge from text position $a$ to text position $b$ indicates $\nss{a} = b$.}
\label{fig:lyndonchain}
\label{fig:orderofcomputation}
\end{figure}
Ultimately, we will show that all R\=/LCE values $\rlce{i, j}$ with $i \in [1, n]$ and $j = \nss{i} \neq n + 1$ can be computed in a way such that each text position gets charged at most once, which results in the desired linear time bound. From now on, we refer to $i$ as the \emph{left index} and $j$ as the \emph{right index} of the R\=/LCE computation. Our algorithm computes the R\=/LCEs in the following order (a visualization is provided in \cref{fig:orderofcomputation}): We consider the possible right indices $j \in [2, n]$ one at a time and in \emph{increasing} order. For each right index $j$, we then consider the corresponding left indices $i$ with $\nss{i} = j$ in \emph{decreasing} order (we will see how to efficiently deduce this order from the Lyndon array later).
Assume that we are computing the R\=/LCEs in the previously described order, and let $\ell = \rlce{i, j}$ with $j = \nss{i} \neq n + 1$ be the next value that we want to compute. The set of indices that we have already considered as left indices for LCE computations is $I = \{ x \mid (\nss{x} < j) \lor ((\nss{x} = j) \land (i < x))\}$. For example, when we compute $\rlce{i_4, j_2}$ in \cref{fig:orderofcomputation} it holds $\{i_1, i_2, i_3\} \subseteq I$. At this point in time, the rightmost text position that we have already inspected is $\xhs{\rightarrow} = \max_{x \in I}(\nss{x} + \rlce{x, \nss{x}})$ if $I \neq \emptyset$, or $\xhs{\rightarrow} = 1$ otherwise.
Due to the nature of our charging method, we have not charged any indices from the interval $[\xhs{\rightarrow}, n]$ yet. Thus, in order to show that we can compute all LCEs without charging any index twice, it suffices to show how to compute $\ell = \rlce{i, j}$ without charging any index from the interval $[1, \xhs{\rightarrow})$. If $j \geq \xhs{\rightarrow}$ then we naively compute $\ell$ and charge the character comparisons to the interval $[j, j+\ell)$, thus only charging previously uncharged indices. The new value of $\xhs{\rightarrow}$ is $j + \ell$. If however $j < \xhs{\rightarrow}$, then the computation of $\ell$ depends on the previously computed LCEs, which we describe in the following.
Let $\ell' = \rlce{i', j'}$ with $j' = \nss{i'}$ be the \emph{most recently} computed R\=/LCE that satisfies $j' + \ell' = \xhs{\rightarrow}$.
Our strategy for computing $\ell$ depends on the position of $i$ relative to $i'$ and $j'$. First, note that $i \notin [i', j')$ because otherwise
\cref{lemma:nonintersecting} implies $j \leq j'$, which contradicts our order of computation. This leaves us with three possible cases (as before, a directed edge from text position $a$ to text position $b$ indicates $\nss{a} = b$):
\begin{center} \noindent\shortstack{\vphantom{\shortstack{{I}\\{I}\\{I}}}$S=\stringbox{}{ \tikzmark{s1}\stringidx{i}\quad\enskip \tikzmark{s2}\stringidx{i'}\quad\enskip \tikzmark{t2}\stringidx{j'}\quad \tikzmark{t1}\stringidx{j}\quad \stringidx{\xhs{\rightarrow}}}{} \begin{tikzpicture}[overlay, remember picture]
\foreach \x in {1,2} {
\path (s\x) ++(1pt, 0) node (s\x) {};
\path (t\x) ++(1pt, 0) node (t\x) {} ++(0,-10pt) node (m\x) {};
\draw[-Latex] (s\x) to ++(0, -10pt) to[out=270, in=270, looseness=.7] (m\x.center) to (t\x);
} \end{tikzpicture} $\\${\vphantom{\rhs}\oldstrut}^{\vphantom{\rhs}\oldstrut}$\\\shortstack{\textbf{\case{R1} \bm{i < i'}}\vphantom{j}\\(possibly $j' = j$)}}
\shortstack{$S=\stringbox{}{ \quad \tikzmark{s2}\stringidx{i'}\quad\enskip \tikzmark{t2}\tikzmark{s1}\stringidx{j' = i}\quad\enskip \tikzmark{t1}\stringidx{j}\quad \stringidx{\xhs{\rightarrow}}}{} \begin{tikzpicture}[overlay, remember picture]
\foreach \x in {1,2} {
\path (s\x) ++(1pt, 0) node (s\x) {};
\path (t\x) ++(1pt, 0) node (t\x) {} ++(0,-15pt) node (m\x) {};
}
\draw[-Latex] (s2) to ++(0, -15pt) to[out=270, in=270, looseness=.7] (m2.center) to (t2);
\draw[-Latex] (s1) ++(1pt, -10pt) to ++(0,-5pt) to[out=270, in=270, looseness=.7] (m1.center) to (t1); \end{tikzpicture} $\\${\vphantom{\rhs}\oldstrut}^{\vphantom{\rhs}\oldstrut}$\\\shortstack{\textbf{\case{R2} \bm{i = j'}}\\\phantom{(y}}}
\shortstack{$S=\stringbox{}{ \tikzmark{s2}\stringidx{i'}\quad\quad\enskip \tikzmark{t2}\stringidx{j'}\quad \tikzmark{s1}\stringidx{i}\enskip \tikzmark{t1}\stringidx{j}\quad \stringidx{\xhs{\rightarrow}}}{} \begin{tikzpicture}[overlay, remember picture]
\foreach \x in {1,2} {
\path (s\x) ++(1pt, 0) node (s\x) {};
\path (t\x) ++(1pt, 0) node (t\x) {} ++(0,-10pt) node (m\x) {};
\draw[-Latex] (s\x) to ++(0, -10pt) to[out=270, in=270, looseness=.7] (m\x.center) to (t\x);
} \end{tikzpicture} $\\${\vphantom{\rhs}\oldstrut}^{\vphantom{\rhs}\oldstrut}$\\\shortstack{\textbf{\case{R3} \bm{i > j'}}\\\phantom{(y}}} \end{center}
Now we explain the cases in detail. Each case is accompanied by a schematic drawing. We strongly advise the reader to study the drawings alongside the description, since they are essential for an easy understanding of the matter.
\newcommand{\alphabox}[1]{\stringbox[.6em]{\stringidx{#1}}{\alpha}{}} \newcommand{\betabox}[1]{\stringbox[.3em]{}{\beta}{}}
\newtcolorbox{itembox}[1][]{enhanced jigsaw, breakable=true, left=0mm,top=1mm,right=0mm,bottom=1.5mm, interior hidden, sharp corners=all, colframe=black!30!white,nobeforeafter=,
#1}
\newenvironment{nitembox}{\parindent0pt{\textcolor{black!30!white}{\hrulefill}\\[.5\baselineskip]}}{}
\begin{itembox}
\case{R1} \bm{i < i'} (and $j' \leq j < \xhs{\rightarrow}$)\textbf{.}
$\absolute{\alpha} = j - j'$, $\absolute{\beta} = \xhs{\rightarrow} - j$
$\stringstrut S = \stringbox[1.75mm]{}{}{} \tikzmark{m1} \stringbox[.3em]{\stringidx{i}}{\beta}{} \stringbox[.5mm]{}{\gamma}{} \stringbox[1.75mm]{}{}{} \tikzmark{m2} \alphabox{i'}\stringbox[.3em]{\stringidx{\ \ (i' + j - j')}}{\beta}{} \stringbox[3mm]{}{}{} \tikzmark{m3} \alphabox{j'} \tikzmark{m4} \stringbox[.3em]{\stringidx{j}}{\beta}{} \stringbox[.5mm]{\stringidx{\xhs{\rightarrow}}}{\gamma}{} \stringbox[1.75mm]{}{}{} \begin{tikzpicture}[overlay, remember picture]
\foreach \x in {1,2,3,4} {
\path (m\x) ++(3pt,-0pt) node (m\x) {};
}
\draw (m1) edge[looseness = .5, out = 270, in = 270, -Latex] (m4);
\draw (m2) edge[looseness = .6, out = 270, in = 270, -Latex] (m3);
\end{tikzpicture}$
$\ell' = \absolute{\alpha\beta}$, $\ell = \absolute{\beta\gamma}$
Due to $i < (i' + j - j') < j = \nss{i}$ we have $S_j \prec S_{i} \prec S_{i' + j - j'}$.
From \cref{lemma:lex-deduce-lce} it follows that $\xhs{\rightarrow} - j = \rlce{i' + j - j', j} \leq \rlce{i, j} = \ell$, i.e.\ both $S_i$ and $S_j$ start with~$\beta$.
Since now we know a lower bound $\xhs{\rightarrow} - j \leq \ell$ of the desired LCE value, we can skip character comparisons during its computation.
Later, we will see that the same bound also holds for most of the other cases. Generally, whenever we can show $\xhs{\rightarrow} - j \leq \ell$ we use the following strategy.
We compute ${\ell = (\xhs{\rightarrow} - j) + \rlce{i + (\xhs{\rightarrow} - j), \xhs{\rightarrow}}}$ using $\ell - (\xhs{\rightarrow} - j) + 1$ character comparisons, of which we charge $\ell - (\xhs{\rightarrow} - j)$ to the interval $[\xhs{\rightarrow},j+\ell)$.
Thus we only charge previously uncharged positions.
We continue with $i' \gets i$, $j' \gets j$, $\ell' \gets \ell$, and $\xhs{\rightarrow} \gets j + \ell$.
\end{itembox}
\begin{itembox}
\case{R2} \bm{i = j'}\textbf{.} We divide this case into two subcases.
{\textcolor{black!30!white}{\hrulefill}\\[-.5\baselineskip]}
\case{R2a} \bm{\ell' < j' - i'}.
$\absolute{\alpha} = j - j'$, $\absolute{\beta} = \xhs{\rightarrow} - j$
$\stringstrut S = \stringbox[.2cm]{}{}{} \tikzmark{m1} \stringbox[.35cm]{\stringidx{i'}}{\alpha}{} \stringbox[.2cm]{\stringidx{(i' + j - i)}}{\beta}{} \stringbox[.3cm]{}{}{} \tikzmark{m2}\tikzmark{m3} \stringbox[.35cm]{\stringidx{j' = i}}{\alpha}{} \tikzmark{m4} \stringbox[.2cm]{\stringidx{j}}{\beta}{} \stringbox[.2cm]{\stringidx{\xhs{\rightarrow}}}{}{} \begin{tikzpicture}[overlay, remember picture]
\path (m2) ++(-1pt, 0) node (m2) {};
\foreach \x in {1,2,3,4} {
\path (m\x) ++(3pt,-0pt) node (m\x) {} ++(0, -12pt) node (im\x) {};
}
\path (m3) ++(0, -6.5pt) node (m3) {};
\draw[-Latex] (m1) to (im1.center) to[looseness = .5, out = 270, in = 270, -Latex] (im2.center) to (m2);
\draw[-Latex] (m3) to (im3.center) to[looseness = .5, out = 270, in = 270, -Latex] (im4.center) to (m4);
\end{tikzpicture}$
\phantom{$\ell = \ell' + \absolute{\beta}$}
From $j < \xhs{\rightarrow}$ we get $j - i < \xhs{\rightarrow} - i = \ell'$, which together with $\ell' < j' - i'$ yields $i' + j - i < j' = i$.
Therefore, $\nss{i'} = i$ and \cref{def:xss} imply $S_{i} \prec S_{i' + j - i}$.
Due to $\nss{i} = j$ we also have $S_j \prec S_{i}$, such that it holds $S_j \prec S_{i} \prec S_{i' + j - i}$.
It is easy to see that $S_{i' + j - i}$ and $S_j$ share a prefix $\beta$ of length $\rlce{i' + j - i, j} = \xhs{\rightarrow} - j$.
In fact, also $S_i$ has prefix $\beta$ because \cref{lemma:lex-deduce-lce} implies that $\rlce{i' + j - i, j} \leq \rlce{i, j} = \ell$.
Thus it holds $\xhs{\rightarrow} - j \leq \ell$, which allows us to use the strategy from Case R1.
{\textcolor{black!30!white}{\hrulefill}\\[-.5\baselineskip]}
\case{R2b} \bm{\ell' \geq j' - i'}.
$\absolute{\beta} = j' - i'$, $\ell = \ell' - \absolute{\beta}$
$\stringstrut S = \stringbox[.5cm]{}{}{} \tikzmark{m1} \stringbox[.5cm]{\stringidx{i'}}{\beta}{} \tikzmark{m2}\tikzmark{m3} \stringbox[.5cm]{\stringidx{j' = i}}{\beta}{} \tikzmark{m4} \stringbox[.4cm]{\stringidx{j}}{}{} \stringbox[.2cm]{\stringidx{\xhs{\rightarrow}}}{}{} \begin{tikzpicture}[overlay, remember picture]
\path (m2) ++(-1pt, 0) node (m2) {};
\foreach \x in {1,2,3,4} {
\path (m\x) ++(3pt,-0pt) node (m\x) {} ++(0, -12pt) node (im\x) {};
}
\path (m3) ++(0, -6.5pt) node (m3) {};
\draw[-Latex] (m1) to (im1.center) to[looseness = .5, out = 270, in = 270, -Latex] (im2.center) to (m2);
\draw[-Latex] (m3) to (im3.center) to[looseness = .5, out = 270, in = 270, -Latex] (im4.center) to (m4);
\end{tikzpicture}$
\phantom{$\ell = \ell' + \absolute{\beta}$}
Due to $\ell' \geq j' - i'$, \cref{lemma:lyndonchainright} implies $j = i + (j' - i')$ and $\ell = \ell' - (j' - i')$. Since $i'$, $j'$, and $\ell'$ are known, we can compute $\ell$ in constant time without performing any character comparisons. We continue with $i' \gets i$, $j' \gets j$, and $\ell' \gets \ell$ (leaving $\xhs{\rightarrow}$ unchanged).
\end{itembox}
\begin{itembox}
\case{R3} \bm{i > j'}.
This is the most complicated case, and it is best explained by dividing it into three subcases.
Let $d = j' - i'$, $i'' = i - d$, $j'' = j - d$, and $\ell'' = \rlce{i'', j''}$.
{\textcolor{black!30!white}{\hrulefill}\\[-.5\baselineskip]}
\case{R3a} \textbf{\boldmath$\nss{i''} \neq j''$\unboldmath:}
$\absolute{\alpha} = \ell'$, $\absolute{\beta} = \absolute{\gamma} = \xhs{\rightarrow} - j$
$\stringstrut S = \stringbox[1mm]{}{}{} \stringbox[7.5mm]{\mathrlap{ \hspace{3mm}\mathrlap{\stringidx{\ i''}} \phantom{\stringbox[.75mm]{}{\beta}{} \stringbox[0mm]{}{\gamma}{}} \hspace{2mm}\mathrlap{\stringidx{j''}} }\stringidx{i'}}{\alpha}{} \stringbox[5mm]{\stringidx{\ \ \ \ (i' + \ell')}}{}{} \stringbox[7.5mm]{\mathrlap{ \hspace{3mm}\mathrlap{\stringidx{i}} \phantom{\stringbox[.75mm]{}{\beta}{} \stringbox[0mm]{}{\gamma}{}} \hspace{2mm}\mathrlap{\stringidx{j}} }\stringidx{j'}}{\alpha}{} \stringbox[1mm]{\stringidx{\xhs{\rightarrow}}}{}{}$
$\ell'' \geq \absolute{\beta}$, $\ell \geq \absolute{\beta}$
$\stringstrut \mathrlap{ \hspace{3mm}\tikzmark{m2}\stringbox[.75mm]{}{\beta}{}\phantom{\stringbox[0mm]{}{\gamma}{}} \hspace{2mm}\stringbox[.75mm]{}{\mathrlap{\gamma}\phantom{\beta}}{}\quad\tikzmark{m3} \begin{tikzpicture}[overlay, remember picture]
\foreach \x in {2,3} {
\path (m\x) ++(3pt,-0pt) node (m\x) {};
}
\draw (m2) edge[looseness=.6, out = 270, in = 270, -Latex] (m3); \end{tikzpicture} } \tikzmark{m1}\phantom{\stringbox[7.5mm]{\stringidx{i'}}{\alpha}{} \stringbox[5mm]{}{}{}} \mathrlap{ \hspace{3mm}\tikzmark{m5}\stringbox[.75mm]{}{\beta}{} \phantom{\stringbox[0mm]{}{\gamma}{}} \hspace{2mm}\tikzmark{m6}\stringbox[.75mm]{}{\mathrlap{\gamma}\phantom{\beta}}{} \begin{tikzpicture}[overlay, remember picture]
\foreach \x in {5,6} {
\path (m\x) ++(3pt,-0pt) node (m\x) {};
}
\draw (m5) edge[out = 270, in = 270, -Latex] (m6); \end{tikzpicture} } \tikzmark{m4}\phantom{\stringbox[7.5mm]{\stringidx{j'}}{\alpha}{} \stringbox[1mm]{}{}{}} \begin{tikzpicture}[overlay, remember picture]
\path (m1) ++(3pt,12pt) node (m1) {};
\path (m4) ++(3pt,-2pt) node (m4) {};
\draw[-Latex] (m1) to ++(0,-14pt) to[looseness = .7, out=270, in=270] (m4.center) to ++(0,11pt);
\end{tikzpicture}$
First, note that $S[i'..i' + \ell') = S[j'..\xhs{\rightarrow})$ implies $S[i..j) = S[i''..j'')$.
From $\nss{i} = j$ it follows that $S[i..j) = S[i''..j'')$ is a Lyndon word.
Thus, due to \cref{lemma:lyndonnss} and $\nss{i''} \neq j''$ it holds $\nss{i''} > j''$, which implies $S_{i''} \prec S_{j''}$.
Let $\beta = S[i''..i''+ \xhs{\rightarrow} - j) = S[i..i + \xhs{\rightarrow} - j)$ and let $\gamma = S[j''..i'+ \ell') = S[j..\xhs{\rightarrow})$.
From $S_{i''} \prec S_{j''}$ it follows that $\beta \preceq \gamma$, while $S_i \succ S_j$ implies $\beta \succeq \gamma$.
Thus it holds $\beta = \gamma$, and therefore $\rlce{i, j} \geq \absolute{\gamma} = \xhs{\rightarrow} - j$.
This means that we can use the strategy from Case R1.
{\textcolor{black!30!white}{\hrulefill}\\[-.5\baselineskip]}
\case{R3b} \textbf{\boldmath$\nss{i''} = j''$ and\\$(j'' + \ell'') < (i' + \ell')$\unboldmath:}
$\stringstrut S = \stringbox[1mm]{}{}{} \stringbox[7.5mm]{\mathrlap{ \hspace{4mm}\mathrlap{\stringidx{i''}} \phantom{\stringbox[.2em]{}{\beta}{}} \hspace{3mm}\mathrlap{\stringidx{j''}} }\stringidx{i'}}{\alpha}{} \stringbox[5mm]{\stringidx{\ \ \ (i' + \ell')}}{}{} \stringbox[7.5mm]{\mathrlap{ \hspace{4mm}\mathrlap{\stringidx{i}} \phantom{\stringbox[.2em]{}{\beta}{}} \hspace{3mm}\mathrlap{\stringidx{j}} }\stringidx{j'}}{\alpha}{} \stringbox[1mm]{\stringidx{\xhs{\rightarrow}}}{}{}$
$\absolute{\alpha} = \ell'$, $\absolute{\beta} = \ell'' = \ell$
$\stringstrut \mathrlap{ \hspace{4mm}\tikzmark{m2}\stringbox[.2em]{}{\beta}{} \hspace{3mm}\tikzmark{m3}\stringbox[.2em]{}{\beta}{} \begin{tikzpicture}[overlay, remember picture]
\foreach \x in {2,3} {
\path (m\x) ++(3pt,-0pt) node (m\x) {};
}
\draw (m2) edge[out = 270, in = 270, -Latex] (m3); \end{tikzpicture} } \tikzmark{m1}\phantom{\stringbox[7.5mm]{\stringidx{i'}}{\alpha}{} \stringbox[5mm]{}{}{}} \mathrlap{ \hspace{4mm}\tikzmark{m5}\stringbox[.2em]{}{\beta}{} \hspace{3mm}\tikzmark{m6}\stringbox[.2em]{}{\beta}{} \begin{tikzpicture}[overlay, remember picture]
\foreach \x in {5,6} {
\path (m\x) ++(3pt,-0pt) node (m\x) {};
}
\draw (m5) edge[out = 270, in = 270, -Latex] (m6); \end{tikzpicture} } \tikzmark{m4}\phantom{\stringbox[7.5mm]{\stringidx{j'}}{\alpha}{} \stringbox[1mm]{}{}{}} \begin{tikzpicture}[overlay, remember picture]
\path (m1) ++(3pt,12pt) node (m1) {};
\path (m4) ++(3pt,0pt) node (m4) {};
\draw[-Latex] (m1) to ++(0,-12pt) to[looseness = .7, out=270, in=270] (m4.center) to ++(0,9pt);
\end{tikzpicture}$
Due to $\ell'' = \rlce{i'', j''}$, there is a shared prefix $\beta = S[i''..i'' + \ell'') = S[j''..j'' + \ell'')$ between $S_{i''}$ and $S_{j''}$, and the first mismatch between the two suffixes is $S[i'' + \ell''] \neq S[j'' + \ell'']$.
Because of $(j'' + \ell'') < (i' + \ell')$, both the shared prefix and the mismatch are contained in $S[i'..i' + \ell')$ (i.e.\ in the first occurrence of $\alpha$).
If we consider the substring $S[j'..j' + \ell')$ instead (i.e.\ the second occurrence of $\alpha$), then $S_i$ and $S_j$ clearly also share the prefix $\beta = S[i..i + \ell'') = S[j..j + \ell'')$, with the first mismatch occurring at $S[i + \ell''] \neq S[j + \ell'']$.
Thus it holds $\ell = \ell''$.
Due to $\nss{i''} = j''$ and our order of R\=/LCE computations, we have already computed $\ell''$.
Therefore, we can simply assign $\ell \gets \ell''$ and continue without changing $i'$, $j'$, $\ell'$, and $\xhs{\rightarrow}$.
{\textcolor{black!30!white}{\hrulefill}\\[-.5\baselineskip]}
\case{R3c} \textbf{\boldmath$\nss{i''} = j''$ and\\$(j'' + \ell'') \geq (i' + \ell')$\unboldmath:}
$\stringstrut S = \stringbox[1mm]{}{}{} \stringbox[7.5mm]{\mathrlap{ \hspace{3mm}\mathrlap{\stringidx{\ i''}} \phantom{\stringbox[.75mm]{}{\beta}{} \stringbox[0mm]{}{\gamma}{}} \hspace{2mm}\mathrlap{\stringidx{j''}} }\stringidx{i'}}{\alpha}{} \stringbox[5mm]{\stringidx{\ \ \ \ (i' + \ell')}}{}{} \stringbox[7.5mm]{\mathrlap{ \hspace{3mm}\mathrlap{\stringidx{i}} \phantom{\stringbox[.75mm]{}{\beta}{} \stringbox[0mm]{}{\gamma}{}} \hspace{2mm}\mathrlap{\stringidx{j}} }\stringidx{j'}}{\alpha}{} \stringbox[1mm]{\stringidx{\xhs{\rightarrow}}}{}{}$
\smash{$\absolute{\alpha} = \ell'$, $\absolute{\beta} = \xhs{\rightarrow} - j$, $\absolute{\beta\gamma} = \ell''$}
$\stringstrut \mathrlap{ \hspace{3mm}\tikzmark{m2}\stringbox[.75mm]{}{\beta}{}\stringbox[0mm]{}{\gamma}{} \hspace{2mm}\tikzmark{m3}\stringbox[.75mm]{}{\beta}{}\stringbox[0mm]{}{\gamma}{} \begin{tikzpicture}[overlay, remember picture]
\foreach \x in {2,3} {
\path (m\x) ++(3pt,-0pt) node (m\x) {};
}
\draw (m2) edge[out = 270, in = 270, -Latex] (m3); \end{tikzpicture} } \tikzmark{m1}\phantom{\stringbox[7.5mm]{\stringidx{i'}}{\alpha}{} \stringbox[5mm]{}{}{}} \mathrlap{ \hspace{3mm}\tikzmark{m5}\stringbox[.75mm]{}{\beta}{} \phantom{\stringbox[0mm]{}{\gamma}{}} \hspace{2mm}\tikzmark{m6}\stringbox[.75mm]{}{\beta}{} \begin{tikzpicture}[overlay, remember picture]
\foreach \x in {5,6} {
\path (m\x) ++(3pt,-0pt) node (m\x) {};
}
\draw (m5) edge[out = 270, in = 270, -Latex] (m6); \end{tikzpicture} } \tikzmark{m4}\phantom{\stringbox[7.5mm]{\stringidx{j'}}{\alpha}{} \stringbox[1mm]{}{}{}} \begin{tikzpicture}[overlay, remember picture]
\path (m1) ++(3pt,12pt) node (m1) {};
\path (m4) ++(3pt,-2pt) node (m4) {};
\draw[-Latex] (m1) to ++(0,-14pt) to[looseness = .7, out=270, in=270] (m4.center) to ++(0,11pt);
\end{tikzpicture}$
$\ell \geq \absolute{\beta}$
This situation is similar to Case R3b.
There is a shared prefix $\beta = {S[i''..i'' + \xhs{\rightarrow} - j)} = S[j''..i' + \ell')$ between the suffixes $S_{i''}$ and $S_{j''}$.
They may share an even longer prefix $\beta\gamma$, but only the first $\absolute{\beta} = \xhs{\rightarrow} - j$ symbols of their LCP are contained in $S[i'..i' + \ell')$ (i.e.\ in the first occurrence of $\alpha$).
If we consider the substring $S[j'..j' + \ell')$ instead (i.e.\ the second occurrence of $\alpha$), then $S_i$ and $S_j$ clearly also share at least the prefix $\beta = S[i..i + \xhs{\rightarrow} - j) = S[j..\xhs{\rightarrow})$.
Thus it holds $\xhs{\rightarrow} - j \leq \ell$, and we can use the strategy from Case R1. \end{itembox}
We have shown how to compute $\ell$ without charging any index twice. It follows that the total number of character comparisons for all R\=/LCEs is $\orderof{n}$.
\subparagraph*{A Simple Algorithm for R\=/LCEs.} While the detailed distinction between the six subcases helps to show the correctness of our approach, the computation can be implemented in a significantly simpler way (see \cref{alg:rlce}). At all times, we keep track of $j'$, $\xhs{\rightarrow}$ and the distance $d = j' - i'$ (line~\ref{alg:rlce:init}). We consider the indices $j \in [2, n]$ in increasing order (line~\ref{alg:rlce:outerloop}). For each index $j$, we then consider the indices $i$ with $\nss{i} = j$ in decreasing order (line~\ref{alg:rlce:innerloop}). Each time we want to compute an R\=/LCE value $\ell = \rlce{i, j}$, we first check whether Case R3b applies~(line~\ref{alg:rlce:case3a}). If it does, then we simply copy the previously computed R\=/LCE value $\rlce{i - d, j - d}$ (line \ref{alg:rlce:copylce}). Otherwise, we either compute the LCE naively (if $j \geq \xhs{\rightarrow}$), or we have to apply the strategy from Case R1 (since all other cases except for Case R2b use this strategy; in Case R2b it holds $\xhs{\rightarrow} - j = \ell$, which means that it can also be solved with the strategy from Case R1). If $j \geq \xhs{\rightarrow}$ then in lines \ref{alg:rlce:computelce1}--\ref{alg:rlce:computelce2} we have $k = 0$, and thus we naively compute $\rlce{i, j}$ by scanning. If however $j < \xhs{\rightarrow}$, then we have $k = \xhs{\rightarrow} - j$, and we skip $k$ character comparisons. In any case, we update the values $j'$, $\xhs{\rightarrow}$, and $d$ accordingly (line \ref{alg:rlce:updatevars}).
\begin{algorithm} \newcommand{\naiverlce}[1]{\textsc{naive-scan-}\rlce{#1}} \begin{algorithmic}[1]
\Require{String $S$ of length $n$ and its NSS array \textsf{nss}.}
\Ensure{R\=/LCE value $\rlce{i, \nss{i}}$ for each index $i \in [1, n]$ with $\nss{i} \neq n + 1$.}
\State $j' \gets 0;\ \ \xhs{\rightarrow} \gets 1;\ \ d \gets 0$\label{alg:rlce:init}
\For{$j \in [2, n]$ in increasing order}\label{alg:rlce:outerloop}
\For{$i$ \textbf{with} $\nss{i} = j \neq n + 1$ in decreasing order}\label{alg:rlce:innerloop}
\If{$\left(\parbox[m]{4.2cm}{
\begin{alignat*}{2}
\vphantom{\rhs}\oldstrut &i, j \in (j', \xhs{\rightarrow})\\[-.25\baselineskip]
\vphantom{\rhs}\oldstrut\land\ & \nss{i - d} = j - d\\[-.25\baselineskip]
\vphantom{\rhs}\oldstrut\land\ & j + \rlce{i - d, j - d} < \xhs{\rightarrow}
\end{alignat*}
}\right)$}\label{alg:rlce:case3a}
\State $\rlce{i, j} \gets \rlce{i - d, j - d}$\label{alg:rlce:copylce}
\Else
\State $\vphantom{\rhs}\oldstrut k \gets \max(\xhs{\rightarrow}, j) - j$ \label{alg:rlce:computelce1}
\State $\vphantom{\rhs}\oldstrut\rlce{i, j} \gets k + \naiverlce{i + k, j + k}$ \label{alg:rlce:computelce2}
\State $\vphantom{\rhs}\oldstrut\mathrlap{j'}\phantom{\xhs{\rightarrow}} \gets j;\ \ \xhs{\rightarrow} \gets j + \rlce{i, j};\ \ \mathrlap{d}\phantom{\xhs{\rightarrow}} \gets j - i$ \label{alg:rlce:updatevars}
\EndIf
\EndFor
\EndFor \end{algorithmic} \caption{Compute all R\=/LCEs.} \label{alg:rlce} \end{algorithm}
The correctness of the algorithm follows from the description of Cases R1--R3. Since for each left index $i$ we have to store at most one R\=/LCE, we can simply maintain the LCEs in a length-$n$ array, where the $i$-th entry is $\rlce{i, \nss{i}}$. This way, we use linear space and can access the R\=/LCE that is required in line~\ref{alg:rlce:copylce} in constant time. Apart from the at most $n$ character comparisons that we charge to the indices, we only need a constant number of additional primitive operations per computed R\=/LCE. The order of iteration can be realized by first generating all $(i, \nss{i})$-pairs, and then using a linear time radix sorter to sort the pairs in increasing order of their second component and decreasing order of their first component. We have shown:
\begin{lemma}\label{lemma:linearrlce}
Given a string of length $n$ and its NSS array \textnormal{\textsf{nss}}, we can compute $\rlce{i, \nss{i}}$ for all indices $i \in [1, n]$ with $\nss{i} \neq n + 1$ in $\orderof{n}$ time and space. \end{lemma}
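For illustration, \cref{alg:rlce} can be transcribed into Python as follows (our own sketch; \texttt{all\_rlce} is a hypothetical name, indices are $0$-based with sentinel value $n$, and a library sort stands in for the radix sorter, which costs $\orderof{n \log n}$ instead of $\orderof{n}$ but does not affect correctness):

```python
def all_rlce(s, nss):
    """Return lce with lce[i] = rlce(i, nss[i]) if nss[i] != n, else None.
    Direct transcription of the algorithm above; nss is 0-based with
    sentinel value n = len(s)."""
    n = len(s)
    lce = [None] * n
    jp, xhs, d = -1, 0, 0   # j', the watermark x->, and d = j' - i'
    # right indices j in increasing, left indices i in decreasing order
    for i in sorted((x for x in range(n) if nss[x] < n),
                    key=lambda x: (nss[x], -x)):
        j = nss[i]
        if jp < i and j < xhs and nss[i - d] == j - d and j + lce[i - d] < xhs:
            lce[i] = lce[i - d]      # Case R3b: copy an earlier R-LCE value
        else:
            ell = max(xhs, j) - j    # lower bound: skip verified symbols
            while i + ell < n and j + ell < n and s[i + ell] == s[j + ell]:
                ell += 1
            lce[i] = ell
            jp, xhs, d = j, j + ell, j - i
    return lce
```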
\subsection{Computing the L\=/LCE Values} \label{sec:llce}
Our solution for the L\=/LCEs is similar to the one for R\=/LCEs, but differs in subtle details. We generally compute $\ell = \llce{i, j}$ by simultaneously scanning the prefixes $S[1..i]$ and $S[1..j]$ from right to left until we find the first mismatch. This takes $\ell + 1$ character comparisons, of which we charge $\ell$ comparisons to the interval $(i - \ell, i]$. As before, if some lower bound $k \leq \ell$ is known then we skip $k$ character comparisons. In this case, we compute the L\=/LCE as $\ell = k + \llce{i - k, j - k}$, and charge $\ell - k$ comparisons to the interval $(i - \ell, i - k]$.
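Analogously to the right extensions, the naive L\=/LCE scan with a known lower bound $k$ can be sketched as follows (our own Python helper \texttt{llce\_from}, using $0$-based inclusive indices):

```python
def llce_from(s, i, j, k=0):
    """Length of the longest common suffix of the prefixes s[0..i] and
    s[0..j] (0-based, inclusive), skipping the first k comparisons when
    k is a known lower bound on the result."""
    ell = k
    while i - ell >= 0 and j - ell >= 0 and s[i - ell] == s[j - ell]:
        ell += 1
    return ell
```

For instance, \texttt{llce\_from("abab", 1, 3)} returns $2$, the length of the common suffix of the prefixes \texttt{"ab"} and \texttt{"abab"}.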
Again, we will show how to compute all values $\llce{i, \nss{i}}$ with $i \in [1, n]$ and $\nss{i} \neq n + 1$ such that each index gets charged at most once. In contrast to the more complex R\=/LCE iteration order, we can simply compute the L\=/LCE values in \emph{decreasing} order of~$i$. Thus, when we want to compute $\ell = \llce{i, j}$ with $j = \nss{i} \neq n + 1$, we have already considered the indices $I = \{ x \mid x \in (i, n] \land \nss{x} \neq n + 1 \}$ as left indices of L\=/LCE computations. At this point, the leftmost text position that we have already inspected is $\xhs{\leftarrow} = \min_{x \in I}(x - \llce{x, \nss{x}})$ if $I \neq \emptyset$, or $\xhs{\leftarrow} = n$ otherwise.
Due to our charging method, we have not charged any index from the interval $[1, \xhs{\leftarrow}]$ yet. Thus, we only have to show how to compute $\ell$ without charging indices from $(\xhs{\leftarrow}, n]$. Let $\ell' = \llce{i', j'}$ be the most recently computed L\=/LCE that satisfies $i' - \ell' = \xhs{\leftarrow}$. If $i \leq \xhs{\leftarrow}$ then we compute $\ell$ naively and charge the character comparisons to the interval $(i - \ell, i]$ (thus only charging previously uncharged indices). If however $i > \xhs{\leftarrow}$, then our strategy is more complicated. Before explaining it in detail, we show three important properties that hold in the present situation.
First, we show that ${i \geq i' - (j' - i')}$. Assume the opposite (as visualized in \cref{fig:llce:sup1}); then ${\xhs{\leftarrow} = i' - \ell' < i}$ yields ${\ell' > j' - i'}$. Thus, \cref{lemma:lyndonchainleft} implies ${\nss{i' - (j' - i')} = i'}$ (dashed edge) and ${\llce{i' - (j' - i'), i'}} = \ell' - (j' - i')$. Due to our order of computation and $i < i' - (j' - i')$ we must have already computed this L\=/LCE. However, it holds $i' - (j' - i') - \llce{i' - (j' - i'), i'} =
\xhs{\leftarrow}$, which contradicts the fact that $\ell' = \llce{i', j'}$ is the \emph{most recently} computed L\=/LCE with $i' - \ell' = \xhs{\leftarrow}$.
Next, we show that $j \leq i'$. First, note that $j \notin (i', j')$, since due to $i < i'$ we would otherwise contradict \cref{lemma:nonintersecting}. Thus we only have to show ${j < j'}$. Assume for this purpose that $j \geq j'$ (as visualized in \cref{fig:llce:sup2}). From ${j' - i' + i} \in (i, \nss{i})$ and \cref{def:xss} it follows that ${S_i \prec S_{j' - i' + i}}$. Because of $\llce{i', j'} > (i' - i)$ it holds $S[i..i'] = S[j' - i' + i..j'] (= \beta)$. Thus ${S_i \prec S_{j' - i' + i}}$ implies $S_{i'} \prec S_{j'}$, which contradicts the fact that $\nss{i'} = j'$.
Lastly, let $d = j' - i'$, $i'' = i + d$, and $j'' = j + d$ (as visualized in \cref{fig:llce:sup3}). Now we show that $\nss{i''} = j''$ (dashed edge in the figure). Because of $\alpha = S(\xhs{\leftarrow}..i'] = S(j' - \ell'..j']$ it holds $S[i..j) = S[i''..j'')$. From $\nss{i} = j$ and \cref{lemma:lyndonnss} it follows that $S[i''..j'')$ is a Lyndon word, and thus $\nss{i''} \geq j''$. We have already shown that $i \geq i' - (j' - i')$, which implies $i'' \geq i'$. Due to $\nss{i'} = j'$ and $i'' \in [i', j')$ it follows from \cref{lemma:nonintersecting} that $\nss{i''} \leq j'$. Now assume $\nss{i''} \in (j'', j']$; then $S[i''..\nss{i''}) = S[i..j + (\nss{i''} - j''))$ is a Lyndon word, which contradicts the fact that $S[i..j)$ is the longest Lyndon word starting at position $i$. Thus, we have ruled out all possible values of $\nss{i''}$ except for $j''$.
\begin{figure}
\caption{Illustration of the proofs of the three properties in \cref{sec:llce}.}
\label{fig:llce:sup1}
\label{fig:llce:sup2}
\label{fig:llce:sup3}
\label{fig:llce:supplement}
\end{figure}
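The three properties can also be checked by brute force. The following Python sketch (our own naive simulation, $0$-based indices) recomputes $\xhs{\leftarrow}$ and the most recent boundary L\=/LCE while iterating $i$ in decreasing order, and asserts the properties whenever $i > \xhs{\leftarrow}$:

```python
# Brute-force verification of the three properties above
# (illustrative only; naive nss and llce computations, 0-based indices).

from itertools import product

def check_llce_properties(s):
    n = len(s)
    nss = [next((j for j in range(i + 1, n) if s[j:] < s[i:]), n)
           for i in range(n)]

    def llce(i, j):  # longest common suffix of s[0..i] and s[0..j]
        ell = 0
        while i - ell >= 0 and j - ell >= 0 and s[i - ell] == s[j - ell]:
            ell += 1
        return ell

    state, xl = None, n - 1       # (i', j', l') and the watermark x<-
    for i in range(n - 1, -1, -1):
        j = nss[i]
        if j == n:
            continue
        if state is not None and i > xl:
            ip, jp, lp = state
            d = jp - ip
            assert i >= ip - d            # property 1: i >= i' - (j' - i')
            assert j <= ip                # property 2: j <= i'
            assert nss[i + d] == j + d    # property 3: nss(i'') = j''
        ell = llce(i, j)
        if i - ell <= xl:                 # most recent L-LCE attaining x<-
            xl, state = i - ell, (i, j, ell)

for m in range(1, 10):
    for t in product("ab", repeat=m):
        check_llce_properties("".join(t))
```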
Now we show how to compute $\ell$. We keep using the definition of $i''$ and $j''$ from the previous paragraph. Furthermore, let $\ell'' = \llce{i'', j''}$. There are two possible cases.
\begin{itembox}
\case{L1} \bm{(i'' - \ell'') > (j' - \ell')}\textbf{.}
$\stringstrut S = \stringbox[1mm]{}{}{\stringidx{\xhs{\leftarrow}}} \stringbox[7.5mm]{}{\alpha}{\stringidx{\ i'} \mathllap{ \mathllap{\stringidx{i}} \phantom{\stringbox[.2em]{}{\beta}{}\hspace{4mm}} \mathllap{\stringidx{j}}\hspace{3mm} }} \stringbox[5.5mm]{}{}{\stringidx{(j' - \ell')\ \ \ }} \stringbox[7.5mm]{}{\alpha}{\stringidx{\ \,j'} \mathllap{ \mathllap{\stringidx{\ i''}} \phantom{\stringbox[.2em]{}{\beta}{}\hspace{4mm}} \mathllap{\stringidx{j''}}\hspace{3mm} }} \stringbox[1mm]{}{}{}$
$\ell' = \absolute{\alpha}$, $\ell = \ell'' = \absolute{\beta}$
$\stringstrut \mathllap{ \stringbox[.2em]{}{\beta}{}\tikzmark{m2}\hspace{4mm} \stringbox[.2em]{}{\beta}{}\tikzmark{m3}\hspace{3mm} \begin{tikzpicture}[overlay, remember picture]
\foreach \x in {2,3} {
\path (m\x) ++(-3pt,-0pt) node (m\x) {};
}
\draw (m2) edge[out = 270, in = 270, -Latex] (m3); \end{tikzpicture}} \tikzmark{m1} \phantom{\stringbox[5.5mm]{}{}{}} \phantom{\stringbox[7.5mm]{}{\alpha}{}} \mathllap{ \stringbox[.2em]{}{\beta}{}\tikzmark{m5}\hspace{4mm} \stringbox[.2em]{}{\beta}{}\tikzmark{m6}\hspace{3mm} \begin{tikzpicture}[overlay, remember picture]
\foreach \x in {5,6} {
\path (m\x) ++(-3pt,-0pt) node (m\x) {};
}
\draw (m5) edge[out = 270, in = 270, -Latex] (m6); \end{tikzpicture}} \tikzmark{m4} \phantom{\stringbox[1mm]{}{}{}} \begin{tikzpicture}[overlay, remember picture]
\path (m1) ++(-3pt,12pt) node (m1) {};
\path (m4) ++(-3pt,-5pt) node (m4) {};
\draw[-Latex] (m1) to ++(0,-12pt) to[looseness = .7, out=270, in=270] (m4.center) to ++(0,14pt);
\end{tikzpicture}$
Due to $\ell'' = \llce{i'', j''}$, the prefixes $S[1..i'']$ and $S[1..j'']$ share the suffix $\beta = S(i'' - \ell''..i''] = S(j'' - \ell''..j'']$, and the first (from the right) mismatch between these prefixes is $S[i'' - \ell''] \neq S[j'' - \ell'']$.
Both the shared suffix and the mismatch are contained in $S(j' - \ell'..j']$ (i.e.\ in the right occurrence of $\alpha$). If we consider the substring $S(\xhs{\leftarrow}..i']$ instead (i.e.\ the left occurrence of $\alpha$), then $S[1..i]$ and $S[1..j]$ clearly also share the suffix $\beta = S(i - \ell''..i] = S(j - \ell''..j]$, with the first mismatch occurring at $S[i - \ell''] \neq S[j - \ell'']$. Thus it holds $\ell = \ell''$. Due to $\nss{i''} = j''$ and our order of L\=/LCE computations, we have already computed $\ell''$. Therefore, we can simply assign $\ell \gets \ell''$ and continue without changing $i'$, $j'$, $\ell'$, and $\xhs{\leftarrow}$.
(Note that possibly $i'' \neq i' \land j'' = j'$. We provide a sketch in \cref{appendix}, \cref{fig:NL1}.)
\end{itembox}
\begin{itembox}
\case{L2} \bm{(i'' - \ell'') \leq (j' - \ell')}\textbf{.}
$\stringstrut S = \stringbox[1mm]{}{}{\stringidx{\xhs{\leftarrow}}} \stringbox[7.5mm]{}{\alpha}{\stringidx{\ i'} \mathllap{ \mathllap{\stringidx{i}} \phantom{\stringbox[.28em]{}{\beta}{}\hspace{5mm}} \mathllap{\stringidx{j}}\hspace{3mm} }} \stringbox[5.5mm]{}{}{\stringidx{(j' - \ell')\ \ \ }} \stringbox[7.5mm]{}{\alpha}{\stringidx{\ \,j'} \mathllap{ \mathllap{\stringidx{\ i''}} \phantom{\stringbox[.28em]{}{\beta}{}\hspace{5mm}} \mathllap{\stringidx{j''}}\hspace{3mm} }} \stringbox[1mm]{}{}{}$
$\ell' = \absolute{\alpha}$, $\ell'' = \absolute{\beta\gamma}$, $\ell \geq \absolute{\beta}$
$\stringstrut \mathllap{ \stringbox[.28em]{}{\beta}{}\tikzmark{m2}\hspace{5mm} \stringbox[.28em]{}{\beta}{}\tikzmark{m3}\hspace{3mm} \begin{tikzpicture}[overlay, remember picture]
\foreach \x in {2,3} {
\path (m\x) ++(-3pt,-0pt) node (m\x) {};
}
\draw (m2) edge[out = 270, in = 270, -Latex] (m3); \end{tikzpicture}} \tikzmark{m1} \phantom{\stringbox[5.5mm]{}{}{}} \phantom{\stringbox[7.5mm]{}{\alpha}{}} \mathllap{ \mathllap{\stringbox[0mm]{}{\gamma}{}} \stringbox[.28em]{}{\beta}{}\tikzmark{m5}\hspace{5mm} \mathllap{\stringbox[0mm]{}{\gamma}{}} \stringbox[.28em]{}{\beta}{}\tikzmark{m6}\hspace{3mm} \begin{tikzpicture}[overlay, remember picture]
\foreach \x in {5,6} {
\path (m\x) ++(-3pt,-0pt) node (m\x) {};
}
\draw (m5) edge[out = 270, in = 270, -Latex] (m6); \end{tikzpicture}} \tikzmark{m4} \phantom{\stringbox[1mm]{}{}{}} \begin{tikzpicture}[overlay, remember picture]
\path (m1) ++(-3pt,12pt) node (m1) {};
\path (m4) ++(-3pt,-5pt) node (m4) {};
\draw[-Latex] (m1) to ++(0,-12pt) to[looseness = .7, out=270, in=270] (m4.center) to ++(0,14pt);
\end{tikzpicture}$
This situation is similar to Case L1. There is a shared suffix $\beta = S(j' - \ell'..i''] = S(j'' - (i - \xhs{\leftarrow})..j'']$ between the prefixes $S[1..i'']$ and $S[1..j'']$.
They may share an even longer suffix $\gamma\beta$, but only the rightmost $\absolute{\beta} = i - \xhs{\leftarrow}$ symbols of this suffix are contained in $S(j' - \ell'..j']$ (i.e.\ in the right occurrence of $\alpha$). If we consider the substring $S(\xhs{\leftarrow}..i']$ instead (i.e.\ the left occurrence of $\alpha$), then $S[1..i]$ and $S[1..j]$ clearly also share the suffix $\beta = S(\xhs{\leftarrow}..i] = S(j - (i - \xhs{\leftarrow})..j]$. Thus $i - \xhs{\leftarrow} \leq \ell$ holds, and we can skip the first $i - \xhs{\leftarrow}$ character comparisons by computing the LCE as $\ell = (i - \xhs{\leftarrow}) + \llce{\xhs{\leftarrow}, j + \xhs{\leftarrow} - i}$. We charge $\ell - (i - \xhs{\leftarrow})$ character comparisons to the previously uncharged interval $(i - \ell, \xhs{\leftarrow}]$, and continue with $i' \gets i$, $j' \gets j$, $\ell' \gets \ell$, and $\xhs{\leftarrow} \gets i - \ell$.
(Note that possibly $i'' \neq i' \land j'' = j'$ or even $i'' = i' \land j'' = j'$. We provide schematic drawings in \cref{appendix}, \cref{fig:NL2a,fig:NL2b}.)
\end{itembox}
We have shown how to compute $\ell$ without charging any index twice. It follows that the total number of character comparisons for all LCEs is $\orderof{n}$. For completeness, we outline a simple implementation of our approach in \cref{alg:llce}. Lines~\ref{alg:llce:case1}--\ref{alg:llce:copylce} correspond to Case L1. If $i \leq \xhs{\leftarrow}$, then lines~\ref{alg:llce:computelce1}--\ref{alg:llce:updatevars} compute the LCE naively. Otherwise, they correspond to Case L2.
\begin{algorithm} \newcommand{\naivellce}[1]{\textsc{naive-scan-}\llce{#1}} \let\oldstrut\strut \renewcommand{\strut}{\vphantom{\xhs{\rightarrow}}\oldstrut} \begin{algorithmic}[1]
\Require{String $S$ of length $n$ and its NSS array \textsf{nss}.}
\Ensure{L\=/LCE value $\llce{i, \nss{i}}$ for each index $i \in [1, n]$ with $\nss{i} \neq n + 1$.}
\State $i' \gets 0;\ \ \xhs{\leftarrow} \gets n;\ \ d \gets 0$\label{alg:llce:init}
\For{$i \in [1, n]$ \textbf{with} $\nss{i} \neq n + 1$ in decreasing order}\label{alg:llce:loop}
\State $j \gets \nss{i}$
\If{$i \in (\xhs{\leftarrow}, i') \land i - \llce{i + d, j + d} > \xhs{\leftarrow}$}\label{alg:llce:case1}
\State $\llce{i, j} \gets \llce{i + d, j + d}$\label{alg:llce:copylce}
\Else
\State $\strut k \gets i - \min(\xhs{\leftarrow}, i)$ \label{alg:llce:computelce1}
\State $\strut\llce{i, j} \gets k + \naivellce{i - k, j - k}$ \label{alg:llce:computelce2}
\State $\strut\mathrlap{i'}\phantom{\xhs{\leftarrow}} \gets i;\ \ \xhs{\leftarrow} \gets i - \llce{i, j};\ \ \mathrlap{d}\phantom{\xhs{\rightarrow}} \gets j - i$ \label{alg:llce:updatevars}
\EndIf
\EndFor \end{algorithmic} \caption{Compute all L\=/LCEs.} \label{alg:llce} \end{algorithm}
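For intuition, the L-LCE values produced by the algorithm can be checked against a direct quadratic-time computation. The following Python sketch is our own reference implementation (not part of the paper's linear-time algorithm); it evaluates the paper's 1-based L-LCE definition on a 0-based Python string:

```python
def naive_llce(s: str, i: int, j: int) -> int:
    """Naive L-LCE: length of the longest common suffix of the
    prefixes S[1..i] and S[1..j] (positions are 1-based, as in the paper)."""
    k = 0
    # Compare characters from right to left until a mismatch occurs
    # or one of the prefixes is exhausted.
    while k < i and k < j and s[i - 1 - k] == s[j - 1 - k]:
        k += 1
    return k

# Example: for S = "ababa", the prefixes S[1..3] = "aba" and
# S[1..5] = "ababa" share the suffix "aba" of length 3.
print(naive_llce("ababa", 3, 5))  # 3
```

Such a brute-force oracle is convenient for randomized testing of a linear-time implementation.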
\begin{lemma}\label{lemma:linearllce}
Given a string of length $n$ and its NSS array \textnormal{\textsf{nss}}, we can compute $\llce{i, \nss{i}}$ for all indices $i \in [1, n]$ with $\nss{i} \neq n + 1$ in $\orderof{n}$ time and space. \end{lemma}
\begin{corollary}
Given a string of length $n$ over a general ordered alphabet, we can find all runs in the string in $\orderof{n}$ time and space.
\begin{proof}
Computing the increasing runs takes $\orderof{n}$ time and space due to \cref{lemma:requiredlces,lemma:linearrlce,lemma:linearllce}. For decreasing runs, we only have to reverse the order of the alphabet and rerun the algorithm.
\end{proof} \end{corollary}
\section{Practical Implementation} \label{sec:practical}
We implemented our algorithm for the runs computation in C++17 and evaluated it by computing all runs on texts from the natural, real repetitive, and artificial repetitive text collections of the Pizza\&Chili corpus\footnote{\url{http://pizzachili.dcc.uchile.cl/texts.html}, \url{http://pizzachili.dcc.uchile.cl/repcorpus.html}}. Additionally, we used the binary run-rich strings proposed by Matsubara et al. \cite{Matsubara2009} as input. \cref{table:results} shows the throughput that we achieve, i.e.\ the number of input bytes (or equivalently input symbols) that we process per second. On the string \texttt{tm29} we achieve the highest throughput of $15.6$ MiB/s. The lowest throughput of $8.8$ MiB/s occurs on the text \texttt{dna}. Generally, we perform better for run-rich strings.
\begin{table} \newcommand{\intxt}[2][]{\smash{\rotatebox{70}{\rlap{\shortstack{\bfseries\texttt{#2}\\{\scriptsize #1 MiB}}}}}} \caption{Throughput achieved by our runs algorithm using an AMD EPYC 7452 processor. We repeated each experiment five times and use the median throughput as the final result. All numbers are truncated to one decimal place.}
\begin{tabular}{c|c|cccccc|cc|ccc}
\intxt[$n$ in]{\textnormal{\bfseries Text}} & \intxt[1077]{\textbf{\boldmath$t_{49}$\unboldmath \cite{Matsubara2009}}} & \intxt[201]{sources} & \intxt[53]{pitches} & \intxt[1024]{proteins} & \intxt[385]{dna} & \intxt[1024]{english} & \intxt[282]{xml} & \intxt[107]{ecoli} & \intxt[439]{cere} & \intxt[255]{fib41} & \intxt[206]{rs.13} & \intxt[256]{tm29} \\ \hline runs/$100n$ & 94.4 & 4.7 & 11.7 & 7.0 & 25.3 & 2.4 & 3.4 & 24.4 & 23.6 & 76.3 & 92.7 & 83.3 \\ MiB/s & 15.0 & 11.4 & 11.0 & 10.9 & 8.8 & 10.5 & 12.8 & 9.0 & 9.2 & 15.4 & 15.1 & 15.6 \end{tabular} \label{table:results} \end{table}
Lastly, it is noteworthy that our new method of LCE computation leads to a remarkably simple implementation of the runs algorithm. In fact, the entire implementation \emph{including the computation of the NSS array} needs only 250 lines of code. We achieve this by interleaving the computation of the R-LCEs with the computation of the NSS array, which also improves the practical performance. For technical details we refer to the source code, which is publicly available on GitHub\footnote{\url{https://github.com/jonas-ellert/linear-time-runs/}}.
\section{Conclusion and Open Questions} \label{sec:conclusion}
We have shown the first linear time algorithm for computing all runs on a general ordered alphabet. The algorithm is also very fast in practice and remarkably easy to implement. It is an open question whether our techniques could be used for the computation of runs on tries, where the best known algorithms require superlinear time even for linearly-sortable alphabets (see e.g. \cite{Sugahara2019}).
\appendix
\section{Supplementary Material} \label{appendix}
\begin{figure}
\caption{Additional drawings for \cref{sec:llce}, Cases L1 and L2.}
\label{fig:NL1}
\label{fig:NL2a}
\label{fig:NL2b}
\end{figure}
\end{document}
Cost-Volume-Profit – CVP Analysis Definition
By Will Kenton
What Is Cost-Volume-Profit – CVP Analysis?
Cost-volume-profit (CVP) analysis is a method of cost accounting that looks at the impact that varying levels of costs and volume have on operating profit. The cost-volume-profit analysis, also commonly known as break-even analysis, looks to determine the break-even point for different sales volumes and cost structures, which can be useful for managers making short-term economic decisions.
The cost-volume-profit analysis makes several assumptions, including that the sales price, fixed costs, and variable cost per unit are constant. Running this analysis involves using several equations for price, cost and other variables, then plotting them out on an economic graph.
The Formula for Cost-Volume-Profit Analysis
The CVP formula can be used to calculate the breakeven sales volume, that is, the sales volume needed to cover all costs and break even:
Breakeven Sales Volume = FC / CM
where:
FC = Fixed costs
CM = Contribution margin = Sales − Variable costs
To use the above formula to find a company's target sales volume, simply add a target profit amount per unit to the fixed-cost component of the formula. This allows you to solve for the target volume based on the assumptions used in the model.
What Does Cost-Volume-Profit Analysis Tell You?
The contribution margin is used in the determination of the break-even point of sales. By dividing the total fixed costs by the contribution margin ratio, the break-even point of sales in terms of total dollars may be calculated. For example, a company with $100,000 of fixed costs and a contribution margin of 40% must earn revenue of $250,000 to break even.
Profit may be added to the fixed costs to perform CVP analysis on a desired outcome. For example, if the previous company desired an accounting profit of $50,000, the total sales revenue is found by dividing $150,000 (the sum of fixed costs and desired profit) by the contribution margin of 40%. This example yields a required sales revenue of $375,000.
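The two worked examples above can be reproduced in a few lines of Python (the function and variable names are ours, chosen for illustration):

```python
def breakeven_sales(fixed_costs: float, cm_ratio: float) -> float:
    """Sales revenue at which total revenue equals total costs."""
    return fixed_costs / cm_ratio

def target_sales(fixed_costs: float, target_profit: float, cm_ratio: float) -> float:
    """Sales revenue needed to earn the desired accounting profit."""
    return (fixed_costs + target_profit) / cm_ratio

# $100,000 fixed costs, 40% contribution margin ratio:
print(breakeven_sales(100_000, 0.40))       # 250000.0
print(target_sales(100_000, 50_000, 0.40))  # 375000.0
```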
CVP analysis is only reliable if costs are fixed within a specified production level. All units produced are assumed to be sold, and all fixed costs must be stable in a CVP analysis. Another assumption is all changes in expenses occur because of changes in activity level. Semi-variable expenses must be split between expense classifications using the high-low method, scatter plot or statistical regression.
Cost-volume-profit analysis is a way to find out how changes in variable and fixed costs affect a firm's profit.
Companies can use the formula result to see how many units they need to sell to break even (cover all costs) or reach a certain minimum profit margin.
Contribution Margin and Contribution Margin Ratio
CVP analysis also manages product contribution margin. Contribution margin is the difference between total sales and total variable costs. For a business to be profitable, the contribution margin must exceed total fixed costs. The contribution margin may also be calculated per unit. The unit contribution margin is simply the remainder after the unit variable cost is subtracted from the unit sales price. The contribution margin ratio is determined by dividing the contribution margin by total sales.
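As a small illustration of these definitions (the figures below are hypothetical, not from the article):

```python
# Total contribution margin and contribution margin ratio.
sales, variable_costs = 200_000, 120_000
contribution_margin = sales - variable_costs   # 80000
cm_ratio = contribution_margin / sales         # 0.4

# Unit contribution margin: unit sales price minus unit variable cost.
unit_price, unit_variable_cost = 50.0, 30.0
unit_cm = unit_price - unit_variable_cost      # 20.0

# The business is profitable once the contribution margin exceeds fixed costs.
fixed_costs = 60_000
operating_profit = contribution_margin - fixed_costs  # 20000
```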
Closed graph theorem (functional analysis)
In mathematics, particularly in functional analysis and topology, the closed graph theorem is a result connecting the continuity of certain kinds of functions to a topological property of their graph. In its most elementary form, the closed graph theorem states that a linear function between two Banach spaces is continuous if and only if the graph of that function is closed.
This article is about closed graph theorems in functional analysis. For other results with the same name, see Closed graph theorem.
The closed graph theorem has extensive application throughout functional analysis, because it can control whether a partially-defined linear operator admits continuous extensions. For this reason, it has been generalized to many circumstances beyond the elementary formulation above.
Preliminaries
The closed graph theorem is a result about linear map $f:X\to Y$ between two vector spaces endowed with topologies making them into topological vector spaces (TVSs). We will henceforth assume that $X$ and $Y$ are topological vector spaces, such as Banach spaces for example, and that Cartesian products, such as $X\times Y,$ are endowed with the product topology. The graph of this function is the subset
$\operatorname {graph} {\!(f)}=\{(x,f(x)):x\in \operatorname {dom} f\},$
of $\operatorname {dom} (f)\times Y=X\times Y,$ where $\operatorname {dom} f=X$ denotes the function's domain. The map $f:X\to Y$ is said to have a closed graph (in $X\times Y$) if its graph $\operatorname {graph} f$ is a closed subset of product space $X\times Y$ (with the usual product topology). Similarly, $f$ is said to have a sequentially closed graph if $\operatorname {graph} f$ is a sequentially closed subset of $X\times Y.$
A closed linear operator is a linear map whose graph is closed (it need not be continuous or bounded). It is common in functional analysis to call such maps "closed", but this should not be confused with the non-equivalent notion of a "closed map" that appears in general topology.
Partial functions
It is common in functional analysis to consider partial functions, which are functions defined on a dense subset of some space $X.$ A partial function $f$ is declared with the notation $f:D\subseteq X\to Y,$ which indicates that $f$ has prototype $f:D\to Y$ (that is, its domain is $D$ and its codomain is $Y$) and that $\operatorname {dom} f=D$ is a dense subset of $X.$ Since the domain is denoted by $\operatorname {dom} f,$ it is not always necessary to assign a symbol (such as $D$) to a partial function's domain, in which case the notation $f:X\rightarrowtail Y$ or $f:X\rightharpoonup Y$ may be used to indicate that $f$ is a partial function with codomain $Y$ whose domain $\operatorname {dom} f$ is a dense subset of $X.$[1] A densely defined linear operator between vector spaces is a partial function $f:D\subseteq X\to Y$ whose domain $D$ is a dense vector subspace of a TVS $X$ such that $f:D\to Y$ is a linear map. A prototypical example of a partial function is the derivative operator, which is only defined on the space $D:=C^{1}([0,1])$ of once continuously differentiable functions, a dense subset of the space $X:=C([0,1])$ of continuous functions.
Every partial function is, in particular, a function and so all terminology for functions can be applied to them. For instance, the graph of a partial function $f$ is (as before) the set $ \operatorname {graph} {\!(f)}=\{(x,f(x)):x\in \operatorname {dom} f\}.$ However, one exception to this is the definition of "closed graph". A partial function $f:D\subseteq X\to Y$ is said to have a closed graph (respectively, a sequentially closed graph) if $\operatorname {graph} f$ is a closed (respectively, sequentially closed) subset of $X\times Y$ in the product topology; importantly, note that the product space is $X\times Y$ and not $D\times Y=\operatorname {dom} f\times Y$ as it was defined above for ordinary functions.[note 1]
Closable maps and closures
A linear operator $f:D\subseteq X\to Y$ is closable in $X\times Y$ if there exists a vector subspace $E\subseteq X$ containing $D$ and a function (resp. multifunction) $F:E\to Y$ whose graph is equal to the closure of the set $\operatorname {graph} f$ in $X\times Y.$ Such an $F$ is called a closure of $f$ in $X\times Y$, is denoted by ${\overline {f}},$ and necessarily extends $f.$
If $f:D\subseteq X\to Y$ is a closable linear operator then a core or an essential domain of $f$ is a subset $C\subseteq D$ such that the closure in $X\times Y$ of the graph of the restriction $f{\big \vert }_{C}:C\to Y$ of $f$ to $C$ is equal to the closure of the graph of $f$ in $X\times Y$ (i.e. the closure of $\operatorname {graph} f$ in $X\times Y$ is equal to the closure of $\operatorname {graph} f{\big \vert }_{C}$ in $X\times Y$).
Characterizations of closed graphs (general topology)
Throughout, let $X$ and $Y$ be topological spaces and $X\times Y$ is endowed with the product topology.
Function with a closed graph
Main article: Closed graph property
If $f:X\to Y$ is a function then it is said to have a closed graph if it satisfies any of the following equivalent conditions:
1. (Definition): The graph $\operatorname {graph} f$ of $f$ is a closed subset of $X\times Y.$
2. For every $x\in X$ and net $x_{\bullet }=\left(x_{i}\right)_{i\in I}$ in $X$ such that $x_{\bullet }\to x$ in $X,$ if $y\in Y$ is such that the net $f\left(x_{\bullet }\right)=\left(f\left(x_{i}\right)\right)_{i\in I}\to y$ in $Y$ then $y=f(x).$[2]
• Compare this to the definition of continuity in terms of nets, which recall is the following: for every $x\in X$ and net $x_{\bullet }=\left(x_{i}\right)_{i\in I}$ in $X$ such that $x_{\bullet }\to x$ in $X,$ $f\left(x_{\bullet }\right)\to f(x)$ in $Y.$
• Thus to show that the function $f$ has a closed graph, it may be assumed that $f\left(x_{\bullet }\right)$ converges in $Y$ to some $y\in Y$ (and then show that $y=f(x)$) while to show that $f$ is continuous, it may not be assumed that $f\left(x_{\bullet }\right)$ converges in $Y$ to some $y\in Y$ and instead, it must be proven that this is true (and moreover, it must more specifically be proven that $f\left(x_{\bullet }\right)$ converges to $f(x)$ in $Y$).
and if $Y$ is a Hausdorff compact space then we may add to this list:
1. $f$ is continuous.[3]
and if both $X$ and $Y$ are first-countable spaces then we may add to this list:
1. $f$ has a sequentially closed graph in $X\times Y.$
Function with a sequentially closed graph
If $f:X\to Y$ is a function then the following are equivalent:
1. $f$ has a sequentially closed graph in $X\times Y.$
2. Definition: the graph of $f$ is a sequentially closed subset of $X\times Y.$
3. For every $x\in X$ and sequence $x_{\bullet }=\left(x_{i}\right)_{i=1}^{\infty }$ in $X$ such that $x_{\bullet }\to x$ in $X,$ if $y\in Y$ is such that the net $f\left(x_{\bullet }\right):=\left(f\left(x_{i}\right)\right)_{i=1}^{\infty }\to y$ in $Y$ then $y=f(x).$[2]
Basic properties of maps with closed graphs
Suppose $f:D(f)\subseteq X\to Y$ is a linear operator between Banach spaces.
• If $f$ is closed then $f-s\operatorname {Id} _{D(f)}$ is closed, where $s$ is a scalar and $\operatorname {Id} _{D(f)}$ is the identity function on $D(f)$.
• If $f$ is closed, then its kernel (or nullspace) is a closed vector subspace of $X.$
• If $f$ is closed and injective then its inverse $f^{-1}$ is also closed.
• A linear operator $f$ admits a closure if and only if for every $x\in X$ and every pair of sequences $x_{\bullet }=\left(x_{i}\right)_{i=1}^{\infty }$ and $z_{\bullet }=\left(z_{i}\right)_{i=1}^{\infty }$ in $D(f)$ both converging to $x$ in $X,$ such that both $f\left(x_{\bullet }\right)=\left(f\left(x_{i}\right)\right)_{i=1}^{\infty }$ and $f\left(z_{\bullet }\right)=\left(f\left(z_{i}\right)\right)_{i=1}^{\infty }$ converge in $Y,$ one has $\lim _{i\to \infty }f\left(x_{i}\right)=\lim _{i\to \infty }f\left(z_{i}\right).$
Examples and counterexamples
Continuous but not closed maps
• Let $X$ denote the real numbers $\mathbb {R} $ with the usual Euclidean topology and let $Y$ denote $\mathbb {R} $ with the indiscrete topology (where $Y$ is not Hausdorff and that every function valued in $Y$ is continuous). Let $f:X\to Y$ be defined by $f(0)=1$ and $f(x)=0$ for all $x\neq 0.$ Then $f:X\to Y$ is continuous but its graph is not closed in $X\times Y.$[2]
• If $X$ is any space then the identity map $\operatorname {Id} :X\to X$ is continuous but its graph, which is the diagonal $\operatorname {graph} \operatorname {Id} =\{(x,x):x\in X\},$ is closed in $X\times X$ if and only if $X$ is Hausdorff.[4] In particular, if $X$ is not Hausdorff then $\operatorname {Id} :X\to X$ is continuous but not closed.
• If $f:X\to Y$ is a continuous map whose graph is not closed then $Y$ is not a Hausdorff space.
Closed but not continuous maps
• If $(X,\tau )$ is a Hausdorff TVS and $\nu $ is a vector topology on $X$ that is strictly finer than $\tau ,$ then the identity map $\operatorname {Id} :(X,\tau )\to (X,\nu )$ is a closed discontinuous linear operator.[5]
• Consider the derivative operator $A={\frac {d}{dx}}$ where $X=Y=C([a,b])$ is the Banach space of all continuous functions on an interval $[a,b].$ If one takes its domain $D(A)$ to be $C^{1}([a,b]),$ then $A$ is a closed operator which is not bounded.[6] On the other hand, if $D(A)$ is the space $C^{\infty }([a,b])$ of smooth scalar-valued functions, then $A$ will no longer be closed, but it will be closable, with the closure being its extension defined on $C^{1}([a,b]).$
• Let $X$ and $Y$ both denote the real numbers $\mathbb {R} $ with the usual Euclidean topology. Let $f:X\to Y$ be defined by $f(0)=0$ and $f(x)={\frac {1}{x}}$ for all $x\neq 0.$ Then $f:X\to Y$ has a closed graph (and a sequentially closed graph) in $X\times Y=\mathbb {R} ^{2}$ but it is not continuous (since it has a discontinuity at $x=0$).[2]
• Let $X$ denote the real numbers $\mathbb {R} $ with the usual Euclidean topology, let $Y$ denote $\mathbb {R} $ with the discrete topology, and let $\operatorname {Id} :X\to Y$ be the identity map (i.e. $\operatorname {Id} (x):=x$ for every $x\in X$). Then $\operatorname {Id} :X\to Y$ is a linear map whose graph is closed in $X\times Y$ but it is clearly not continuous (since singleton sets are open in $Y$ but not in $X$).[2]
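To make the derivative-operator example above concrete, here is the standard verification (stated for $[a,b]=[0,1]$ for concreteness) that the operator with domain $C^{1}([0,1])$ is unbounded yet closed:

```latex
% Unboundedness: the monomials have unit sup-norm but derivatives of norm n.
f_n(x) = x^n \in C^1([0,1]), \qquad \|f_n\|_\infty = 1, \qquad
\|A f_n\|_\infty = \sup_{0 \le x \le 1} \bigl| n x^{n-1} \bigr| = n \longrightarrow \infty .
% Closedness: if f_n \to f and A f_n = f_n' \to g uniformly, then by the
% classical theorem on uniform convergence of derivatives, f \in C^1([0,1])
% with f' = g, so the limit (f, g) again lies in the graph of A.
```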
Closed graph theorems
Between Banach spaces
Closed Graph Theorem for Banach spaces — If $T:X\to Y$ is an everywhere-defined linear operator between Banach spaces, then the following are equivalent:
1. $T$ is continuous.
2. $T$ is closed (that is, the graph of $T$ is closed in the product topology on $X\times Y$).
3. If $x_{\bullet }=\left(x_{i}\right)_{i=1}^{\infty }\to x$ in $X$ then $T\left(x_{\bullet }\right):=\left(T\left(x_{i}\right)\right)_{i=1}^{\infty }\to T(x)$ in $Y.$
4. If $x_{\bullet }=\left(x_{i}\right)_{i=1}^{\infty }\to 0$ in $X$ then $T\left(x_{\bullet }\right)\to 0$ in $Y.$
5. If $x_{\bullet }=\left(x_{i}\right)_{i=1}^{\infty }\to x$ in $X$ and if $T\left(x_{\bullet }\right)$ converges in $Y$ to some $y\in Y,$ then $y=T(x).$
6. If $x_{\bullet }=\left(x_{i}\right)_{i=1}^{\infty }\to 0$ in $X$ and if $T\left(x_{\bullet }\right)$ converges in $Y$ to some $y\in Y,$ then $y=0.$
The operator is required to be everywhere-defined, that is, the domain $D(T)$ of $T$ is $X.$ This condition is necessary, as there exist closed linear operators that are unbounded (not continuous); a prototypical example is provided by the derivative operator on $C([0,1]),$ whose domain is a strict subset of $C([0,1]).$
The usual proof of the closed graph theorem employs the open mapping theorem. In fact, the closed graph theorem, the open mapping theorem and the bounded inverse theorem are all equivalent. This equivalence also serves to demonstrate the importance of $X$ and $Y$ being Banach; one can construct linear maps that have unbounded inverses in this setting, for example, by using either continuous functions with compact support or by using sequences with finitely many non-zero terms along with the supremum norm.
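For instance (a standard construction along the lines of the previous paragraph, using sequences with finitely many non-zero terms), one obtains a bounded bijection with closed graph whose inverse is unbounded, showing that completeness cannot be dropped:

```latex
X = Y = c_{00} \text{ with } \|x\|_\infty = \sup_n |x_n|, \qquad
T(x_1, x_2, x_3, \ldots) = \bigl(x_1, \tfrac{x_2}{2}, \tfrac{x_3}{3}, \ldots\bigr).
% T is bounded with \|T\| \le 1, so its graph is closed, and hence so is the
% graph of T^{-1} (the two graphs are exchanged by the homeomorphism that
% swaps coordinates). Yet T^{-1} e_n = n e_n gives \|T^{-1} e_n\|_\infty = n,
% so T^{-1} is discontinuous; this does not contradict the closed graph
% theorem because c_{00} is not complete.
```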
Complete metrizable codomain
The closed graph theorem can be generalized from Banach spaces to more abstract topological vector spaces in the following ways.
Theorem — A linear operator from a barrelled space $X$ to a Fréchet space $Y$ is continuous if and only if its graph is closed.
Between F-spaces
There are versions that do not require $Y$ to be locally convex.
Theorem — A linear map between two F-spaces is continuous if and only if its graph is closed.[7][8]
This theorem can be restated and extended with some conditions that can be used to determine whether a graph is closed:
Theorem — If $T:X\to Y$ is a linear map between two F-spaces, then the following are equivalent:
1. $T$ is continuous.
2. $T$ has a closed graph.
3. If $x_{\bullet }=\left(x_{i}\right)_{i=1}^{\infty }\to x$ in $X$ and if $T\left(x_{\bullet }\right):=\left(T\left(x_{i}\right)\right)_{i=1}^{\infty }$ converges in $Y$ to some $y\in Y,$ then $y=T(x).$[9]
4. If $x_{\bullet }=\left(x_{i}\right)_{i=1}^{\infty }\to 0$ in $X$ and if $T\left(x_{\bullet }\right)$ converges in $Y$ to some $y\in Y,$ then $y=0.$
Complete pseudometrizable codomain
Every metrizable topological space is pseudometrizable. A pseudometrizable space is metrizable if and only if it is Hausdorff.
Closed Graph Theorem[10] — A closed linear map from a locally convex ultrabarrelled space into a complete pseudometrizable TVS is continuous.
Closed Graph Theorem — A closed and bounded linear map from a locally convex infrabarreled space into a complete pseudometrizable locally convex space is continuous.[10]
Codomain not complete or (pseudo) metrizable
Theorem[11] — Suppose that $T:X\to Y$ is a linear map whose graph is closed. If $X$ is an inductive limit of Baire TVSs and $Y$ is a webbed space then $T$ is continuous.
Closed Graph Theorem[10] — A closed surjective linear map from a complete pseudometrizable TVS onto a locally convex ultrabarrelled space is continuous.
An even more general version of the closed graph theorem is
Theorem[12] — Suppose that $X$ and $Y$ are two topological vector spaces (they need not be Hausdorff or locally convex) with the following property:
If $G$ is any closed subspace of $X\times Y$ and $u$ is any continuous map of $G$ onto $X,$ then $u$ is an open mapping.
Under this condition, if $T:X\to Y$ is a linear map whose graph is closed then $T$ is continuous.
Borel graph theorem
Main article: Borel Graph Theorem
The Borel graph theorem, proved by L. Schwartz, shows that the closed graph theorem is valid for linear maps defined on and valued in most spaces encountered in analysis.[13] Recall that a topological space is called a Polish space if it is a separable complete metrizable space and that a Souslin space is the continuous image of a Polish space. The weak dual of a separable Fréchet space and the strong dual of a separable Fréchet-Montel space are Souslin spaces. Also, the space of distributions and all Lp-spaces over open subsets of Euclidean space as well as many other spaces that occur in analysis are Souslin spaces. The Borel graph theorem states:
Borel Graph Theorem — Let $u:X\to Y$ be linear map between two locally convex Hausdorff spaces $X$ and $Y.$ If $X$ is the inductive limit of an arbitrary family of Banach spaces, if $Y$ is a Souslin space, and if the graph of $u$ is a Borel set in $X\times Y,$ then $u$ is continuous.[13]
An improvement upon this theorem, proved by A. Martineau, uses K-analytic spaces.
A topological space $X$ is called a $K_{\sigma \delta }$ space if it is the countable intersection of countable unions of compact sets.
A Hausdorff topological space $Y$ is called K-analytic if it is the continuous image of a $K_{\sigma \delta }$ space (that is, if there is a $K_{\sigma \delta }$ space $X$ and a continuous map of $X$ onto $Y$).
Every compact set is K-analytic, so there are non-separable K-analytic spaces. Also, every Polish, Souslin, and reflexive Fréchet space is K-analytic, as is the weak dual of a Fréchet space. The generalized Borel graph theorem states:
Generalized Borel Graph Theorem[14] — Let $u:X\to Y$ be a linear map between two locally convex Hausdorff spaces $X$ and $Y.$ If $X$ is the inductive limit of an arbitrary family of Banach spaces, if $Y$ is a K-analytic space, and if the graph of $u$ is closed in $X\times Y,$ then $u$ is continuous.
Related results
If $F:X\to Y$ is a closed linear operator from a Hausdorff locally convex TVS $X$ into a Hausdorff finite-dimensional TVS $Y$ then $F$ is continuous.[15]
See also
• Almost open linear map – Map that satisfies a condition similar to that of being an open map
• Barrelled space – Type of topological vector space
• Closed graph – Graph of a map closed in the product space
• Closed linear operator – Graph of a map closed in the product space
• Densely defined operator – Function that is defined almost everywhere
• Discontinuous linear map
• Kakutani fixed-point theorem – On when a function f: S→Pow(S) on a compact nonempty convex subset S⊂ℝⁿ has a fixed point
• Open mapping theorem (functional analysis) – Condition for a linear operator to be open
• Ursescu theorem – Generalization of closed graph, open mapping, and uniform boundedness theorem
• Webbed space – Space where open mapping and closed graph theorems hold
References
Notes
1. In contrast, when $f:D\to Y$ is considered as an ordinary function (rather than as the partial function $f:D\subseteq X\to Y$), then "having a closed graph" would instead mean that $\operatorname {graph} f$ is a closed subset of $D\times Y.$ If $\operatorname {graph} f$ is a closed subset of $X\times Y$ then it is also a closed subset of $\operatorname {dom} (f)\times Y$ although the converse is not guaranteed in general.
1. Dolecki & Mynard 2016, pp. 4–5.
2. Narici & Beckenstein 2011, pp. 459–483.
3. Munkres 2000, p. 171.
4. Rudin 1991, p. 50.
5. Narici & Beckenstein 2011, p. 480.
6. Kreyszig, Erwin (1978). Introductory Functional Analysis With Applications. USA: John Wiley & Sons. Inc. p. 294. ISBN 0-471-50731-8.
7. Schaefer & Wolff 1999, p. 78.
8. Trèves (2006), p. 173
9. Rudin 1991, pp. 50–52.
10. Narici & Beckenstein 2011, pp. 474–476.
11. Narici & Beckenstein 2011, p. 479-483.
12. Trèves 2006, p. 169.
13. Trèves 2006, p. 549.
14. Trèves 2006, pp. 557–558.
15. Narici & Beckenstein 2011, p. 476.
Bibliography
• Adasch, Norbert; Ernst, Bruno; Keim, Dieter (1978). Topological Vector Spaces: The Theory Without Convexity Conditions. Lecture Notes in Mathematics. Vol. 639. Berlin New York: Springer-Verlag. ISBN 978-3-540-08662-8. OCLC 297140003.
• Banach, Stefan (1932). Théorie des Opérations Linéaires [Theory of Linear Operations] (PDF). Monografie Matematyczne (in French). Vol. 1. Warszawa: Subwencji Funduszu Kultury Narodowej. Zbl 0005.20901. Archived from the original (PDF) on 2014-01-11. Retrieved 2020-07-11.
• Berberian, Sterling K. (1974). Lectures in Functional Analysis and Operator Theory. Graduate Texts in Mathematics. Vol. 15. New York: Springer. ISBN 978-0-387-90081-0. OCLC 878109401.
• Bourbaki, Nicolas (1987) [1981]. Topological Vector Spaces: Chapters 1–5. Éléments de mathématique. Translated by Eggleston, H.G.; Madan, S. Berlin New York: Springer-Verlag. ISBN 3-540-13627-4. OCLC 17499190.
• Conway, John (1990). A course in functional analysis. Graduate Texts in Mathematics. Vol. 96 (2nd ed.). New York: Springer-Verlag. ISBN 978-0-387-97245-9. OCLC 21195908.
• Edwards, Robert E. (1995). Functional Analysis: Theory and Applications. New York: Dover Publications. ISBN 978-0-486-68143-6. OCLC 30593138.
• Dolecki, Szymon; Mynard, Frederic (2016). Convergence Foundations Of Topology. New Jersey: World Scientific Publishing Company. ISBN 978-981-4571-52-4. OCLC 945169917.
• Dubinsky, Ed (1979). The Structure of Nuclear Fréchet Spaces. Lecture Notes in Mathematics. Vol. 720. Berlin New York: Springer-Verlag. ISBN 978-3-540-09504-0. OCLC 5126156.
• Grothendieck, Alexander (1973). Topological Vector Spaces. Translated by Chaljub, Orlando. New York: Gordon and Breach Science Publishers. ISBN 978-0-677-30020-7. OCLC 886098.
• Husain, Taqdir; Khaleelulla, S. M. (1978). Barrelledness in Topological and Ordered Vector Spaces. Lecture Notes in Mathematics. Vol. 692. Berlin, New York, Heidelberg: Springer-Verlag. ISBN 978-3-540-09096-0. OCLC 4493665.
• Jarchow, Hans (1981). Locally convex spaces. Stuttgart: B.G. Teubner. ISBN 978-3-519-02224-4. OCLC 8210342.
• Köthe, Gottfried (1983) [1969]. Topological Vector Spaces I. Grundlehren der mathematischen Wissenschaften. Vol. 159. Translated by Garling, D.J.H. New York: Springer Science & Business Media. ISBN 978-3-642-64988-2. MR 0248498. OCLC 840293704.
• Kriegl, Andreas; Michor, Peter W. (1997). The Convenient Setting of Global Analysis (PDF). Mathematical Surveys and Monographs. Vol. 53. Providence, R.I: American Mathematical Society. ISBN 978-0-8218-0780-4. OCLC 37141279.
• Munkres, James R. (2000). Topology (Second ed.). Upper Saddle River, NJ: Prentice Hall, Inc. ISBN 978-0-13-181629-9. OCLC 42683260.
• Narici, Lawrence; Beckenstein, Edward (2011). Topological Vector Spaces. Pure and applied mathematics (Second ed.). Boca Raton, FL: CRC Press. ISBN 978-1584888666. OCLC 144216834.
• Robertson, Alex P.; Robertson, Wendy J. (1980). Topological Vector Spaces. Cambridge Tracts in Mathematics. Vol. 53. Cambridge England: Cambridge University Press. ISBN 978-0-521-29882-7. OCLC 589250.
• Rudin, Walter (1991). Functional Analysis. International Series in Pure and Applied Mathematics. Vol. 8 (Second ed.). New York, NY: McGraw-Hill Science/Engineering/Math. ISBN 978-0-07-054236-5. OCLC 21163277.
• Schaefer, Helmut H.; Wolff, Manfred P. (1999). Topological Vector Spaces. GTM. Vol. 8 (Second ed.). New York, NY: Springer New York Imprint Springer. ISBN 978-1-4612-7155-0. OCLC 840278135.
• Swartz, Charles (1992). An introduction to Functional Analysis. New York: M. Dekker. ISBN 978-0-8247-8643-4. OCLC 24909067.
• Trèves, François (2006) [1967]. Topological Vector Spaces, Distributions and Kernels. Mineola, N.Y.: Dover Publications. ISBN 978-0-486-45352-1. OCLC 853623322.
• Wilansky, Albert (2013). Modern Methods in Topological Vector Spaces. Mineola, New York: Dover Publications, Inc. ISBN 978-0-486-49353-4. OCLC 849801114.
• "Proof of closed graph theorem". PlanetMath.
Quiz 12: Appendix
Find the Critical Value at the α = 0.05 Level for the Following Sample, for Testing
Find the critical value at the α = 0.05 level for the following sample, for testing H₀: m = 41 versus H₁: m ≠ 41.
57 61 55 50 35 52 58 22 41 48 32 27
A wildlife biologist believes that the median length of the fish in a lake is 35 cm. A random sample of 14 fish yields the following lengths. 20 20 22 24 24 26 27 29 30 32 35 37 41 43 Test the biologist's hypothesis at α = 0.05. A) Reject the claim because the test value 4 is more than the critical value 3. B) Reject the claim because the test value 4 is more than the critical value 2. C) Do not reject the claim because the test value 3 is equal to the critical value 3. D) Do not reject the claim because the test value 3 is more than the critical value 2.
Monthly rents were recorded for a sample of 36 apartments in a certain city. The results were as follows. 1180 1240 1170 1280 1040 980 1420 1150 1060 990 1010 1260 1350 1130 1370 1130 990 1080 1220 1420 1130 1050 1130 1090 1030 1260 1030 1170 1050 1320 1450 1000 1090 1450 1120 1120 Can you conclude that the median rent is less than $1200 per month? Use the α = 1 level of significance. a. State appropriate null and alternate hypotheses. b. Compute the test statistic. c. Find the critical value. d. State a conclusion.
The owners of a coffee stand hypothesize that the median number of sales during the hour from 10:00 AM to 11:00 AM is 20. They tabulated the following random sample of the number of sales during the time period. 18 24 21 17 22 23 24 22 19 22 21 23 21 16 21 Use the α = 0.05 level of significance and provide the following information: a. State appropriate null and alternate hypotheses. b. Compute the test statistic. c. Find the critical value. d. State a conclusion.
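For a small-sample sign test like the one above, the test statistic is just a count: drop any values equal to the hypothesized median, count the positive and negative signs, and take the smaller count. A minimal Python sketch (data transcribed from the coffee-stand question):

```python
# Sign test sketch for the coffee-stand question: H0: m = 20.
# Values equal to the hypothesized median are dropped; the small-sample
# test statistic is the smaller of the two sign counts.
sales = [18, 24, 21, 17, 22, 23, 24, 22, 19, 22, 21, 23, 21, 16, 21]
m0 = 20

plus = sum(1 for x in sales if x > m0)    # values above the median
minus = sum(1 for x in sales if x < m0)   # values below the median
test_statistic = min(plus, minus)

print(plus, minus, test_statistic)  # 11 plus signs, 4 minus signs -> statistic 4
```

The statistic (4) is then compared against the tabulated sign-test critical value for the number of usable samples.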
A sample of 10 students took a class online and 12 students took an equivalent class in a traditional classroom. Both classes were given the same final exam three weeks after the end of the courses. The scores were as follows. Online 70 87 64 95 67 81 64 74 86 80 Traditional 79 75 89 91 96 76 98 71 91 65 93 83 Can you conclude that the median score for the online class is less than for the traditional class? Use the α = 0.05 level of significance. a) State the null and alternate hypotheses. b) Compute the value of the test statistic. c) Compute the P-value. d) State a conclusion.
Fill in the blank with the appropriate word or phrase. The null hypothesis for the rank-sum test is that the two population _____ are equal. A) rank-sums B) modes C) means D) medians
Given n₁ = 18, n₂ = 22, S = 311, and H₁: m₁ ≠ m₂, find the P-value. A) 0.0571 B) 0.0286 C) 0.1142 D) 0.9429
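The P-value here can be checked with the usual normal approximation to the rank-sum statistic (a sketch of the arithmetic, not a table lookup; the tabulated answer comes from rounding z to two decimals before consulting a standard normal table):

```python
import math

# Normal approximation for the rank-sum test (no ties assumed).
# S is the rank sum of the first sample; under H0 its mean and standard
# deviation are mu = n1*(n1+n2+1)/2 and sigma = sqrt(n1*n2*(n1+n2+1)/12).
n1, n2, S = 18, 22, 311
mu = n1 * (n1 + n2 + 1) / 2                   # 369.0
sigma = math.sqrt(n1 * n2 * (n1 + n2 + 1) / 12)

z = (S - mu) / sigma                          # about -1.58
phi = 0.5 * (1 + math.erf(z / math.sqrt(2)))  # standard normal CDF at z

# The two-sided alternative H1: m1 != m2 doubles the one-sided tail area.
p_value = 2 * phi
print(round(z, 2), round(p_value, 4))
```

The same recipe (with a single tail instead of a doubled one) applies to the later one-sided rank-sum question.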
The sign test is performed to test H₀: m = 32 versus H₁: m < 32. There are 17 positive signs and 21 negative signs in a test involving 38 samples. What is the value of the test statistic? A) −0.65 B) 0.65 C) −0.49 D) 0.81
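For a sample this large, the sign test statistic uses a normal approximation with a continuity correction; a quick Python check of the arithmetic:

```python
import math

# Large-sample sign test statistic with continuity correction:
# z = (x + 0.5 - 0.5*n) / (0.5 * sqrt(n)),
# where x is the smaller of the two sign counts.
n = 38            # number of usable samples
x = min(17, 21)   # smaller of the positive/negative sign counts

z = (x + 0.5 - 0.5 * n) / (0.5 * math.sqrt(n))
print(round(z, 2))  # -0.49
```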
Heights, in feet, of a sample of 24 mature oak trees in a forest were measured. The results were as follows. 61 62 65 52 63 65 64 60 68 61 56 61 56 63 60 63 69 64 66 68 67 65 60 66 Can you conclude that the median height of oak trees in this forest is greater than 60 feet? Use the α = 0.05 level of significance. a. State appropriate null and alternate hypotheses. b. Compute the test statistic. c. Find the critical value. d. State a conclusion.
The following data was collected as part of a study examining whether there is a difference between the number of hours men and women watch television. The values represent the number of hours a subject watched television on a designated Tuesday night. In the process of computing the test value, the data from both samples should be combined, arranged in order, and ranked according to each group. Calculate the sum of the ranks for both groups. Lower values rank ahead of higher ones. Men 2.0 1.5 3.0 2.5 2.0 1.0 0.0 2.0 1.5 2.5 2.0 2.0 Women 2.0 2.5 1.0 1.0 1.5 2.5 2.0 1.0 2.0 1.5 1.0 0.0 A) The sum of the ranks for the men is 162.5, and the sum of the ranks for the women is 137.5. B) The sum of the ranks for the men is 137.5, and the sum of the ranks for the women is 162.5. C) The sum of the ranks for the men is 170, and the sum of the ranks for the women is 130. D) The sum of the ranks for the men is 130, and the sum of the ranks for the women is 170.
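Because this data has many ties, each tied value gets the average (midrank) of the positions it occupies. The rank sums can be verified with a short pure-Python sketch (data transcribed from the question):

```python
# Combine both samples, assign each distinct value the average of the
# 1-based positions it occupies in the sorted combined list, then sum
# the ranks per group.
men = [2.0, 1.5, 3.0, 2.5, 2.0, 1.0, 0.0, 2.0, 1.5, 2.5, 2.0, 2.0]
women = [2.0, 2.5, 1.0, 1.0, 1.5, 2.5, 2.0, 1.0, 2.0, 1.5, 1.0, 0.0]

combined = sorted(men + women)
rank = {}
i = 0
while i < len(combined):
    j = i
    while j < len(combined) and combined[j] == combined[i]:
        j += 1
    rank[combined[i]] = (i + 1 + j) / 2  # mean of positions i+1 .. j
    i = j

men_sum = sum(rank[x] for x in men)
women_sum = sum(rank[x] for x in women)
print(men_sum, women_sum)  # 170.0 and 130.0
```

As a sanity check, the two sums must add up to n(n+1)/2 = 24·25/2 = 300.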
Given n₁ = 14, n₂ = 22, S = 205, and H₁: m₁ < m₂, find the P-value. A) 0.9599 B) 0.0802 C) 0.0201 D) 0.0401
Acid and Chemical Induced Conformational Changes of Ervatamin B. Presence of Partially Structured Multiple Intermediates
Sundd, Monica;Kundu, Suman;Jagannadham, Medicherla V. 143
The structural and functional aspects of ervatamin B were studied in solution. Ervatamin B belongs to the $\alpha+\beta$ class of proteins. The intrinsic fluorescence emission maximum of the enzyme was at 350 nm under neutral conditions, and at 355 nm under denaturing conditions. Between pH 1.0 and 2.5 the enzyme exists in a partially unfolded state with minimal or no tertiary structure, and no proteolytic activity. At still lower pH, the enzyme regains substantial secondary structure, which is predominantly $\beta$-sheet conformation, and shows strong binding to 8-anilino-1-naphthalene-sulfonic acid (ANS). In the presence of salt, the enzyme attains a similar state directly from the native state. Under neutral conditions, the enzyme was stable in urea, while the guanidine hydrochloride (GuHCl) induced equilibrium unfolding was cooperative. The GuHCl-induced unfolding transition curves at pH 3.0 and 4.0 were non-coincidental, indicating the presence of intermediates in the unfolding pathway. This was substantiated by the strong ANS binding observed at low concentrations of GuHCl at both pH 3.0 and 4.0. The urea-induced transition curves at pH 3.0 were, however, coincidental, but non-cooperative. This indicates that the different structural units of the enzyme unfold in steps through intermediates. This observation is further supported by two emission maxima in the ANS binding assay during urea denaturation. Hence, the denaturant-induced equilibrium unfolding pathway of ervatamin B, which differs from the acid-induced unfolding pathway, is not a simple two-state transition but involves intermediates that probably accumulate at different stages of protein folding, adding a new dimension to the unfolding pathway of plant proteases of the papain superfamily.
Alcohol and Temperature Induced Conformational Transitions in Ervatamin B: Sequential Unfolding of Domains
Kundu, Suman;Sundd, Monica;Jagannadham, Medicherla V. 155
The structural aspects of ervatamin B have been studied in different types of alcohol. This alcohol did not affect the structure or activity of ervatamin B under neutral conditions. At a low pH (3.0), different kinds of alcohol have different effects. Interestingly, at a certain concentration of non-fluorinated, aliphatic, monohydric alcohol, a conformational switch from the predominantly $\alpha$-helical to $\beta$-sheeted state is observed with a complete loss of tertiary structure and proteolytic activity. This is contrary to the observation that alcohol induces mostly the $\alpha$helical structure in proteins. The O-state of ervatamin B in 50% methanol at pH 3.0 has enhanced the stability towards GuHCl denaturation and shows a biphasic transition. This suggests the presence of two structural parts with different stabilities that unfold in steps. The thermal unfolding of ervatamin B in the O-state is also biphasic, which confirms the presence of two domains in the enzyme structure that unfold sequentially. The differential stabilization of the structural parts may also be a reflection of the differential stabilization of local conformations in methanol. Thermal unfolding of ervatamin B in the absence of alcohol is cooperative, both at neutral and low pH, and can be fitted to a two state model. However, at pH 2.0 the calorimetric profiles show two peaks, which indicates the presence of two structural domains in the enzyme with different thermal stabilities that are denatured more or less independently. With an increase in pH to 3.0 and 4.0, the shape of the DSC profiles change, and the two peaks converge to a predominant single peak. However, the ratio of van't Hoff enthalpy to calorimetric enthalpy is approximated to 2.0, indicating non-cooperativity in thermal unfolding.
Purification and Characterization of a Collagenolytic Protease from the Filefish, Novoden modestrus
Kim, Se-Kwon;Park, Pyo-Jam;Kim, Jong-Bae;Shahidi, Fereidoon 165
A serine collagenolytic protease was purified from the internal organs of filefish Novoden modestrus, by ammonium sulfate, ion-exchange chromatography on a DEAE-Sephadex A-50, ion-exchange rechromatography on a DEAE-Sephadex A-50, and gel filtration on a Sephadex G-150 column. The molecular mass of the filefish serine collagenase was estimated to be 27.0 kDa by gel filtration and SDS-PAGE. The purified collagenase was optimally active at pH 7.0-8.0 and $55^{\circ}C$. The purified enzyme was rich in Ala, Ser, Leu, and Ile, but poor in Trp, Pro, Tyr, and Met. In addition, the purified collagenolytic enzyme was strongly inhibited by N-P-toluenesulfonyl-L-lysine chloromethyl ketone (TLCK), diisopropylfluorophosphate (DFP), and soybean trypsin inhibitor.
In Vivo Effects of CETP Inhibitory Peptides in Hypercholesterolemic Rabbit and Cholesteryl Ester Transfer Protein-Transgenic Mice
Cho, Kyung-Hyun;Shin, Yong-Won;Choi, Myung-Sook;Bok, Song-Hae;Jang, Sang-Hee;Park, Yong-Bok 172
We previously reported that cholesteryl ester transfer protein (CETP) inhibitory peptides (designated $P_{28}$ and $P_{10})$ have anti-atherogenic effects in hypercholesterolemic rabbits (Biochim. Biophys. Acta (1998) 1391, 133-144). To further investigate those effects, we studied rabbit plasma that was collected after 30 h of a $P_{28}$ or $P_{10}$ injection. We found that there is a strong correlation between the in vivo CETP inhibition effects and alterations of lipoprotein particle size distribution in rabbit plasma, as determined on an agarose gel electrophoresis and gel filtration column chromatography. In vivo effects of the peptide were observed again in C57BL/6 mice that expressed simian CETP. The $P_{28}$ or $P_{10}$ peptide ($7\;{\mu}g/g$ of body weight) that was dissolved in saline was injected subcutaneously into the mice. The $P_{28}$ injection caused the partial inhibition of plasma CETP activity up to 50%, decreasing the total plasma cholesterol concentration by 30%, and increasing the ratio of HD/total-cholesterol concentration by 150% in the CETP-transgenic (tg) mice. The CETP inhibition by the $P_{28}$ or $P_{10}$ made alterations that modulated the size re-distribution of the lipoproteins in the blood stream. Particle size of the very low (VLDL) and low density lipoproteins (LDL) from the peptide-injected group was highly decreased compared to the saline-injected group (determined on the gel filtration column chromatography). In contrast, The HDL particle size of the $P_{28}$-injected group increased compared to the control group (saline-injected). The expression level of the CETP mRNA of the $P_{28}$-injected CETP-tg mouse appeared lower than the saline-injected CETP-tg mouse. These results suggest that the injection of the CETP inhibitory peptide could affect the CETP expression level in the liver by influencing lipoprotein metabolism.
Structural Roles of Cysteine 50 and Cysteine 230 Residues in Arabidopsis thaliana S-Adenosylmethionine Decarboxylase
Park, Sung-Joon;Cho, Young-Dong 178
The Arabidopsis thaliana S-Adenosylmethionine decarboxylase (AdoMetDC) cDNA ($GenBank^{TM}$ U63633) was cloned. Site-specific mutagenesis was performed to introduce mutations at the conserved cysteine $Cys^{50}$, $Cys^{83}$, and $Cys^{230}$, and $lys^{81}$ residues. In accordance with the human AdoMetDC, the C50A and C230A mutagenesis had minimal effect on catalytic activity, which was further supported by DTNB-mediated inactivation and reactivation. However, unlike the human AdoMetDC, the $Cys^{50}$ and $Cys^{230}$ mutants were much more thermally unstable than the wild type and other mutant AdoMetDC, suggesting the structural significance of cysteines. Furthermore, according to a circular dichroism spectrum analysis, the $Cys^{50}$ and $Cys^{230}$ mutants show a higher a-helix content and lower coiled-coil content when compared to that of wild type and the other mutant AdoMetDC. Also, the three-dimensional structure of Arabidopsis thaliana AdoMetDC could further support all of the data presented here. Summarily, we suggest that the $Cys^{50}$ and $Cys^{230}$ residues are structurally important.
Interaction Between Acid-Labile Subunit and Insulin-like Growth Factor Binding Protein 3 Expressed in Xenopus Oocytes
Choi, Kyung-Yi;Lee, Dong-Hee 186
The acid-bible subunit (ALS) associates with the insulinlike growth factor (IGF)-I or II, and the IGF binding protein-3 (IGFBP-3) in order to form a 150-kD complex in the circulation. This complex may regulate the serum IGFs by restricting them in the vascular system and promoting their endocrine actions. Little is known about how ALS binds to IGFBP3, which connects the IGFs to ALS. Xenopus oocyte was utilized to study the function of ALS in assembling IGFs into the ternary complexes. Xenopus oocyte was shown to correctly translate in vitro transcribed mRNAs of ALS and IGFBP3. IGFBP3 and ALS mRNAs were injected in a mixture, and their products were immunoprecipitated by antisera against ALS and IGFBP3. Contrary to traditional reports that ALS interacts only with IGF-bound IGFBP3, this study shows that ALS is capable of forming a binary complex with IGFBP3 in the absence of IGF When cross-linked by disuccinimidyl suberate, the band that represents the ALS-IGFBP3 complex was evident on the PAGE. IGFBP3 movement was monitored according to the distribution between the hemispheres. Following a localized translation in the vegetal hemisphere, IGFBP3 remained in the vegetal half in the presence of ALS. However, the mutant IGFBP3 freely diffused into the animal half, despite the presence of ALS, which is different from the wild type IGFBP3. This study, therefore, suggests that ALS may play an important role in sequestering IGFBP3 polypeptides via the intermolecular aggregation. Studies using this heterologous model will lead to a better understanding of the IGFBP3 and ALS that assemble into the ternary structure and circulate the IGF system.
Cloning of the Large Subunit of Replication Protein A (RPA) from Yeast Saccharomyces cerevisiae and Its DNA Binding Activity through Redox Potential
Jeong, Haeng-Soon;Jeong, In-Chel;Kim, Andre;Kang, Shin-Won;Kang, Ho-Sung;Kim, Yung-Jin;Lee, Suk-Hee;Park, Jang-Su 194
Eukaryotic replication protein A (RPA) is a single-stranded(ss) DNA binding protein with multiple functions in DNA replication, repair, and genetic recombination. The 70-kDa subunit of eukaryotic RPA contains a conserved four cysteine-type zinc-finger motif that has been implicated in the regulation of DNA replication and repair. Recently, we described a novel function for the zinc-finger motif in the regulation of human RPA's ssDNA binding activity through reduction-oxidation (redox). Here, we show that yeast RPA's ssDNA binding activity is regulated by redox potential through its RPA32 and/or RPA14 subunits. Yeast RPA requires a reducing agent, such as dithiothreitol (DTT), for its ssDNA binding activity. Also, under non-reducing conditions, its DNA binding activity decreases 20 fold. In contrast, the RPA 70 subunit does not require DTT for its DNA binding activity and is not affected by the redox condition. These results suggest that all three subunits are required for the regulation of RPA's DNA binding activity through redox potential.
A Novel Anticoagulant Protein from Scapharca broughtonii
Jung, Won-Kyo;Je, Jae-Young;Kim, Hee-Ju;Kim, Se-Kwon 199
An anticoagulant protein was purified from the edible portion of a blood ark shell, Scapharca broughtonii, by ammonium sulfate precipitation and column chromatography on DEAE-Sephadex A-50, Sephadex G-75, DEAE-Sephacel, and Biogel P-l00. In vitro assays with human plasma, the anticoagulant from 'S. broughtonii, prolonged the activated partial thromboplastin time (APTT) and inhibited the factor LX in the intrinsic pathway of the blood coagulation cascade. But, the fibrin plate assay did not show that the anticoagulant is a fibrinolytic protease. The molecular mass of the purified S. broughtonii anticoagulant was measured to be about 26.0kDa by gel filtration on a Sephadex G-75 column and SDS-PAGE under denaturing conditions. The optimum activity in the APTT assay was exhibited at pH 7.0-7.5 and $40-45^{\circ}C$ in the presence of $Ca^{2+}$.
In Vitro Determination of Dengue Virus Type 2 NS2B-NS3 Protease Activity with Fluorescent Peptide Substrates
Khumthong, Rabuesak;Angsuthanasombat, Chanan;Panyim, Sakol;Katzenmeier, Gerd 206
The NS2B-NS3(pro) polyprotein segment from the dengue virus serotype 2 strain 16681 was purified from overexpressing E. coli by metal chelate affinity chromatography and gel filtration. Enzymatic activity of the refolded NS2B-NS3(pro) protease complex was determined in vitro with dansyl-labeled peptide substrates, based upon native dengue virus type 2 cleavage sites. The 12mer substrate peptides and the cleavage products could be separated by reversed-phase HPLC, and were identified by UV and fluorescence detection. All of the peptide substrates (representing the DEN polyprotein junction sequences at the NS2A/NS2B, NS2B/NS3, NS3/NS4A and NS4B/NS5 sites) were cleaved by the recombinant protease NS2B-NS3(pro). No cleavage was observed with an enzymatically inactive S135A mutant of the NS3 protein, or with a modified substrate peptide of the NS3/NS4A polyprotein site that contained a K2093A substitution. Enzymatic activity was dependent on the salt concentration. A 50% decrease of activity was observed in the presence of 0.1M sodium chloride. Our results show that the NS3 protease activity of the refolded NS2B-NS3(pro) protein can be assayed in vitro with high specificity by using cleavage-junction derived peptide substrates.
Rat Malonyl-CoA Decarboxylase; Cloning, Expression in E. coli and its Biochemical Characterization
Lee, Gha-Young;Bahk, Young-Yil;Kim, Yu-Sam 213
Malonyl-CoA decarboxylase (E.C.4.1.1.9) catalyzes the conversion of malonyl-CoA to acetyl-CoA. Although the metabolic role of this enzyme has not been fully defined, it has been reported that its deficiency is associated with mild mental retardation, seizures, hypotonia, cadiomyopathy, developmental delay, vomiting, hypoglycemia, metabolic acidosis, and malonic aciduria. Here, we isolated a cDNA clone for malonyl CoA decarboxylase from a rat brain cDNA library, expressed it in E. coli, and characterized its biochemical properties. The full-length cDNA contained a single open-reading frame that encoded 491 amino acid residues with a calculated molecular weight of 54, 762 Da. Its deduced amino acid sequence revealed a 65.6% identity to that from the goose uropigial gland. The sequence of the first 38 amino acids represents a putative mitochondrial targeting sequence, and the last 3 amino acid sequences (SKL) represent peroxisomal targeting ones. The expression of malonyl CoA decarboxylase was observed over a wide range of tissues as a single transcript of 2.0 kb in size. The recombinant protein that was expressed in E. coli was used to characterize the biochemical properties, which showed a typical Michaelis-Menten substrate saturation pattern. The $K_m$ and $V_{max}$ were calculated to be $68\;{\mu}M$ and $42.6\;{\mu}mol/min/mg$, respectively.
Characterization of Aspartate Aminotransferase Isoenzymes from Leaves of Lupinus albus L. cv Estoril
Martins, Maria Luisa Louro;De Freitas Barbosa, Miguel Pedro;De Varennes E Mendonca, Amarilis Paula Alberti 220
Two aspartate aminoransferase (EC 2.6.1.1) isoenzymes (AAT-1 and AAT-2) from Lupinus albus L. cv Estoril were separated, purified, and characterized. The molecular weight, pI value, optimum pH, optimum temperature, and thermodynamic parameters for thermal inactivation of both isoenzymes were obtained. Studies of the kinetic mechanism, and the kinetics of product inhibition and high substrate concentration inhibition, were performed. The effect of some divalent ions and irreversible inhibitors on both AAT isoenzymes was also studied. Native PAGE showed a higher molecular weight for AAT-2 compared with AAT-1. AAT-1 appears to be more anionic than AAT-2, which was suggested by the anion exchange chromatography. SDS-PAGE showed a similar sub-unit molecular weight for both isoenzymes. The optimum pH (between 8,0 and 9.0) and temperature ($60-65^{\circ}C$) were similar for both isoenzymes. In the temperature range of $45-65^{\circ}C$, AAT-2 has higher thermostability than AAT-1. Both isoenzymes showed a high affinity for keto-acid substrates, as well as a higher affinity to aspartate than glutamate. Manganese ions induced an increase in both AAT isoenzymes activities, but no cooperative effect was detected. Among the inhibitors tested, hydroxylamine affected both isoenzymes activity by an irreversible inhibition mechanism.
The Purification and Characterization of a Bacillus stearothermophilus Methionine Aminopeptidase (MetAP)
Chung, Jae-Min;Chung, Il-Yup;Lee, Young-Seek 228
Methionine aminopeptidase (MetAP) catalyzes the removal of an amino-terminal methionine from a newly synthesized polypeptide. The enzyme was purified to homogeneity from Bacillus stearothermophilus (KCTC 1752) by a procedure that involves heat precipitation and four sequential chromatographs (including DEAE-Sepharose ion exchange, hydroxylapatite, Ultrogel AcA 54 gel filtration, and Reactive red 120 dye affinity chromatography). The apparent molecular masses of the enzyme were 81,300 Da and 41,000 Da, as determined by gel filtration chromatography and sodium dodecylsulfate polyacrylamide gel electrophoresis (SDS-PAGE), respectively. This indicates that the enzyme is comprised of two identical subunits. The MetAP specifically hydrolyzed the N-terminal residue of Met-Ala-Ser that was used as a substrate, and exhibited a strong preference for Met-Ala-Ser over Leu-Gly-Gly, Leu-Ser-Phe, and Leu-Leu-Tyr. The enzyme has an optimal pH at 8.0, an optimal temperature at $80^{\circ}C$, and pI at 4.1. The enzyme was heat-stable, as its activity remained unaltered when incubated at $80^{\circ}C$ for 45 min. The Km and Vmax values of the enzyme were 3.0mM and 1.7 mmol/min/mg, respectively. The B. stearothernmophilus MetAP was completely inactivated by EDTA and required $Co^{2+}$ ion(s) for activation, suggesting the metal dependence of this enzyme.
Relationship Between Acrylamide Concentration and Enzymatic Activity in An Improved Single Fibrin Zymogram Gel System
Choi, Nack-Shick;Kim, Byoung-Young;Lee, Jin-Young;Yoon, Kab-Seog;Han, Kyoung-Yoen;Kim, Seung-Ho 236
Based on the zymography analysis, Bacillus sp. DJ-4 (screened from Doen-Jang, a Korean traditional fermented food) secretes seven extracellular fibrinolytic enzymes (EFEs; 68, 64, 55, 45, 33, 27, and 13 kDa) in culture broth. These seven EFEs were analyzed by newly applied SDS-fibrin zymography combined with gradient polyacrylamide (SDS-FZGP). This improved gel system was used with a 5-20% acrylamide gradient in a fibrin zymogram gel for the separation of proteins with molecular masses from below 10kDa to over 100kDa on one gel plate. Using this system, high molecular weight bands (HMWBs) were clearly and sharply resolved. We also examined the relationship between an acrylamide concentration and the enzymatic activity of EFE using densitometric analysis.
Structure and Activity of Angiotensin I Converting Enzyme Inhibitory Peptides Derived from Alaskan Pollack Skin
Byun, Hee-Guk;Kim, Se-Kwon 239
Angiotensin I that converts the enzyme (ACE) inhibitory peptide, Gly-Pro-Leu, previously purified and identified from the Alaskan pollack skin gelatin hydrolysate, were synthesized. In addition, the peptides Gly-Leu-Pro, Leu-Gly-Pro, Leu-Pro-Gly, Pro-Gly-Leu, Pro-Leu-Gly, Gly-Pro, and Pro-Leu, which consisted of glycine, proline, and leucine, were synthesized by the solid-phase method. The $IC_{50}$ values of each tripeptide - namely Leu-Gly-Pro, Gly-Leu-Pro, Gly-Pro-Leu, Pro-Leu-Gly, Leu-Pro-Gly, and Pro-Gly-Leu - were 0.72, 1.62, 2.65, 4.74, 5.73, and $13.93{\mu}M$, respectively. The ACE inhibitory activity of these tripeptides was higher than that of dipeptides, such as Gly-Pro and Pro-Leu with $IC_{50}$ values of 252.6 and $337.3\;{\mu}M$, respectively. Among the tripeptides, Leu-Gly-Pro and Gly-Leu-Pro had higher inhibitory activity than Gly-Pro-Leu that was isolated from the Alaskan pollack skin gelatin hydrolysate. Among the different types of tripeptides that were examined, the highest ACE inhibitory activity was observed for Leu-Gly-Pro. It had the leucine residue at the N-terminal and proline residue at the C-terminal.
Mutations within the Putative Active Site of Heterodimeric Deoxyguanosine Kinase Block the Allosteric Activation of the Deoxyadenosine Kinase Subunit
Park, In-Shik;Ives, David H. 244
Replacement of the Asp-84 residue of the deoxyguanosine kinase subunit of the tandem deoxyadenosine kinase/deoxyguanosine kinase (dAK/dGK) from Lactobacillus acidophilus R-26 by Ala, Asn, or Glu produced increased $K_m$ values for deoxyguanosine on dGK. However, it did not seem to affect the binding of Mg-ATP. The Asp-84 dGK replacements had no apparent effect on the binding of deoxyadenosine by dAK. However, the mutant dGKs were no longer inhibited by dGTP, normally a potent distal end-product inhibitor of dGK. Moreover, the allosteric activation of dAK activity by dGTP or dGuo was lost in the modified heterodimeric dAK/dGK enzyme. Therefore, it seems very likely that Asp-84 participates in dGuo binding at the active site of the dGK subunit of dAK/dGK from Lactobacillus acidophilus R-26.
A Simple Method for Elimination of False Positive Results in RT-PCR
Martel, Fatima;Grundemann, Dirk;Schomig, Edgar 248
Discrimination between the amplification of mRNA and contaminating genomic DNA is a common problem when performing a reverse transcriptase-polymerase chain reaction (RT-PCR). Even after treatment of the samples with DNAse, it is possible that negative controls (samples to which no reverse transcriptase was added) will give positive results. This indicates that there was amplification of DNA which was not generated during the reverse transcriptase step. The possibility exists that Taq DNA polymerase acts as a reverse transcriptase, generating cDNA from RNA during the PCR step. In order to test this hypothesis, we incubated samples with a DNAse-free RNAse after the cDNA synthesis. Comparison of the results obtained from these samples (incubated with or without DNAse-free RNAse) confirms that the reverse transcriptase activity of Taq DNA polymerase I is a possible source of false positive results when performing RT-PCR on intronless genes. Moreover, we describe here a simple and rapid method to overcome the false positive results that originate from this activity of Taq polymerase.
Mp00081: Standard tableaux —reading word permutation⟶ Permutations
Mp00235: Permutations —descent views to invisible inversion bottoms⟶ Permutations
[[1]]=>[1]=>[1]=>{{1}} [[1,2]]=>[1,2]=>[1,2]=>{{1},{2}} [[1],[2]]=>[2,1]=>[2,1]=>{{1,2}} [[1,2,3]]=>[1,2,3]=>[1,2,3]=>{{1},{2},{3}} [[1,3],[2]]=>[2,1,3]=>[2,1,3]=>{{1,2},{3}} [[1,2],[3]]=>[3,1,2]=>[3,1,2]=>{{1,2,3}} [[1],[2],[3]]=>[3,2,1]=>[2,3,1]=>{{1,2,3}} [[1,2,3,4]]=>[1,2,3,4]=>[1,2,3,4]=>{{1},{2},{3},{4}} [[1,3,4],[2]]=>[2,1,3,4]=>[2,1,3,4]=>{{1,2},{3},{4}} [[1,2,4],[3]]=>[3,1,2,4]=>[3,1,2,4]=>{{1,2,3},{4}} [[1,2,3],[4]]=>[4,1,2,3]=>[4,1,2,3]=>{{1,2,3,4}} [[1,3],[2,4]]=>[2,4,1,3]=>[4,2,1,3]=>{{1,3,4},{2}} [[1,2],[3,4]]=>[3,4,1,2]=>[4,1,3,2]=>{{1,2,4},{3}} [[1,4],[2],[3]]=>[3,2,1,4]=>[2,3,1,4]=>{{1,2,3},{4}} [[1,3],[2],[4]]=>[4,2,1,3]=>[2,4,1,3]=>{{1,2,3,4}} [[1,2],[3],[4]]=>[4,3,1,2]=>[3,1,4,2]=>{{1,2,3,4}} [[1],[2],[3],[4]]=>[4,3,2,1]=>[2,3,4,1]=>{{1,2,3,4}} [[1,2,3,4,5]]=>[1,2,3,4,5]=>[1,2,3,4,5]=>{{1},{2},{3},{4},{5}} [[1,3,4,5],[2]]=>[2,1,3,4,5]=>[2,1,3,4,5]=>{{1,2},{3},{4},{5}} [[1,2,4,5],[3]]=>[3,1,2,4,5]=>[3,1,2,4,5]=>{{1,2,3},{4},{5}} [[1,2,3,5],[4]]=>[4,1,2,3,5]=>[4,1,2,3,5]=>{{1,2,3,4},{5}} [[1,2,3,4],[5]]=>[5,1,2,3,4]=>[5,1,2,3,4]=>{{1,2,3,4,5}} [[1,3,5],[2,4]]=>[2,4,1,3,5]=>[4,2,1,3,5]=>{{1,3,4},{2},{5}} [[1,2,5],[3,4]]=>[3,4,1,2,5]=>[4,1,3,2,5]=>{{1,2,4},{3},{5}} [[1,3,4],[2,5]]=>[2,5,1,3,4]=>[5,2,1,3,4]=>{{1,3,4,5},{2}} [[1,2,4],[3,5]]=>[3,5,1,2,4]=>[5,1,3,2,4]=>{{1,2,4,5},{3}} [[1,2,3],[4,5]]=>[4,5,1,2,3]=>[5,1,2,4,3]=>{{1,2,3,5},{4}} [[1,4,5],[2],[3]]=>[3,2,1,4,5]=>[2,3,1,4,5]=>{{1,2,3},{4},{5}} [[1,3,5],[2],[4]]=>[4,2,1,3,5]=>[2,4,1,3,5]=>{{1,2,3,4},{5}} [[1,2,5],[3],[4]]=>[4,3,1,2,5]=>[3,1,4,2,5]=>{{1,2,3,4},{5}} [[1,3,4],[2],[5]]=>[5,2,1,3,4]=>[2,5,1,3,4]=>{{1,2,3,4,5}} [[1,2,4],[3],[5]]=>[5,3,1,2,4]=>[3,1,5,2,4]=>{{1,2,3,4,5}} [[1,2,3],[4],[5]]=>[5,4,1,2,3]=>[4,1,2,5,3]=>{{1,2,3,4,5}} [[1,4],[2,5],[3]]=>[3,2,5,1,4]=>[5,3,2,1,4]=>{{1,4,5},{2,3}} [[1,3],[2,5],[4]]=>[4,2,5,1,3]=>[5,4,1,2,3]=>{{1,3,5},{2,4}} [[1,2],[3,5],[4]]=>[4,3,5,1,2]=>[5,1,4,3,2]=>{{1,2,5},{3,4}} [[1,3],[2,4],[5]]=>[5,2,4,1,3]=>[4,5,1,2,3]=>{{1,2,3,4,5}} 
[[1,2],[3,4],[5]]=>[5,3,4,1,2]=>[4,1,5,3,2]=>{{1,2,3,4,5}} [[1,5],[2],[3],[4]]=>[4,3,2,1,5]=>[2,3,4,1,5]=>{{1,2,3,4},{5}} [[1,4],[2],[3],[5]]=>[5,3,2,1,4]=>[2,3,5,1,4]=>{{1,2,3,4,5}} [[1,3],[2],[4],[5]]=>[5,4,2,1,3]=>[2,4,1,5,3]=>{{1,2,3,4,5}} [[1,2],[3],[4],[5]]=>[5,4,3,1,2]=>[3,1,4,5,2]=>{{1,2,3,4,5}} [[1],[2],[3],[4],[5]]=>[5,4,3,2,1]=>[2,3,4,5,1]=>{{1,2,3,4,5}} [[1,2,3,4,5,6]]=>[1,2,3,4,5,6]=>[1,2,3,4,5,6]=>{{1},{2},{3},{4},{5},{6}} [[1,3,4,5,6],[2]]=>[2,1,3,4,5,6]=>[2,1,3,4,5,6]=>{{1,2},{3},{4},{5},{6}} [[1,2,4,5,6],[3]]=>[3,1,2,4,5,6]=>[3,1,2,4,5,6]=>{{1,2,3},{4},{5},{6}} [[1,2,3,5,6],[4]]=>[4,1,2,3,5,6]=>[4,1,2,3,5,6]=>{{1,2,3,4},{5},{6}} [[1,2,3,4,6],[5]]=>[5,1,2,3,4,6]=>[5,1,2,3,4,6]=>{{1,2,3,4,5},{6}} [[1,2,3,4,5],[6]]=>[6,1,2,3,4,5]=>[6,1,2,3,4,5]=>{{1,2,3,4,5,6}} [[1,3,5,6],[2,4]]=>[2,4,1,3,5,6]=>[4,2,1,3,5,6]=>{{1,3,4},{2},{5},{6}} [[1,2,5,6],[3,4]]=>[3,4,1,2,5,6]=>[4,1,3,2,5,6]=>{{1,2,4},{3},{5},{6}} [[1,3,4,6],[2,5]]=>[2,5,1,3,4,6]=>[5,2,1,3,4,6]=>{{1,3,4,5},{2},{6}} [[1,2,4,6],[3,5]]=>[3,5,1,2,4,6]=>[5,1,3,2,4,6]=>{{1,2,4,5},{3},{6}} [[1,2,3,6],[4,5]]=>[4,5,1,2,3,6]=>[5,1,2,4,3,6]=>{{1,2,3,5},{4},{6}} [[1,3,4,5],[2,6]]=>[2,6,1,3,4,5]=>[6,2,1,3,4,5]=>{{1,3,4,5,6},{2}} [[1,2,4,5],[3,6]]=>[3,6,1,2,4,5]=>[6,1,3,2,4,5]=>{{1,2,4,5,6},{3}} [[1,2,3,5],[4,6]]=>[4,6,1,2,3,5]=>[6,1,2,4,3,5]=>{{1,2,3,5,6},{4}} [[1,2,3,4],[5,6]]=>[5,6,1,2,3,4]=>[6,1,2,3,5,4]=>{{1,2,3,4,6},{5}} [[1,4,5,6],[2],[3]]=>[3,2,1,4,5,6]=>[2,3,1,4,5,6]=>{{1,2,3},{4},{5},{6}} [[1,3,5,6],[2],[4]]=>[4,2,1,3,5,6]=>[2,4,1,3,5,6]=>{{1,2,3,4},{5},{6}} [[1,2,5,6],[3],[4]]=>[4,3,1,2,5,6]=>[3,1,4,2,5,6]=>{{1,2,3,4},{5},{6}} [[1,3,4,6],[2],[5]]=>[5,2,1,3,4,6]=>[2,5,1,3,4,6]=>{{1,2,3,4,5},{6}} [[1,2,4,6],[3],[5]]=>[5,3,1,2,4,6]=>[3,1,5,2,4,6]=>{{1,2,3,4,5},{6}} [[1,2,3,6],[4],[5]]=>[5,4,1,2,3,6]=>[4,1,2,5,3,6]=>{{1,2,3,4,5},{6}} [[1,3,4,5],[2],[6]]=>[6,2,1,3,4,5]=>[2,6,1,3,4,5]=>{{1,2,3,4,5,6}} [[1,2,4,5],[3],[6]]=>[6,3,1,2,4,5]=>[3,1,6,2,4,5]=>{{1,2,3,4,5,6}} 
[[1,2,3,5],[4],[6]]=>[6,4,1,2,3,5]=>[4,1,2,6,3,5]=>{{1,2,3,4,5,6}} [[1,2,3,4],[5],[6]]=>[6,5,1,2,3,4]=>[5,1,2,3,6,4]=>{{1,2,3,4,5,6}} [[1,3,5],[2,4,6]]=>[2,4,6,1,3,5]=>[6,2,1,4,3,5]=>{{1,3,5,6},{2},{4}} [[1,2,5],[3,4,6]]=>[3,4,6,1,2,5]=>[6,1,3,4,2,5]=>{{1,2,5,6},{3},{4}} [[1,3,4],[2,5,6]]=>[2,5,6,1,3,4]=>[6,2,1,3,5,4]=>{{1,3,4,6},{2},{5}} [[1,2,4],[3,5,6]]=>[3,5,6,1,2,4]=>[6,1,3,2,5,4]=>{{1,2,4,6},{3},{5}} [[1,2,3],[4,5,6]]=>[4,5,6,1,2,3]=>[6,1,2,4,5,3]=>{{1,2,3,6},{4},{5}} [[1,4,6],[2,5],[3]]=>[3,2,5,1,4,6]=>[5,3,2,1,4,6]=>{{1,4,5},{2,3},{6}} [[1,3,6],[2,5],[4]]=>[4,2,5,1,3,6]=>[5,4,1,2,3,6]=>{{1,3,5},{2,4},{6}} [[1,2,6],[3,5],[4]]=>[4,3,5,1,2,6]=>[5,1,4,3,2,6]=>{{1,2,5},{3,4},{6}} [[1,3,6],[2,4],[5]]=>[5,2,4,1,3,6]=>[4,5,1,2,3,6]=>{{1,2,3,4,5},{6}} [[1,2,6],[3,4],[5]]=>[5,3,4,1,2,6]=>[4,1,5,3,2,6]=>{{1,2,3,4,5},{6}} [[1,4,5],[2,6],[3]]=>[3,2,6,1,4,5]=>[6,3,2,1,4,5]=>{{1,4,5,6},{2,3}} [[1,3,5],[2,6],[4]]=>[4,2,6,1,3,5]=>[6,4,1,2,3,5]=>{{1,3,5,6},{2,4}} [[1,2,5],[3,6],[4]]=>[4,3,6,1,2,5]=>[6,1,4,3,2,5]=>{{1,2,5,6},{3,4}} [[1,3,4],[2,6],[5]]=>[5,2,6,1,3,4]=>[6,5,1,3,2,4]=>{{1,3,4,6},{2,5}} [[1,2,4],[3,6],[5]]=>[5,3,6,1,2,4]=>[6,1,5,2,3,4]=>{{1,2,4,6},{3,5}} [[1,2,3],[4,6],[5]]=>[5,4,6,1,2,3]=>[6,1,2,5,4,3]=>{{1,2,3,6},{4,5}} [[1,3,5],[2,4],[6]]=>[6,2,4,1,3,5]=>[4,6,1,2,3,5]=>{{1,2,3,4,5,6}} [[1,2,5],[3,4],[6]]=>[6,3,4,1,2,5]=>[4,1,6,3,2,5]=>{{1,2,3,4,5,6}} [[1,3,4],[2,5],[6]]=>[6,2,5,1,3,4]=>[5,6,1,3,2,4]=>{{1,2,3,4,5,6}} [[1,2,4],[3,5],[6]]=>[6,3,5,1,2,4]=>[5,1,6,2,3,4]=>{{1,2,3,4,5,6}} [[1,2,3],[4,5],[6]]=>[6,4,5,1,2,3]=>[5,1,2,6,4,3]=>{{1,2,3,4,5,6}} [[1,5,6],[2],[3],[4]]=>[4,3,2,1,5,6]=>[2,3,4,1,5,6]=>{{1,2,3,4},{5},{6}} [[1,4,6],[2],[3],[5]]=>[5,3,2,1,4,6]=>[2,3,5,1,4,6]=>{{1,2,3,4,5},{6}} [[1,3,6],[2],[4],[5]]=>[5,4,2,1,3,6]=>[2,4,1,5,3,6]=>{{1,2,3,4,5},{6}} [[1,2,6],[3],[4],[5]]=>[5,4,3,1,2,6]=>[3,1,4,5,2,6]=>{{1,2,3,4,5},{6}} [[1,4,5],[2],[3],[6]]=>[6,3,2,1,4,5]=>[2,3,6,1,4,5]=>{{1,2,3,4,5,6}} 
[[1,3,5],[2],[4],[6]]=>[6,4,2,1,3,5]=>[2,4,1,6,3,5]=>{{1,2,3,4,5,6}} [[1,2,5],[3],[4],[6]]=>[6,4,3,1,2,5]=>[3,1,4,6,2,5]=>{{1,2,3,4,5,6}} [[1,3,4],[2],[5],[6]]=>[6,5,2,1,3,4]=>[2,5,1,3,6,4]=>{{1,2,3,4,5,6}} [[1,2,4],[3],[5],[6]]=>[6,5,3,1,2,4]=>[3,1,5,2,6,4]=>{{1,2,3,4,5,6}} [[1,2,3],[4],[5],[6]]=>[6,5,4,1,2,3]=>[4,1,2,5,6,3]=>{{1,2,3,4,5,6}} [[1,4],[2,5],[3,6]]=>[3,6,2,5,1,4]=>[5,6,3,1,2,4]=>{{1,2,4,5,6},{3}} [[1,3],[2,5],[4,6]]=>[4,6,2,5,1,3]=>[5,6,1,4,2,3]=>{{1,2,3,5,6},{4}} [[1,2],[3,5],[4,6]]=>[4,6,3,5,1,2]=>[5,1,6,4,3,2]=>{{1,2,3,5,6},{4}} [[1,3],[2,4],[5,6]]=>[5,6,2,4,1,3]=>[4,6,1,2,5,3]=>{{1,2,3,4,6},{5}} [[1,2],[3,4],[5,6]]=>[5,6,3,4,1,2]=>[4,1,6,3,5,2]=>{{1,2,3,4,6},{5}} [[1,5],[2,6],[3],[4]]=>[4,3,2,6,1,5]=>[6,3,4,2,1,5]=>{{1,5,6},{2,3,4}} [[1,4],[2,6],[3],[5]]=>[5,3,2,6,1,4]=>[6,3,5,1,2,4]=>{{1,4,6},{2,3,5}} [[1,3],[2,6],[4],[5]]=>[5,4,2,6,1,3]=>[6,4,1,5,2,3]=>{{1,3,6},{2,4,5}} [[1,2],[3,6],[4],[5]]=>[5,4,3,6,1,2]=>[6,1,4,5,3,2]=>{{1,2,6},{3,4,5}} [[1,4],[2,5],[3],[6]]=>[6,3,2,5,1,4]=>[5,3,6,1,2,4]=>{{1,2,3,4,5,6}} [[1,3],[2,5],[4],[6]]=>[6,4,2,5,1,3]=>[5,4,1,6,2,3]=>{{1,2,3,4,5,6}} [[1,2],[3,5],[4],[6]]=>[6,4,3,5,1,2]=>[5,1,4,6,3,2]=>{{1,2,3,4,5,6}} [[1,3],[2,4],[5],[6]]=>[6,5,2,4,1,3]=>[4,5,1,2,6,3]=>{{1,2,3,4,5,6}} [[1,2],[3,4],[5],[6]]=>[6,5,3,4,1,2]=>[4,1,5,3,6,2]=>{{1,2,3,4,5,6}} [[1,6],[2],[3],[4],[5]]=>[5,4,3,2,1,6]=>[2,3,4,5,1,6]=>{{1,2,3,4,5},{6}} [[1,5],[2],[3],[4],[6]]=>[6,4,3,2,1,5]=>[2,3,4,6,1,5]=>{{1,2,3,4,5,6}} [[1,4],[2],[3],[5],[6]]=>[6,5,3,2,1,4]=>[2,3,5,1,6,4]=>{{1,2,3,4,5,6}} [[1,3],[2],[4],[5],[6]]=>[6,5,4,2,1,3]=>[2,4,1,5,6,3]=>{{1,2,3,4,5,6}} [[1,2],[3],[4],[5],[6]]=>[6,5,4,3,1,2]=>[3,1,4,5,6,2]=>{{1,2,3,4,5,6}} [[1],[2],[3],[4],[5],[6]]=>[6,5,4,3,2,1]=>[2,3,4,5,6,1]=>{{1,2,3,4,5,6}} [[1,2,3,4,5,6,7]]=>[1,2,3,4,5,6,7]=>[1,2,3,4,5,6,7]=>{{1},{2},{3},{4},{5},{6},{7}} [[1,3,4,5,6,7],[2]]=>[2,1,3,4,5,6,7]=>[2,1,3,4,5,6,7]=>{{1,2},{3},{4},{5},{6},{7}} [[1,2,4,5,6,7],[3]]=>[3,1,2,4,5,6,7]=>[3,1,2,4,5,6,7]=>{{1,2,3},{4},{5},{6},{7}} 
[[1,2,3,5,6,7],[4]]=>[4,1,2,3,5,6,7]=>[4,1,2,3,5,6,7]=>{{1,2,3,4},{5},{6},{7}} [[1,2,3,4,6,7],[5]]=>[5,1,2,3,4,6,7]=>[5,1,2,3,4,6,7]=>{{1,2,3,4,5},{6},{7}} [[1,2,3,4,5,7],[6]]=>[6,1,2,3,4,5,7]=>[6,1,2,3,4,5,7]=>{{1,2,3,4,5,6},{7}} [[1,2,3,4,5,6],[7]]=>[7,1,2,3,4,5,6]=>[7,1,2,3,4,5,6]=>{{1,2,3,4,5,6,7}} [[1,3,5,6,7],[2,4]]=>[2,4,1,3,5,6,7]=>[4,2,1,3,5,6,7]=>{{1,3,4},{2},{5},{6},{7}} [[1,2,5,6,7],[3,4]]=>[3,4,1,2,5,6,7]=>[4,1,3,2,5,6,7]=>{{1,2,4},{3},{5},{6},{7}} [[1,3,4,6,7],[2,5]]=>[2,5,1,3,4,6,7]=>[5,2,1,3,4,6,7]=>{{1,3,4,5},{2},{6},{7}} [[1,2,4,6,7],[3,5]]=>[3,5,1,2,4,6,7]=>[5,1,3,2,4,6,7]=>{{1,2,4,5},{3},{6},{7}} [[1,2,3,6,7],[4,5]]=>[4,5,1,2,3,6,7]=>[5,1,2,4,3,6,7]=>{{1,2,3,5},{4},{6},{7}} [[1,3,4,5,7],[2,6]]=>[2,6,1,3,4,5,7]=>[6,2,1,3,4,5,7]=>{{1,3,4,5,6},{2},{7}} [[1,2,4,5,7],[3,6]]=>[3,6,1,2,4,5,7]=>[6,1,3,2,4,5,7]=>{{1,2,4,5,6},{3},{7}} [[1,2,3,5,7],[4,6]]=>[4,6,1,2,3,5,7]=>[6,1,2,4,3,5,7]=>{{1,2,3,5,6},{4},{7}} [[1,2,3,4,7],[5,6]]=>[5,6,1,2,3,4,7]=>[6,1,2,3,5,4,7]=>{{1,2,3,4,6},{5},{7}} [[1,3,4,5,6],[2,7]]=>[2,7,1,3,4,5,6]=>[7,2,1,3,4,5,6]=>{{1,3,4,5,6,7},{2}} [[1,2,4,5,6],[3,7]]=>[3,7,1,2,4,5,6]=>[7,1,3,2,4,5,6]=>{{1,2,4,5,6,7},{3}} [[1,2,3,5,6],[4,7]]=>[4,7,1,2,3,5,6]=>[7,1,2,4,3,5,6]=>{{1,2,3,5,6,7},{4}} [[1,2,3,4,6],[5,7]]=>[5,7,1,2,3,4,6]=>[7,1,2,3,5,4,6]=>{{1,2,3,4,6,7},{5}} [[1,2,3,4,5],[6,7]]=>[6,7,1,2,3,4,5]=>[7,1,2,3,4,6,5]=>{{1,2,3,4,5,7},{6}} [[1,4,5,6,7],[2],[3]]=>[3,2,1,4,5,6,7]=>[2,3,1,4,5,6,7]=>{{1,2,3},{4},{5},{6},{7}} [[1,3,5,6,7],[2],[4]]=>[4,2,1,3,5,6,7]=>[2,4,1,3,5,6,7]=>{{1,2,3,4},{5},{6},{7}} [[1,2,5,6,7],[3],[4]]=>[4,3,1,2,5,6,7]=>[3,1,4,2,5,6,7]=>{{1,2,3,4},{5},{6},{7}} [[1,3,4,6,7],[2],[5]]=>[5,2,1,3,4,6,7]=>[2,5,1,3,4,6,7]=>{{1,2,3,4,5},{6},{7}} [[1,2,4,6,7],[3],[5]]=>[5,3,1,2,4,6,7]=>[3,1,5,2,4,6,7]=>{{1,2,3,4,5},{6},{7}} [[1,2,3,6,7],[4],[5]]=>[5,4,1,2,3,6,7]=>[4,1,2,5,3,6,7]=>{{1,2,3,4,5},{6},{7}} [[1,3,4,5,7],[2],[6]]=>[6,2,1,3,4,5,7]=>[2,6,1,3,4,5,7]=>{{1,2,3,4,5,6},{7}} 
[[1,2,4,5,7],[3],[6]]=>[6,3,1,2,4,5,7]=>[3,1,6,2,4,5,7]=>{{1,2,3,4,5,6},{7}} [[1,2,3,5,7],[4],[6]]=>[6,4,1,2,3,5,7]=>[4,1,2,6,3,5,7]=>{{1,2,3,4,5,6},{7}} [[1,2,3,4,7],[5],[6]]=>[6,5,1,2,3,4,7]=>[5,1,2,3,6,4,7]=>{{1,2,3,4,5,6},{7}} [[1,3,4,5,6],[2],[7]]=>[7,2,1,3,4,5,6]=>[2,7,1,3,4,5,6]=>{{1,2,3,4,5,6,7}} [[1,2,4,5,6],[3],[7]]=>[7,3,1,2,4,5,6]=>[3,1,7,2,4,5,6]=>{{1,2,3,4,5,6,7}} [[1,2,3,5,6],[4],[7]]=>[7,4,1,2,3,5,6]=>[4,1,2,7,3,5,6]=>{{1,2,3,4,5,6,7}} [[1,2,3,4,6],[5],[7]]=>[7,5,1,2,3,4,6]=>[5,1,2,3,7,4,6]=>{{1,2,3,4,5,6,7}} [[1,2,3,4,5],[6],[7]]=>[7,6,1,2,3,4,5]=>[6,1,2,3,4,7,5]=>{{1,2,3,4,5,6,7}} [[1,3,5,7],[2,4,6]]=>[2,4,6,1,3,5,7]=>[6,2,1,4,3,5,7]=>{{1,3,5,6},{2},{4},{7}} [[1,2,5,7],[3,4,6]]=>[3,4,6,1,2,5,7]=>[6,1,3,4,2,5,7]=>{{1,2,5,6},{3},{4},{7}} [[1,3,4,7],[2,5,6]]=>[2,5,6,1,3,4,7]=>[6,2,1,3,5,4,7]=>{{1,3,4,6},{2},{5},{7}} [[1,2,4,7],[3,5,6]]=>[3,5,6,1,2,4,7]=>[6,1,3,2,5,4,7]=>{{1,2,4,6},{3},{5},{7}} [[1,2,3,7],[4,5,6]]=>[4,5,6,1,2,3,7]=>[6,1,2,4,5,3,7]=>{{1,2,3,6},{4},{5},{7}} [[1,3,5,6],[2,4,7]]=>[2,4,7,1,3,5,6]=>[7,2,1,4,3,5,6]=>{{1,3,5,6,7},{2},{4}} [[1,2,5,6],[3,4,7]]=>[3,4,7,1,2,5,6]=>[7,1,3,4,2,5,6]=>{{1,2,5,6,7},{3},{4}} [[1,3,4,6],[2,5,7]]=>[2,5,7,1,3,4,6]=>[7,2,1,3,5,4,6]=>{{1,3,4,6,7},{2},{5}} [[1,2,4,6],[3,5,7]]=>[3,5,7,1,2,4,6]=>[7,1,3,2,5,4,6]=>{{1,2,4,6,7},{3},{5}} [[1,2,3,6],[4,5,7]]=>[4,5,7,1,2,3,6]=>[7,1,2,4,5,3,6]=>{{1,2,3,6,7},{4},{5}} [[1,3,4,5],[2,6,7]]=>[2,6,7,1,3,4,5]=>[7,2,1,3,4,6,5]=>{{1,3,4,5,7},{2},{6}} [[1,2,4,5],[3,6,7]]=>[3,6,7,1,2,4,5]=>[7,1,3,2,4,6,5]=>{{1,2,4,5,7},{3},{6}} [[1,2,3,5],[4,6,7]]=>[4,6,7,1,2,3,5]=>[7,1,2,4,3,6,5]=>{{1,2,3,5,7},{4},{6}} [[1,2,3,4],[5,6,7]]=>[5,6,7,1,2,3,4]=>[7,1,2,3,5,6,4]=>{{1,2,3,4,7},{5},{6}} [[1,4,6,7],[2,5],[3]]=>[3,2,5,1,4,6,7]=>[5,3,2,1,4,6,7]=>{{1,4,5},{2,3},{6},{7}} [[1,3,6,7],[2,5],[4]]=>[4,2,5,1,3,6,7]=>[5,4,1,2,3,6,7]=>{{1,3,5},{2,4},{6},{7}} [[1,2,6,7],[3,5],[4]]=>[4,3,5,1,2,6,7]=>[5,1,4,3,2,6,7]=>{{1,2,5},{3,4},{6},{7}} 
[[1,3,6,7],[2,4],[5]]=>[5,2,4,1,3,6,7]=>[4,5,1,2,3,6,7]=>{{1,2,3,4,5},{6},{7}} [[1,2,6,7],[3,4],[5]]=>[5,3,4,1,2,6,7]=>[4,1,5,3,2,6,7]=>{{1,2,3,4,5},{6},{7}} [[1,4,5,7],[2,6],[3]]=>[3,2,6,1,4,5,7]=>[6,3,2,1,4,5,7]=>{{1,4,5,6},{2,3},{7}} [[1,3,5,7],[2,6],[4]]=>[4,2,6,1,3,5,7]=>[6,4,1,2,3,5,7]=>{{1,3,5,6},{2,4},{7}} [[1,2,5,7],[3,6],[4]]=>[4,3,6,1,2,5,7]=>[6,1,4,3,2,5,7]=>{{1,2,5,6},{3,4},{7}} [[1,3,4,7],[2,6],[5]]=>[5,2,6,1,3,4,7]=>[6,5,1,3,2,4,7]=>{{1,3,4,6},{2,5},{7}} [[1,2,4,7],[3,6],[5]]=>[5,3,6,1,2,4,7]=>[6,1,5,2,3,4,7]=>{{1,2,4,6},{3,5},{7}} [[1,2,3,7],[4,6],[5]]=>[5,4,6,1,2,3,7]=>[6,1,2,5,4,3,7]=>{{1,2,3,6},{4,5},{7}} [[1,3,5,7],[2,4],[6]]=>[6,2,4,1,3,5,7]=>[4,6,1,2,3,5,7]=>{{1,2,3,4,5,6},{7}} [[1,2,5,7],[3,4],[6]]=>[6,3,4,1,2,5,7]=>[4,1,6,3,2,5,7]=>{{1,2,3,4,5,6},{7}} [[1,3,4,7],[2,5],[6]]=>[6,2,5,1,3,4,7]=>[5,6,1,3,2,4,7]=>{{1,2,3,4,5,6},{7}} [[1,2,4,7],[3,5],[6]]=>[6,3,5,1,2,4,7]=>[5,1,6,2,3,4,7]=>{{1,2,3,4,5,6},{7}} [[1,2,3,7],[4,5],[6]]=>[6,4,5,1,2,3,7]=>[5,1,2,6,4,3,7]=>{{1,2,3,4,5,6},{7}} [[1,4,5,6],[2,7],[3]]=>[3,2,7,1,4,5,6]=>[7,3,2,1,4,5,6]=>{{1,4,5,6,7},{2,3}} [[1,3,5,6],[2,7],[4]]=>[4,2,7,1,3,5,6]=>[7,4,1,2,3,5,6]=>{{1,3,5,6,7},{2,4}} [[1,2,5,6],[3,7],[4]]=>[4,3,7,1,2,5,6]=>[7,1,4,3,2,5,6]=>{{1,2,5,6,7},{3,4}} [[1,3,4,6],[2,7],[5]]=>[5,2,7,1,3,4,6]=>[7,5,1,3,2,4,6]=>{{1,3,4,6,7},{2,5}} [[1,2,4,6],[3,7],[5]]=>[5,3,7,1,2,4,6]=>[7,1,5,2,3,4,6]=>{{1,2,4,6,7},{3,5}} [[1,2,3,6],[4,7],[5]]=>[5,4,7,1,2,3,6]=>[7,1,2,5,4,3,6]=>{{1,2,3,6,7},{4,5}} [[1,3,4,5],[2,7],[6]]=>[6,2,7,1,3,4,5]=>[7,6,1,3,4,2,5]=>{{1,3,4,5,7},{2,6}} [[1,2,4,5],[3,7],[6]]=>[6,3,7,1,2,4,5]=>[7,1,6,2,4,3,5]=>{{1,2,4,5,7},{3,6}} [[1,2,3,5],[4,7],[6]]=>[6,4,7,1,2,3,5]=>[7,1,2,6,3,4,5]=>{{1,2,3,5,7},{4,6}} [[1,2,3,4],[5,7],[6]]=>[6,5,7,1,2,3,4]=>[7,1,2,3,6,5,4]=>{{1,2,3,4,7},{5,6}} [[1,3,5,6],[2,4],[7]]=>[7,2,4,1,3,5,6]=>[4,7,1,2,3,5,6]=>{{1,2,3,4,5,6,7}} [[1,2,5,6],[3,4],[7]]=>[7,3,4,1,2,5,6]=>[4,1,7,3,2,5,6]=>{{1,2,3,4,5,6,7}} 
[[1,3,4,6],[2,5],[7]]=>[7,2,5,1,3,4,6]=>[5,7,1,3,2,4,6]=>{{1,2,3,4,5,6,7}} [[1,2,4,6],[3,5],[7]]=>[7,3,5,1,2,4,6]=>[5,1,7,2,3,4,6]=>{{1,2,3,4,5,6,7}} [[1,2,3,6],[4,5],[7]]=>[7,4,5,1,2,3,6]=>[5,1,2,7,4,3,6]=>{{1,2,3,4,5,6,7}} [[1,3,4,5],[2,6],[7]]=>[7,2,6,1,3,4,5]=>[6,7,1,3,4,2,5]=>{{1,2,3,4,5,6,7}} [[1,2,4,5],[3,6],[7]]=>[7,3,6,1,2,4,5]=>[6,1,7,2,4,3,5]=>{{1,2,3,4,5,6,7}} [[1,2,3,5],[4,6],[7]]=>[7,4,6,1,2,3,5]=>[6,1,2,7,3,4,5]=>{{1,2,3,4,5,6,7}} [[1,2,3,4],[5,6],[7]]=>[7,5,6,1,2,3,4]=>[6,1,2,3,7,5,4]=>{{1,2,3,4,5,6,7}} [[1,5,6,7],[2],[3],[4]]=>[4,3,2,1,5,6,7]=>[2,3,4,1,5,6,7]=>{{1,2,3,4},{5},{6},{7}} [[1,4,6,7],[2],[3],[5]]=>[5,3,2,1,4,6,7]=>[2,3,5,1,4,6,7]=>{{1,2,3,4,5},{6},{7}} [[1,3,6,7],[2],[4],[5]]=>[5,4,2,1,3,6,7]=>[2,4,1,5,3,6,7]=>{{1,2,3,4,5},{6},{7}} [[1,2,6,7],[3],[4],[5]]=>[5,4,3,1,2,6,7]=>[3,1,4,5,2,6,7]=>{{1,2,3,4,5},{6},{7}} [[1,4,5,7],[2],[3],[6]]=>[6,3,2,1,4,5,7]=>[2,3,6,1,4,5,7]=>{{1,2,3,4,5,6},{7}} [[1,3,5,7],[2],[4],[6]]=>[6,4,2,1,3,5,7]=>[2,4,1,6,3,5,7]=>{{1,2,3,4,5,6},{7}} [[1,2,5,7],[3],[4],[6]]=>[6,4,3,1,2,5,7]=>[3,1,4,6,2,5,7]=>{{1,2,3,4,5,6},{7}} [[1,3,4,7],[2],[5],[6]]=>[6,5,2,1,3,4,7]=>[2,5,1,3,6,4,7]=>{{1,2,3,4,5,6},{7}} [[1,2,4,7],[3],[5],[6]]=>[6,5,3,1,2,4,7]=>[3,1,5,2,6,4,7]=>{{1,2,3,4,5,6},{7}} [[1,2,3,7],[4],[5],[6]]=>[6,5,4,1,2,3,7]=>[4,1,2,5,6,3,7]=>{{1,2,3,4,5,6},{7}} [[1,4,5,6],[2],[3],[7]]=>[7,3,2,1,4,5,6]=>[2,3,7,1,4,5,6]=>{{1,2,3,4,5,6,7}} [[1,3,5,6],[2],[4],[7]]=>[7,4,2,1,3,5,6]=>[2,4,1,7,3,5,6]=>{{1,2,3,4,5,6,7}} [[1,2,5,6],[3],[4],[7]]=>[7,4,3,1,2,5,6]=>[3,1,4,7,2,5,6]=>{{1,2,3,4,5,6,7}} [[1,3,4,6],[2],[5],[7]]=>[7,5,2,1,3,4,6]=>[2,5,1,3,7,4,6]=>{{1,2,3,4,5,6,7}} [[1,2,4,6],[3],[5],[7]]=>[7,5,3,1,2,4,6]=>[3,1,5,2,7,4,6]=>{{1,2,3,4,5,6,7}} [[1,2,3,6],[4],[5],[7]]=>[7,5,4,1,2,3,6]=>[4,1,2,5,7,3,6]=>{{1,2,3,4,5,6,7}} [[1,3,4,5],[2],[6],[7]]=>[7,6,2,1,3,4,5]=>[2,6,1,3,4,7,5]=>{{1,2,3,4,5,6,7}} [[1,2,4,5],[3],[6],[7]]=>[7,6,3,1,2,4,5]=>[3,1,6,2,4,7,5]=>{{1,2,3,4,5,6,7}} 
[[1,2,3,5],[4],[6],[7]]=>[7,6,4,1,2,3,5]=>[4,1,2,6,3,7,5]=>{{1,2,3,4,5,6,7}} [[1,2,3,4],[5],[6],[7]]=>[7,6,5,1,2,3,4]=>[5,1,2,3,6,7,4]=>{{1,2,3,4,5,6,7}} [[1,4,6],[2,5,7],[3]]=>[3,2,5,7,1,4,6]=>[7,3,2,1,5,4,6]=>{{1,4,6,7},{2,3},{5}} [[1,3,6],[2,5,7],[4]]=>[4,2,5,7,1,3,6]=>[7,4,1,2,5,3,6]=>{{1,3,6,7},{2,4},{5}} [[1,2,6],[3,5,7],[4]]=>[4,3,5,7,1,2,6]=>[7,1,4,3,5,2,6]=>{{1,2,6,7},{3,4},{5}} [[1,3,6],[2,4,7],[5]]=>[5,2,4,7,1,3,6]=>[7,5,1,2,4,3,6]=>{{1,3,6,7},{2,4,5}} [[1,2,6],[3,4,7],[5]]=>[5,3,4,7,1,2,6]=>[7,1,5,3,4,2,6]=>{{1,2,6,7},{3,4,5}} [[1,4,5],[2,6,7],[3]]=>[3,2,6,7,1,4,5]=>[7,3,2,1,4,6,5]=>{{1,4,5,7},{2,3},{6}} [[1,3,5],[2,6,7],[4]]=>[4,2,6,7,1,3,5]=>[7,4,1,2,3,6,5]=>{{1,3,5,7},{2,4},{6}} [[1,2,5],[3,6,7],[4]]=>[4,3,6,7,1,2,5]=>[7,1,4,3,2,6,5]=>{{1,2,5,7},{3,4},{6}} [[1,3,4],[2,6,7],[5]]=>[5,2,6,7,1,3,4]=>[7,5,1,3,2,6,4]=>{{1,3,4,7},{2,5},{6}} [[1,2,4],[3,6,7],[5]]=>[5,3,6,7,1,2,4]=>[7,1,5,2,3,6,4]=>{{1,2,4,7},{3,5},{6}} [[1,2,3],[4,6,7],[5]]=>[5,4,6,7,1,2,3]=>[7,1,2,5,4,6,3]=>{{1,2,3,7},{4,5},{6}} [[1,3,5],[2,4,7],[6]]=>[6,2,4,7,1,3,5]=>[7,6,1,2,3,4,5]=>{{1,3,5,7},{2,4,6}} [[1,2,5],[3,4,7],[6]]=>[6,3,4,7,1,2,5]=>[7,1,6,3,2,4,5]=>{{1,2,5,7},{3,4,6}} [[1,3,4],[2,5,7],[6]]=>[6,2,5,7,1,3,4]=>[7,6,1,3,2,5,4]=>{{1,3,4,7},{2,5,6}} [[1,2,4],[3,5,7],[6]]=>[6,3,5,7,1,2,4]=>[7,1,6,2,3,5,4]=>{{1,2,4,7},{3,5,6}} [[1,2,3],[4,5,7],[6]]=>[6,4,5,7,1,2,3]=>[7,1,2,6,4,5,3]=>{{1,2,3,7},{4,5,6}} [[1,3,5],[2,4,6],[7]]=>[7,2,4,6,1,3,5]=>[6,7,1,2,3,4,5]=>{{1,2,3,4,5,6,7}} [[1,2,5],[3,4,6],[7]]=>[7,3,4,6,1,2,5]=>[6,1,7,3,2,4,5]=>{{1,2,3,4,5,6,7}} [[1,3,4],[2,5,6],[7]]=>[7,2,5,6,1,3,4]=>[6,7,1,3,2,5,4]=>{{1,2,3,4,5,6,7}} [[1,2,4],[3,5,6],[7]]=>[7,3,5,6,1,2,4]=>[6,1,7,2,3,5,4]=>{{1,2,3,4,5,6,7}} [[1,2,3],[4,5,6],[7]]=>[7,4,5,6,1,2,3]=>[6,1,2,7,4,5,3]=>{{1,2,3,4,5,6,7}} [[1,4,7],[2,5],[3,6]]=>[3,6,2,5,1,4,7]=>[5,6,3,1,2,4,7]=>{{1,2,4,5,6},{3},{7}} [[1,3,7],[2,5],[4,6]]=>[4,6,2,5,1,3,7]=>[5,6,1,4,2,3,7]=>{{1,2,3,5,6},{4},{7}} 
[[1,2,7],[3,5],[4,6]]=>[4,6,3,5,1,2,7]=>[5,1,6,4,3,2,7]=>{{1,2,3,5,6},{4},{7}} [[1,3,7],[2,4],[5,6]]=>[5,6,2,4,1,3,7]=>[4,6,1,2,5,3,7]=>{{1,2,3,4,6},{5},{7}} [[1,2,7],[3,4],[5,6]]=>[5,6,3,4,1,2,7]=>[4,1,6,3,5,2,7]=>{{1,2,3,4,6},{5},{7}} [[1,4,6],[2,5],[3,7]]=>[3,7,2,5,1,4,6]=>[5,7,3,1,2,4,6]=>{{1,2,4,5,6,7},{3}} [[1,3,6],[2,5],[4,7]]=>[4,7,2,5,1,3,6]=>[5,7,1,4,2,3,6]=>{{1,2,3,5,6,7},{4}} [[1,2,6],[3,5],[4,7]]=>[4,7,3,5,1,2,6]=>[5,1,7,4,3,2,6]=>{{1,2,3,5,6,7},{4}} [[1,3,6],[2,4],[5,7]]=>[5,7,2,4,1,3,6]=>[4,7,1,2,5,3,6]=>{{1,2,3,4,6,7},{5}} [[1,2,6],[3,4],[5,7]]=>[5,7,3,4,1,2,6]=>[4,1,7,3,5,2,6]=>{{1,2,3,4,6,7},{5}} [[1,4,5],[2,6],[3,7]]=>[3,7,2,6,1,4,5]=>[6,7,3,1,4,2,5]=>{{1,2,4,5,6,7},{3}} [[1,3,5],[2,6],[4,7]]=>[4,7,2,6,1,3,5]=>[6,7,1,4,3,2,5]=>{{1,2,3,5,6,7},{4}} [[1,2,5],[3,6],[4,7]]=>[4,7,3,6,1,2,5]=>[6,1,7,4,2,3,5]=>{{1,2,3,5,6,7},{4}} [[1,3,4],[2,6],[5,7]]=>[5,7,2,6,1,3,4]=>[6,7,1,3,5,2,4]=>{{1,2,3,4,6,7},{5}} [[1,2,4],[3,6],[5,7]]=>[5,7,3,6,1,2,4]=>[6,1,7,2,5,3,4]=>{{1,2,3,4,6,7},{5}} [[1,2,3],[4,6],[5,7]]=>[5,7,4,6,1,2,3]=>[6,1,2,7,5,4,3]=>{{1,2,3,4,6,7},{5}} [[1,3,5],[2,4],[6,7]]=>[6,7,2,4,1,3,5]=>[4,7,1,2,3,6,5]=>{{1,2,3,4,5,7},{6}} [[1,2,5],[3,4],[6,7]]=>[6,7,3,4,1,2,5]=>[4,1,7,3,2,6,5]=>{{1,2,3,4,5,7},{6}} [[1,3,4],[2,5],[6,7]]=>[6,7,2,5,1,3,4]=>[5,7,1,3,2,6,4]=>{{1,2,3,4,5,7},{6}} [[1,2,4],[3,5],[6,7]]=>[6,7,3,5,1,2,4]=>[5,1,7,2,3,6,4]=>{{1,2,3,4,5,7},{6}} [[1,2,3],[4,5],[6,7]]=>[6,7,4,5,1,2,3]=>[5,1,2,7,4,6,3]=>{{1,2,3,4,5,7},{6}} [[1,5,7],[2,6],[3],[4]]=>[4,3,2,6,1,5,7]=>[6,3,4,2,1,5,7]=>{{1,5,6},{2,3,4},{7}} [[1,4,7],[2,6],[3],[5]]=>[5,3,2,6,1,4,7]=>[6,3,5,1,2,4,7]=>{{1,4,6},{2,3,5},{7}} [[1,3,7],[2,6],[4],[5]]=>[5,4,2,6,1,3,7]=>[6,4,1,5,2,3,7]=>{{1,3,6},{2,4,5},{7}} [[1,2,7],[3,6],[4],[5]]=>[5,4,3,6,1,2,7]=>[6,1,4,5,3,2,7]=>{{1,2,6},{3,4,5},{7}} [[1,4,7],[2,5],[3],[6]]=>[6,3,2,5,1,4,7]=>[5,3,6,1,2,4,7]=>{{1,2,3,4,5,6},{7}} [[1,3,7],[2,5],[4],[6]]=>[6,4,2,5,1,3,7]=>[5,4,1,6,2,3,7]=>{{1,2,3,4,5,6},{7}} 
[[1,2,7],[3,5],[4],[6]]=>[6,4,3,5,1,2,7]=>[5,1,4,6,3,2,7]=>{{1,2,3,4,5,6},{7}} [[1,3,7],[2,4],[5],[6]]=>[6,5,2,4,1,3,7]=>[4,5,1,2,6,3,7]=>{{1,2,3,4,5,6},{7}} [[1,2,7],[3,4],[5],[6]]=>[6,5,3,4,1,2,7]=>[4,1,5,3,6,2,7]=>{{1,2,3,4,5,6},{7}} [[1,5,6],[2,7],[3],[4]]=>[4,3,2,7,1,5,6]=>[7,3,4,2,1,5,6]=>{{1,5,6,7},{2,3,4}} [[1,4,6],[2,7],[3],[5]]=>[5,3,2,7,1,4,6]=>[7,3,5,1,2,4,6]=>{{1,4,6,7},{2,3,5}} [[1,3,6],[2,7],[4],[5]]=>[5,4,2,7,1,3,6]=>[7,4,1,5,2,3,6]=>{{1,3,6,7},{2,4,5}} [[1,2,6],[3,7],[4],[5]]=>[5,4,3,7,1,2,6]=>[7,1,4,5,3,2,6]=>{{1,2,6,7},{3,4,5}} [[1,4,5],[2,7],[3],[6]]=>[6,3,2,7,1,4,5]=>[7,3,6,1,4,2,5]=>{{1,4,5,7},{2,3,6}} [[1,3,5],[2,7],[4],[6]]=>[6,4,2,7,1,3,5]=>[7,4,1,6,3,2,5]=>{{1,3,5,7},{2,4,6}} [[1,2,5],[3,7],[4],[6]]=>[6,4,3,7,1,2,5]=>[7,1,4,6,2,3,5]=>{{1,2,5,7},{3,4,6}} [[1,3,4],[2,7],[5],[6]]=>[6,5,2,7,1,3,4]=>[7,5,1,3,6,2,4]=>{{1,3,4,7},{2,5,6}} [[1,2,4],[3,7],[5],[6]]=>[6,5,3,7,1,2,4]=>[7,1,5,2,6,3,4]=>{{1,2,4,7},{3,5,6}} [[1,2,3],[4,7],[5],[6]]=>[6,5,4,7,1,2,3]=>[7,1,2,5,6,4,3]=>{{1,2,3,7},{4,5,6}} [[1,4,6],[2,5],[3],[7]]=>[7,3,2,5,1,4,6]=>[5,3,7,1,2,4,6]=>{{1,2,3,4,5,6,7}} [[1,3,6],[2,5],[4],[7]]=>[7,4,2,5,1,3,6]=>[5,4,1,7,2,3,6]=>{{1,2,3,4,5,6,7}} [[1,2,6],[3,5],[4],[7]]=>[7,4,3,5,1,2,6]=>[5,1,4,7,3,2,6]=>{{1,2,3,4,5,6,7}} [[1,3,6],[2,4],[5],[7]]=>[7,5,2,4,1,3,6]=>[4,5,1,2,7,3,6]=>{{1,2,3,4,5,6,7}} [[1,2,6],[3,4],[5],[7]]=>[7,5,3,4,1,2,6]=>[4,1,5,3,7,2,6]=>{{1,2,3,4,5,6,7}} [[1,4,5],[2,6],[3],[7]]=>[7,3,2,6,1,4,5]=>[6,3,7,1,4,2,5]=>{{1,2,3,4,5,6,7}} [[1,3,5],[2,6],[4],[7]]=>[7,4,2,6,1,3,5]=>[6,4,1,7,3,2,5]=>{{1,2,3,4,5,6,7}} [[1,2,5],[3,6],[4],[7]]=>[7,4,3,6,1,2,5]=>[6,1,4,7,2,3,5]=>{{1,2,3,4,5,6,7}} [[1,3,4],[2,6],[5],[7]]=>[7,5,2,6,1,3,4]=>[6,5,1,3,7,2,4]=>{{1,2,3,4,5,6,7}} [[1,2,4],[3,6],[5],[7]]=>[7,5,3,6,1,2,4]=>[6,1,5,2,7,3,4]=>{{1,2,3,4,5,6,7}} [[1,2,3],[4,6],[5],[7]]=>[7,5,4,6,1,2,3]=>[6,1,2,5,7,4,3]=>{{1,2,3,4,5,6,7}} [[1,3,5],[2,4],[6],[7]]=>[7,6,2,4,1,3,5]=>[4,6,1,2,3,7,5]=>{{1,2,3,4,5,6,7}} 
[[1,2,5],[3,4],[6],[7]]=>[7,6,3,4,1,2,5]=>[4,1,6,3,2,7,5]=>{{1,2,3,4,5,6,7}} [[1,3,4],[2,5],[6],[7]]=>[7,6,2,5,1,3,4]=>[5,6,1,3,2,7,4]=>{{1,2,3,4,5,6,7}} [[1,2,4],[3,5],[6],[7]]=>[7,6,3,5,1,2,4]=>[5,1,6,2,3,7,4]=>{{1,2,3,4,5,6,7}} [[1,2,3],[4,5],[6],[7]]=>[7,6,4,5,1,2,3]=>[5,1,2,6,4,7,3]=>{{1,2,3,4,5,6,7}} [[1,6,7],[2],[3],[4],[5]]=>[5,4,3,2,1,6,7]=>[2,3,4,5,1,6,7]=>{{1,2,3,4,5},{6},{7}} [[1,5,7],[2],[3],[4],[6]]=>[6,4,3,2,1,5,7]=>[2,3,4,6,1,5,7]=>{{1,2,3,4,5,6},{7}} [[1,4,7],[2],[3],[5],[6]]=>[6,5,3,2,1,4,7]=>[2,3,5,1,6,4,7]=>{{1,2,3,4,5,6},{7}} [[1,3,7],[2],[4],[5],[6]]=>[6,5,4,2,1,3,7]=>[2,4,1,5,6,3,7]=>{{1,2,3,4,5,6},{7}} [[1,2,7],[3],[4],[5],[6]]=>[6,5,4,3,1,2,7]=>[3,1,4,5,6,2,7]=>{{1,2,3,4,5,6},{7}} [[1,5,6],[2],[3],[4],[7]]=>[7,4,3,2,1,5,6]=>[2,3,4,7,1,5,6]=>{{1,2,3,4,5,6,7}} [[1,4,6],[2],[3],[5],[7]]=>[7,5,3,2,1,4,6]=>[2,3,5,1,7,4,6]=>{{1,2,3,4,5,6,7}} [[1,3,6],[2],[4],[5],[7]]=>[7,5,4,2,1,3,6]=>[2,4,1,5,7,3,6]=>{{1,2,3,4,5,6,7}} [[1,2,6],[3],[4],[5],[7]]=>[7,5,4,3,1,2,6]=>[3,1,4,5,7,2,6]=>{{1,2,3,4,5,6,7}} [[1,4,5],[2],[3],[6],[7]]=>[7,6,3,2,1,4,5]=>[2,3,6,1,4,7,5]=>{{1,2,3,4,5,6,7}} [[1,3,5],[2],[4],[6],[7]]=>[7,6,4,2,1,3,5]=>[2,4,1,6,3,7,5]=>{{1,2,3,4,5,6,7}} [[1,2,5],[3],[4],[6],[7]]=>[7,6,4,3,1,2,5]=>[3,1,4,6,2,7,5]=>{{1,2,3,4,5,6,7}} [[1,3,4],[2],[5],[6],[7]]=>[7,6,5,2,1,3,4]=>[2,5,1,3,6,7,4]=>{{1,2,3,4,5,6,7}} [[1,2,4],[3],[5],[6],[7]]=>[7,6,5,3,1,2,4]=>[3,1,5,2,6,7,4]=>{{1,2,3,4,5,6,7}} [[1,2,3],[4],[5],[6],[7]]=>[7,6,5,4,1,2,3]=>[4,1,2,5,6,7,3]=>{{1,2,3,4,5,6,7}} [[1,5],[2,6],[3,7],[4]]=>[4,3,7,2,6,1,5]=>[6,7,4,3,1,2,5]=>{{1,2,5,6,7},{3,4}} [[1,4],[2,6],[3,7],[5]]=>[5,3,7,2,6,1,4]=>[6,7,5,1,3,2,4]=>{{1,2,4,6,7},{3,5}} [[1,3],[2,6],[4,7],[5]]=>[5,4,7,2,6,1,3]=>[6,7,1,5,4,2,3]=>{{1,2,3,6,7},{4,5}} [[1,2],[3,6],[4,7],[5]]=>[5,4,7,3,6,1,2]=>[6,1,7,5,4,3,2]=>{{1,2,3,6,7},{4,5}} [[1,4],[2,5],[3,7],[6]]=>[6,3,7,2,5,1,4]=>[5,7,6,1,2,3,4]=>{{1,2,4,5,7},{3,6}} [[1,3],[2,5],[4,7],[6]]=>[6,4,7,2,5,1,3]=>[5,7,1,6,2,4,3]=>{{1,2,3,5,7},{4,6}} 
[[1,2],[3,5],[4,7],[6]]=>[6,4,7,3,5,1,2]=>[5,1,7,6,3,4,2]=>{{1,2,3,5,7},{4,6}} [[1,3],[2,4],[5,7],[6]]=>[6,5,7,2,4,1,3]=>[4,7,1,2,6,5,3]=>{{1,2,3,4,7},{5,6}} [[1,2],[3,4],[5,7],[6]]=>[6,5,7,3,4,1,2]=>[4,1,7,3,6,5,2]=>{{1,2,3,4,7},{5,6}} [[1,4],[2,5],[3,6],[7]]=>[7,3,6,2,5,1,4]=>[5,6,7,1,2,3,4]=>{{1,2,3,4,5,6,7}} [[1,3],[2,5],[4,6],[7]]=>[7,4,6,2,5,1,3]=>[5,6,1,7,2,4,3]=>{{1,2,3,4,5,6,7}} [[1,2],[3,5],[4,6],[7]]=>[7,4,6,3,5,1,2]=>[5,1,6,7,3,4,2]=>{{1,2,3,4,5,6,7}} [[1,3],[2,4],[5,6],[7]]=>[7,5,6,2,4,1,3]=>[4,6,1,2,7,5,3]=>{{1,2,3,4,5,6,7}} [[1,2],[3,4],[5,6],[7]]=>[7,5,6,3,4,1,2]=>[4,1,6,3,7,5,2]=>{{1,2,3,4,5,6,7}} [[1,6],[2,7],[3],[4],[5]]=>[5,4,3,2,7,1,6]=>[7,3,4,5,2,1,6]=>{{1,6,7},{2,3,4,5}} [[1,5],[2,7],[3],[4],[6]]=>[6,4,3,2,7,1,5]=>[7,3,4,6,1,2,5]=>{{1,5,7},{2,3,4,6}} [[1,4],[2,7],[3],[5],[6]]=>[6,5,3,2,7,1,4]=>[7,3,5,1,6,2,4]=>{{1,4,7},{2,3,5,6}} [[1,3],[2,7],[4],[5],[6]]=>[6,5,4,2,7,1,3]=>[7,4,1,5,6,2,3]=>{{1,3,7},{2,4,5,6}} [[1,2],[3,7],[4],[5],[6]]=>[6,5,4,3,7,1,2]=>[7,1,4,5,6,3,2]=>{{1,2,7},{3,4,5,6}} [[1,5],[2,6],[3],[4],[7]]=>[7,4,3,2,6,1,5]=>[6,3,4,7,1,2,5]=>{{1,2,3,4,5,6,7}} [[1,4],[2,6],[3],[5],[7]]=>[7,5,3,2,6,1,4]=>[6,3,5,1,7,2,4]=>{{1,2,3,4,5,6,7}} [[1,3],[2,6],[4],[5],[7]]=>[7,5,4,2,6,1,3]=>[6,4,1,5,7,2,3]=>{{1,2,3,4,5,6,7}} [[1,2],[3,6],[4],[5],[7]]=>[7,5,4,3,6,1,2]=>[6,1,4,5,7,3,2]=>{{1,2,3,4,5,6,7}} [[1,4],[2,5],[3],[6],[7]]=>[7,6,3,2,5,1,4]=>[5,3,6,1,2,7,4]=>{{1,2,3,4,5,6,7}} [[1,3],[2,5],[4],[6],[7]]=>[7,6,4,2,5,1,3]=>[5,4,1,6,2,7,3]=>{{1,2,3,4,5,6,7}} [[1,2],[3,5],[4],[6],[7]]=>[7,6,4,3,5,1,2]=>[5,1,4,6,3,7,2]=>{{1,2,3,4,5,6,7}} [[1,3],[2,4],[5],[6],[7]]=>[7,6,5,2,4,1,3]=>[4,5,1,2,6,7,3]=>{{1,2,3,4,5,6,7}} [[1,2],[3,4],[5],[6],[7]]=>[7,6,5,3,4,1,2]=>[4,1,5,3,6,7,2]=>{{1,2,3,4,5,6,7}} [[1,7],[2],[3],[4],[5],[6]]=>[6,5,4,3,2,1,7]=>[2,3,4,5,6,1,7]=>{{1,2,3,4,5,6},{7}} [[1,6],[2],[3],[4],[5],[7]]=>[7,5,4,3,2,1,6]=>[2,3,4,5,7,1,6]=>{{1,2,3,4,5,6,7}} [[1,5],[2],[3],[4],[6],[7]]=>[7,6,4,3,2,1,5]=>[2,3,4,6,1,7,5]=>{{1,2,3,4,5,6,7}} 
[[1,4],[2],[3],[5],[6],[7]]=>[7,6,5,3,2,1,4]=>[2,3,5,1,6,7,4]=>{{1,2,3,4,5,6,7}} [[1,3],[2],[4],[5],[6],[7]]=>[7,6,5,4,2,1,3]=>[2,4,1,5,6,7,3]=>{{1,2,3,4,5,6,7}} [[1,2],[3],[4],[5],[6],[7]]=>[7,6,5,4,3,1,2]=>[3,1,4,5,6,7,2]=>{{1,2,3,4,5,6,7}} [[1],[2],[3],[4],[5],[6],[7]]=>[7,6,5,4,3,2,1]=>[2,3,4,5,6,7,1]=>{{1,2,3,4,5,6,7}} [[1,2,3,4,5,6,7,8]]=>[1,2,3,4,5,6,7,8]=>[1,2,3,4,5,6,7,8]=>{{1},{2},{3},{4},{5},{6},{7},{8}}
reading word permutation
Return the permutation obtained by reading the entries of the tableau row by row, starting with the bottom-most row in English notation.
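This definition is straightforward to compute. The following is a minimal illustrative sketch (not FindStat's own implementation), with tableaux given as lists of rows in English notation, as in the table above:

```python
def reading_word(tableau):
    """Concatenate the rows of a standard tableau from the bottom-most
    row upward (English notation), reading each row left to right."""
    word = []
    for row in reversed(tableau):  # bottom-most row first
        word.extend(row)
    return word

# Examples copied from the table above:
print(reading_word([[1, 3], [2]]))     # [2, 1, 3]
print(reading_word([[1, 2], [3, 4]]))  # [3, 4, 1, 2]
```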
descent views to invisible inversion bottoms
Return a permutation whose multiset of invisible inversion bottoms is the multiset of descent views of the given permutation.
An invisible inversion of a permutation $\sigma$ is a pair $i < j$ such that $i < \sigma(j) < \sigma(i)$. The element $\sigma(j)$ is then an invisible inversion bottom.
A descent view in a permutation $\pi$ is an element $\pi(j)$ such that $\pi(i+1) < \pi(j) < \pi(i)$, and additionally the smallest element in the decreasing run containing $\pi(i)$ is smaller than the smallest element in the decreasing run containing $\pi(j)$.
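To make the two statistics concrete, here is a hypothetical Python sketch of both, assuming that a "decreasing run" is a maximal consecutive decreasing segment of the one-line notation and that $j$ ranges over all positions. The $(\pi, \chi(\pi))$ pairs used for checking are copied from the table above:

```python
def invisible_inversion_bottoms(sigma):
    """Multiset of values sigma(j) over pairs i < j with i < sigma(j) < sigma(i).
    sigma is given in one-line notation (1-based values, 0-based list)."""
    n = len(sigma)
    return sorted(sigma[j] for i in range(n) for j in range(i + 1, n)
                  if (i + 1) < sigma[j] < sigma[i])

def decreasing_runs(pi):
    """Maximal decreasing consecutive segments of the one-line notation."""
    runs, run = [], [pi[0]]
    for a, b in zip(pi, pi[1:]):
        if b < a:
            run.append(b)
        else:
            runs.append(run)
            run = [b]
    runs.append(run)
    return runs

def descent_views(pi):
    """Multiset of values pi(j) with pi(i+1) < pi(j) < pi(i), subject to the
    condition on the minima of the decreasing runs."""
    run_min = {v: min(r) for r in decreasing_runs(pi) for v in r}
    views = []
    for i in range(len(pi) - 1):
        if pi[i] > pi[i + 1]:  # descent
            for v in pi:
                if pi[i + 1] < v < pi[i] and run_min[pi[i]] < run_min[v]:
                    views.append(v)
    return sorted(views)

# (pi, chi(pi)) pairs copied from the table above; the two multisets agree:
pairs = [([3, 1, 2], [3, 1, 2]),
         ([2, 4, 1, 3], [4, 2, 1, 3]),
         ([3, 4, 1, 2], [4, 1, 3, 2]),
         ([5, 2, 4, 1, 3], [4, 5, 1, 2, 3])]
for pi, chi in pairs:
    assert descent_views(pi) == invisible_inversion_bottoms(chi)
```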
This map is a bijection $\chi:\mathfrak S_n \to \mathfrak S_n$, such that
the multiset of descent views in $\pi$ is the multiset of invisible inversion bottoms in $\chi(\pi)$,
the set of left-to-right maxima of $\pi$ is the set of maximal elements in the cycles of $\chi(\pi)$,
the set of global ascents of $\pi$ is the set of global ascents of $\chi(\pi)$,
the set of maximal elements in the decreasing runs of $\pi$ is the set of deficiency positions of $\chi(\pi)$, and
the set of minimal elements in the decreasing runs of $\pi$ is the set of deficiency values of $\chi(\pi)$.
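The first of these cycle properties is easy to check directly against the data above. Here is an illustrative sketch (the $(\pi, \chi(\pi))$ pairs are copied from the table, since no implementation of $\chi$ itself is given here):

```python
def left_to_right_maxima(pi):
    """Values pi(i) larger than every earlier value of the one-line notation."""
    best, out = 0, set()
    for v in pi:
        if v > best:
            best = v
            out.add(v)
    return out

def cycle_maxima(sigma):
    """Maximal element of each cycle of sigma (one-line notation)."""
    n, seen, out = len(sigma), set(), set()
    for start in range(1, n + 1):
        if start not in seen:
            cyc, j = [], start
            while j not in seen:
                seen.add(j)
                cyc.append(j)
                j = sigma[j - 1]
            out.add(max(cyc))
    return out

# (pi, chi(pi)) pairs copied from the table above:
pairs = [([2, 4, 1, 3], [4, 2, 1, 3]),
         ([3, 4, 1, 2], [4, 1, 3, 2]),
         ([5, 2, 4, 1, 3], [4, 5, 1, 2, 3])]
for pi, chi in pairs:
    assert left_to_right_maxima(pi) == cycle_maxima(chi)
```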
Why can Quadrature Demodulation demod a Frequency Modulated Signal?
I use a Quadrature Demodulator in my SDR application, which is defined as:
$\angle (S_n, S_{n+1}) = \arctan\bigl(S_{n+1} \cdot \overline{S_{n}}\bigr)$
So practically its output is the angle between two samples $S_n$ and $S_{n+1}$, where $n$ refers to a position in a sequence of complex I/Q values and $\overline{S}$ denotes the complex conjugate.
I understand that for quadrature-based modulation the phase changes carry the information. But why can I demodulate frequency-modulated signals with a quadrature demodulator? This demodulator is often referred to as an FM demod. Can somebody explain to me why that is?
Best, Marius
frequency software-implementation demodulation
Marius
I don't think your formula is quite right.
The two complex samples are the locations (at two successive sampling instants) of the tip of the rotating phasor that represents the analog signal. The information that you need is the angle between phasor positions at these two successive time instants. If $S_{n+1} = r_{n+1}e^{j\theta_{n+1}}$ and $S_n = r_ne^{j\theta_n}$, then you want the value of $\theta_{n+1}-\theta_n$. Now, $S_{n+1}S_n^* = r_{n+1}r_ne^{j(\theta_{n+1}-\theta_n)}$ and if you express these complex numbers in rectangular coordinates, then you can get the desired angle as $$\theta_{n+1}-\theta_n = \arctan\left(\frac{\text{Im}(S_{n+1}S_n^*)}{\text{Re}(S_{n+1}S_n^*)}\right) = \arctan\left(\frac{\text{Re}(S_{n})\text{Im}(S_{n+1}) - \text{Im}(S_n)\text{Re}(S_{n+1})}{\text{Re}(S_{n})\text{Re}(S_{n+1}) + \text{Im}(S_{n})\text{Im}(S_{n+1})}\right)$$ though usually the four-quadrant arctangent function atan2(y,x) is used instead of atan which returns an answer between $-\pi/2$ and $\pi/2$.
In digital communications applications such as DQPSK demodulation, the real interest is not in the actual value of the angle, but in which of the four intervals $(-\pi/4,\pi/4)$, $(\pi/4, 3\pi/4)$, $(3\pi/4, 5\pi/4)$, and $(5\pi/4, 7\pi/4)$ the angle belongs -- in other words, did the phase not change at all, or change by $\pi/2$, or by $\pi$, or by $3\pi/2$ -- and this can be readily determined by comparing the values of the numerator and denominator in the argument of the arctangent above, thus saving computational effort and speeding up the demodulation process. But the more general formula gives the actual angle, and thus can be used to demodulate a general FM signal as well. The output will be a sequence of samples of the original continuous-time signal that modulated the carrier and thus formed the signal that was transmitted. D/A conversion will give the reconstructed signal.
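The phase-difference idea above translates directly into a few lines of code. Here is a hedged NumPy sketch (the sample rate, deviation, and test message are made up for illustration):

```python
import numpy as np

def quadrature_demod(iq):
    # angle between successive complex samples, i.e. arg(S[n+1] * conj(S[n]))
    return np.angle(iq[1:] * np.conj(iq[:-1]))

# Build a complex-baseband FM signal from a test message, then demodulate it.
fs = 1000.0                                  # sample rate (Hz)
t = np.arange(2000) / fs
msg = np.sin(2 * np.pi * 5 * t)              # 5 Hz test message
kf = 50.0                                    # peak frequency deviation (Hz)
phase = 2 * np.pi * kf * np.cumsum(msg) / fs # integrate frequency -> phase
iq = np.exp(1j * phase)                      # FM signal at complex baseband

demod = quadrature_demod(iq)                 # scaled copy of msg (factor 2*pi*kf/fs)
```

Because the per-sample phase increments stay well below $\pi$ here, `np.angle` never wraps and `demod` is exactly a scaled copy of the message.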
Dilip Sarwate
Modularity of genes involved in local adaptation to climate despite physical linkage
Katie E Lotterhos, Sam Yeaman, Jon Degner, Sally Aitken & Kathryn A Hodgins

Genome Biology volume 19, Article number: 157 (2018)
Linkage among genes experiencing different selection pressures can make natural selection less efficient. Theory predicts that when local adaptation is driven by complex and non-covarying stresses, increased linkage is favored for alleles with similar pleiotropic effects, with increased recombination favored among alleles with contrasting pleiotropic effects. Here, we introduce a framework to test these predictions with a co-association network analysis, which clusters loci based on differing associations. We use this framework to study the genetic architecture of local adaptation to climate in lodgepole pine, Pinus contorta, based on associations with environments.
We identify many clusters of candidate genes and SNPs associated with distinct environments, including aspects of aridity and freezing, and discover low recombination rates among some candidate genes in different clusters. Only a few genes contain SNPs with effects on more than one distinct aspect of climate. There is limited correspondence between co-association networks and gene regulatory networks. We further show how associations with environmental principal components can lead to misinterpretation. Finally, simulations illustrate both benefits and caveats of co-association networks.
Our results support the prediction that different selection pressures favor the evolution of distinct groups of genes, each associating with a different aspect of climate. But our results went against the prediction that loci experiencing different sources of selection would have high recombination among them. These results give new insight into evolutionary debates about the extent of modularity, pleiotropy, and linkage in the evolution of genetic architectures.
Pleiotropy and linkage are fundamental aspects of genetic architecture [1]. Pleiotropy is when a gene has effects on multiple distinct traits. Pleiotropy may hinder the rate of adaptation by increasing the likelihood that genetic changes have a deleterious effect on at least one trait [2, 3]. Similarly, linkage among genes experiencing different kinds of selection can facilitate or hinder adaptation [4,5,6]. Despite progress in understanding the underlying pleiotropic nature of phenotypes and the influence of pleiotropy on the rate of adaptation to specific conditions [7], we have an incomplete understanding of the extent and magnitude of linkage and pleiotropy in the local adaptation of natural populations to the landscapes and environments in which they are found.
Here, we aim to characterize the genetic architecture of adaptation to the environment, including the number of separate components of the environment in which a gene affects fitness (a form of "selectional pleiotropy," Table 1) [8]. Genetic architecture is an encompassing term used to describe the pattern of genetic features that build and control a trait, and includes statements about the number of genes or alleles involved, their arrangement on chromosomes, the distribution of their effects, and patterns of pleiotropy (Table 1). We can measure many parameters to characterize environments (e.g., temperature, latitude, precipitation), but the variables we define may not correspond to the environmental factors that matter for an organism's fitness. A major hurdle in understanding how environments shape fitness is defining the environment based on factors that drive selection and local adaptation and not by the intrinsic attributes of the organism or by the environmental variables we happen to measure.
Table 1 Overview of terminology used in the literature regarding pleiotropy and modularity
Table 2 Environmental variables measured for each sampling location, ordered by their abbreviations shown in Fig. 2a, b
In local adaptation to climate, an allele that has different effects on fitness at different extremes of an environmental variable (e.g., positive effects on fitness in cold environments and negative effects in warm environments, often called "antagonistic pleiotropy," Table 1 [9]) will evolve to produce a clinal relationship between the allele frequency and that environmental factor [10,11,12,13,14,15]. While associations between allele frequencies and environmental factors have been well characterized across many taxa [16], whether genes affect fitness in multiple distinct aspects of the environment, which we call "environmental pleiotropy" (e.g., has effects on fitness in both cold and dry environments, Table 1), has not been well characterized [17]. This is because of conceptual issues that arise from defining environments along the univariate axes that we measure. For example, "cold" and "dry" might be a single selective optimum ("cold-dry") to which a gene adapts [7], but these two axes are typically analyzed separately. Moreover, climate variables such as temperature and precipitation may be highly correlated across landscapes, and this correlation structure makes inferring pleiotropy from signals of selection to climate difficult. Indeed, in their study of climate adaptation in Arabidopsis, Hancock et al. [17] noticed that candidate loci showed signals of selection in multiple environmental variables, potentially indicating pleiotropic effects. However, they also found that a substantial proportion of this overlap was due to correlations among climate variables on the landscape, and as a result, they were unable to fully describe pleiotropic effects.
Because of the conceptual issues described above, certain aspects of the genetic architecture of adaptation to landscapes have not been well characterized, particularly the patterns of linkage among genes adapting to distinct environmental factors, and the degree of pleiotropic effects of genes on fitness in distinct environments. These aspects of genetic architecture are important to characterize, in order to test the theoretical predictions described below, and to inform the considerable debate about whether organisms have a modular organization of gene effects on phenotypes or fitness components, versus universal effects of genes on all phenotypes or fitness components (Fig. 1a, compare left to right column) [18,19,20,21,22,23,24].
Conceptual framework for evaluating the modularity and pleiotropy of genetic architectures adapting to the environment. In this example, each gene (identified by numbers) contains two causal SNPs (identified by letters) where mutations affect fitness in potentially different aspects of the environment. The two aspects of the environment that affect fitness are aridity and freezing. a The true underlying genetic architecture adapting to multiple aspects of climate. The left column represents a modular genetic architecture in which any pleiotropic effects of genes are limited to a particular aspect of the environment. The right column represents a non-modular architecture, in which genes have pleiotropic effects on multiple aspects of the environment. Universal pleiotropy occurs when a gene has effects on all the multiple distinct aspects of the environment. Genes in this example are unlinked in the genome, but linkage among genes is an important aspect of the environmental response architecture. b Hierarchical clustering is used to identify the "co-association modules," which jointly describe the groups of loci that adapt to a distinct aspects of climate as well as the distinct aspects of climate to which they adapt. In the left column, the "aridity module" is a group of SNPs within two unlinked genes adapting to aridity, and SNPs within these genes show associations with both temperature and climate-moisture deficit. In the right column, note how the aridity module is composed of SNPs from all four unlinked genes. c Co-association networks are used to visualize the results of the hierarchical clustering with regards to the environment, and connections are based on similarity in SNPs in their associations with environments. In both columns, all SNPs within a module (network) all have similar associations with multiple environmental variables. 
d Pleiotropy barplots are used to visualize the results of the hierarchical clustering with regards to the genetic architecture, represented by the proportion of SNPs in each candidate gene that affects different aspects of the environment (as defined by the co-association module)
Modular genetic architectures are characterized by extensive pleiotropic effects among elements within a module, and a suppression of pleiotropic effects between different modules [25]. Note that modularity in this study refers to similarity in the effects of loci on fitness and not necessarily to the physical location of loci on chromosomes or to participation in the same gene regulatory network. Theory predicts that modular genetic architectures will be favored when genomes face complex spatial and temporal environments [26] or when multiple traits are under a combination of directional and stabilizing selection (because modularity allows adaptation to take place in one trait without undoing the adaptation achieved by another trait) [25, 27]. Adaptation to climate on a landscape fits these criteria because environmental variation among populations is complex—with multiple abiotic and biotic challenges occurring at different spatial scales—and traits are thought to be under stabilizing selection within populations but directional selection among populations [28].
Clusters of physically linked loci subject to the same selective environment, as well as a lack of physical linkage among loci subject to different selection pressures, are expected based on theory. When mutations are subject to the same selection pressure, recombination can bring variants with similar effects together and allow evolution to proceed faster [29]. Clusters of adaptive loci can also arise through genomic rearrangements that bring existing mutations together [30] or because new causal mutations linked to adaptive alleles have an increased establishment probability [31]. Similarly, clusters of locally adaptive loci are expected to evolve in regions of low recombination, such as inversions, because of the reduced gene flow these regions experience [32, 33]. In general, these linked clusters of adaptive loci are favored over evolutionary time because low recombination rates increase the rate at which they are inherited together. Conversely, selection will also act to disfavor linkage and increase recombination rates between genes adapting to different selection pressures [34,35,36]. Thus, genes adapting to different selection pressures would be unlikely to be physically linked or to have low recombination rates between them. In practice, issues can arise in inference because physical linkage will cause correlated responses to selection in neutral loci flanking a causal locus. Large regions of the genome can share similar patterns of association to a given environmental factor, such that many loci within a given candidate region are probably not causally responding to selection. Conversely, if linked genes are associated with completely different aspects of the selective environment, this is unlikely to arise by chance.
In summary, current analytical techniques have given limited insight into the genetic architectures of adaptation to environmental variation across natural landscapes. Characterizing the different aspects of the environment that act on genomes is difficult because measured variables are univariate and may not be representative of selection from the perspective of the organism and because of spatial correlations among environmental variables. Even when many variables are summarized with ordination such as principal components, the axes that explain the most variation in physical environment do not necessarily correspond to the axes that cause selection because the components are orthogonal [37]. Furthermore, the statistical methods widely used for inferring adaptation to climate are also univariate in the sense that they test for significant correlations between the frequency of a single allele and a single environmental variable (e.g., [38, 39, 40]). While some multivariate regression methods like redundancy analysis have been used to understand how multiple environmental factors shape genetic structure [41, 42], they still rely on ordination and have not been used to identify distinct evolutionary modules of loci.
Here, we aim to fill this gap by presenting a framework for characterizing the genetic architecture of adaptation to the environment, through the joint inference of modules of loci that associate with distinct environmental factors that we call "co-association modules" (Table 1, Fig. 1), as well as the distinct factors of the environment to which they associate. Using this framework, we can characterize some aspects of genetic architecture, including modularity and linkage, that have not been well studied in the adaptation of genomes to environments. We tested the hypotheses that (i) the genetic architecture of adaptation to complex environments is modular and (ii) that loci in different modules have evolved over time to be unlinked in the genome.
The framework is illustrated in Fig. 1 for four hypothetical genes adapted to two distinct aspects of climate (freezing and aridity). In this figure, we compare the patterns expected for (i) a modular architecture (left column, where pleiotropic fitness effects of a gene are limited to one particular climatic factor) to (ii) a highly environmentally pleiotropic architecture (right column, where genes have pleiotropic effects on adaptation to distinct climatic factors). Candidate SNPs are first identified by the significance of the univariate associations between allele frequency and the measured environmental variables, evaluated against what would be expected by neutrality. Then, hierarchical clustering of candidate SNP allele associations with environments is used to identify co-association modules (Fig. 1b) [43,44,45]. These modules can be visualized with a co-association network analysis, which identifies groups of loci that may covary with one environmental variable but covary in different ways with another, revealing patterns that are not evident through univariate analysis (Fig. 1c). By defining the distinct aspects of the selectional environment (Table 1) for each module through their environmental associations, we can infer pleiotropic effects of genes through the associations their SNPs have with distinct selective environmental factors (Fig. 1d). In this approach, the genetic effects of loci on different traits under selection are unknown, and we assume that each aspect of the multivariate environment selects for a trait or suite of traits that can be inferred by connecting candidate loci directly to the environmental factors that select for particular allelic combinations.
We apply this new approach to characterize the genetic architecture of local adaptation to climate in lodgepole pine (Pinus contorta) using a previously published exome capture dataset [46,47,48] from trees that inhabit a wide range of environments across their range, including freezing temperatures, precipitation, and aridity [49,50,51,52]. Lodgepole pine is a coniferous species inhabiting a wide range of environments in northwestern North America and exhibits isolation by distance population structure across the range [46]. Previous work based on reciprocal transplants and common garden experiments has shown extensive local adaptation [46, 53, 54]. We recently used this dataset to study convergent adaptation to freezing between lodgepole pine and the interior spruce complex (Picea glauca x Picea engelmannii) [46,47,48]. However, the comparative approach was limited to discovering parallel patterns between species and did not examine selective factors unique to one species. As in most other systems, the genomic architecture in pine underlying local adaptation to the multivariate environment has not been well characterized, and our reanalysis yields several new biological insights overlooked by the comparative approach.
We assessed the benefits and caveats of this new framework by comparing it with other multivariate approaches (based on principal components) and by evaluating it with simulated data. The evaluation with simulations yielded several important insights, including the importance of using strict criteria to exclude loci with false positive associations with environments. Thus, a key starting point for inferring co-association modules is a good set of candidate SNPs for adaptation. We developed this candidate set by first identifying top candidate genes for local adaptation (from a previously published set of genes that contained more outliers for genotype-environment associations and genotype-phenotype associations than expected by chance, [46]). We then identified "top candidate" SNPs within these top candidate genes as those whose allele frequencies were associated with at least one environmental variable above that expected by neutrality (using a criterion that excluded false positives in the simulated data described below). To this set of top candidate SNPs, we applied the framework outlined in Fig. 1 to characterize environmental modularity and linkage of the genetic architecture. The power of our dataset comes from including a large number of populations inhabiting diverse environments (> 250), the accurate characterization of climate for each individual with 22 environmental variables, a high-quality exome capture dataset representing more than 500,000 single-nucleotide polymorphisms (SNPs) in ~ 29,000 genes [46,47,48], a mapping population that allows us to study recombination rates among genes, and an outgroup species that allowed us to determine the derived allele for most candidate SNPs. When such data is available, we find that this framework is useful for characterizing the environmental modularity and linkage relationships among candidate genes for local adaptation to multivariate environments.
Top candidate genes and top candidate SNPs
The study of environmental pleiotropy and modularity is relevant only to loci under selection. Our "top candidate" approach identified a total of 108 top candidate genes out of a total of 29,920 genes. These contigs contained 801 top candidate SNPs (out of 585,270 exome SNPs) that were strongly associated with at least one environmental variable and were likely either causal or tightly linked to a causal locus. This set of top candidate SNPs was enriched for XTX outliers (Additional file 1: Figure S1; XTX is an analog of FST that measures differentiation in allele frequencies across populations). To elucidate patterns of multivariate association, we applied the framework described in Fig. 1 to these 801 top candidate SNPs.
Co-association modules
Hierarchical clustering and co-association network analysis of top candidate SNPs revealed a large number of co-association modules, each of which contained SNPs from one or more genes. Each co-association module is represented by one or more top candidate SNPs (represented by nodes) that are connected by edges. The edges are drawn between two SNPs if they have similar associations with the environment below a distance threshold. The distance threshold was determined by simulation as a number that enriched connections among selected loci adapting to the same environmental variable and also decreased the number of connections to false positive loci (see the Results section "Simulated datasets").
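As a rough illustration of the clustering step (this is not the authors' pipeline; the toy data and the 0.3 distance cut-off are invented for the example), SciPy can build co-association modules in three moves: give each SNP a profile of Spearman correlations with every environmental variable, hierarchically cluster the profiles, and cut the tree at a distance threshold.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.stats import spearmanr

# Toy data: 3 environmental variables measured in 20 populations;
# SNPs 0-2 track the first variable, SNPs 3-5 track the second.
rng = np.random.default_rng(0)
env = rng.normal(size=(20, 3))
freq = np.column_stack([env[:, 0]] * 3 + [env[:, 1]] * 3)

# Association profile of each SNP: Spearman's rho with each environment.
rho = np.array([[spearmanr(freq[:, i], env[:, j])[0]
                 for j in range(env.shape[1])]
                for i in range(freq.shape[1])])

# Cluster SNPs by the similarity of their association profiles; cutting
# the tree at an (illustrative) threshold yields co-association modules.
tree = linkage(rho, method='average', metric='euclidean')
modules = fcluster(tree, t=0.3, criterion='distance')
```

In the real analysis the profiles come from the 801 top candidate SNPs and 22 environmental variables, and the threshold is calibrated against simulations rather than chosen by hand.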
For the purposes of illustration, we classified SNPs into four main groups, each with several co-association modules, according to the kinds of environmental variables they were most strongly associated with: Aridity, Freezing, Geography, and an assorted group we bin as "Multi" (Fig. 2a, b). Note that while we could have chosen a different number of groups, this would not have changed the underlying clustering of the SNPs revealed by co-association networks that are relevant to modularity (Fig. 2b–f). This division of data into groups was necessary to produce coherent visual network plots and to make data analyses more computationally efficient (we found when there were more than ~ 20,000 edges in the data, computation and plotting of the network were not feasible with the package). Note that SNPs in different groups are more dissimilar to SNPs in other groups than to those in the same group (based on the threshold we used to determine edges) and would not be connected by edges in a co-association module. Interestingly, this clustering by association signatures does not closely parallel the correlation structure among environmental variables themselves. For example, continentality (TD), degree days below 0 °C (DD_0), and latitude (LAT) are all relatively strongly correlated (> 0.5), while the "Freezing" SNPs are associated with continentality and degree days below 0, but not latitude (Fig. 2a, b).
Co-association modules for Pinus contorta. a Correlations among environments measured by Spearman's ⍴ plotted according to hierarchical clustering of environments. Abbreviations of the environmental variables can be found in Table 2. Note the general categories on the left side of the heatmap. b Hierarchical clustering of the absolute value of associations between allele frequencies (of SNPs in columns) and environments (in rows) measured by Spearman's ⍴. c–f Each co-association network represents a distinct co-association module, with color schemes according to the four major groups in the data. Each node is a SNP and is labeled with a number according to its exome contig, and a color according to its module—with the exceptions that modules containing a single SNP all give the same color within a major group. Numbers next to each module indicate the number of distinct genes involved (with the exception of the Geography group, where only modules with five or more genes are labeled). g The pleiotropy barplot, where each bar corresponds to a gene, and the colors represent the proportion of SNPs in each co-association module. Note that gene IDs are ordered by their co-association module, and the color of contig-IDs along the x axis is determined by the co-association module that the majority of SNPs in that contig cluster with. Contigs previously identified as undergoing convergent evolution with spruce by Yeaman et al. [46] are indicated with an asterisk. Abbreviations: Temp, temperature; Precip, precipitation; freq, frequency
The co-association modules are shown in Fig. 2c–f. Each connected network of SNPs can be considered a group of loci that shows associations with a distinct environmental factor. The "Multi" group stands for multiple environments because these SNPs showed associations with 19 to 21 of the 22 environmental variables. This group consisted of 60 top candidate SNPs across just three genes, and undirected graph networks revealed two co-association modules within this group (Fig. 2c, Additional file 1: Figure S2). The "Aridity" group consisted of 282 SNPs across 28 genes and showed associations with climate-moisture deficit, annual heat:moisture index, mean summer precipitation, and temperature variables excluding those that were frost-related (Fig. 2b). All these SNPs were very similar in their patterns of association and grouped into a single co-association module (Fig. 2d, Additional file 1: Figure S3). The "Freezing" group consisted of 176 SNPs across 21 genes and showed associations with freezing variables including number of degree days below 0 °C, mean coldest month temperature, and variables related to frost occurrence (Fig. 2b). SNPs from eight of the genes in this group formed a single module (gene no. 35–42), with the remaining SNPs mainly clustering by gene (Fig. 2e, Additional file 1: Figure S4). The final group, "Geography," consisted of 282 SNPs across 28 genes that showed consistent associations with the geographical variables elevation and longitude, but variable associations with other climate variables (Fig. 2b). This group consisted of several co-association modules containing one to nine genes (Fig. 2f, Additional file 1: Figure S5). 
Network analysis using population-structure-corrected associations between allele frequency and the environmental variables resulted in broadly similar patterns; although the magnitude of the correlations was reduced (Additional file 1: Figure S6, note that neutral genetic structure was controlled for in choosing top candidates).
The pleiotropy barplot is visualized in Fig. 2g, where each gene is listed along the x axis, the bar color indicates the co-association module, and the bar height indicates the number of SNPs clustering with that module. If each co-association module associates with a distinct aspect of the multivariate environment, then genes whose SNPs associate with different co-association modules (e.g., genes with different colors in their bars in Fig. 2g) might be considered to be environmentally pleiotropic. However, conceptual issues remain in inferring the extent of pleiotropy, because co-association modules within the Geography group, for instance, will be more similar to each other in their associations with environments than between a module in the Geography group and a module in the Multi group. For this reason, we are only inferring that our results are evidence of environmental pleiotropy when genes have SNPs in at least two of the four major groups in the data. For instance, gene no. 1, for which the majority of SNPs cluster with the Multi group, also has eight SNPs that cluster with the Freezing group (although they are not located in co-association modules with any genes defined by Freezing). In the Aridity group, gene no. 11 has three SNPs that also cluster with the Geography group (although they are not located in co-association modules with any genes defined by Geography). In the Freezing group, some genes located within the same co-association module (no. 35–40) also have SNPs that cluster with another module in the Geography group (with gene nos. 75–76; these are not physically linked to gene nos. 35–37, see below). Whether or not these are "true" instances of environmental pleiotropy remains to be determined by experiments. 
For the most part, however, the large majority of SNPs located within genes are in the same co-association module, or in modules located within one of the four main groups, so environmental pleiotropy at the gene level appears to be generally quite limited.
Statistical and physical linkage disequilibrium
To determine if grouping of SNPs into co-association modules corresponded to associations driven by statistical associations among genes measured by linkage disequilibrium (LD), we calculated mean LD among all SNPs in the top candidate genes (as the correlation in allele frequencies). We found that the co-association modules captured patterns of LD among the genes through their common associations with environmental variables (Additional file 1: Figure S7). There was higher than average LD within the co-association modules of the Multi, Aridity, and Freezing groups, and very low LD between the Aridity group and the other groups (Additional file 1: Figure S7). The LD among the other three groups (Multi, Freezing, and Geography) was small, but higher with each other than with Aridity. Thus, the co-association clustering corresponded to what we would expect based on LD among genes, with the important additional benefit of linking LD clusters to likely environmental drivers of selection.
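The LD measure quoted here — the squared correlation of allele frequencies across populations — is simple to compute. A sketch with invented toy numbers (matrix shapes are illustrative only):

```python
import numpy as np

# Hypothetical allele-frequency matrix: 20 populations x 4 SNPs.
rng = np.random.default_rng(2)
freq = rng.uniform(0.0, 1.0, size=(20, 4))

# LD proxy used here: squared correlation coefficient (r^2) between
# allele frequencies of each SNP pair across populations.
r = np.corrcoef(freq, rowvar=False)   # 4 x 4 correlation matrix
ld = r**2
# Mean LD over distinct SNP pairs (upper triangle, diagonal excluded).
mean_ld = ld[np.triu_indices_from(ld, k=1)].mean()
```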
The high LD observed within the four main environmental modules could arise via selection by the same factor of the multivariate environment, or via physical linkage on the chromosome, or both. We used a mapping population to disentangle these two hypotheses, by calculating recombination rates among the top candidate genes (see the Methods section "Recombination rates"). Of the 108 top candidate genes, 66 had SNPs that were represented in our mapping population. The recombination data revealed that all the genes in the Aridity group were in strong LD and physically linked (Fig. 3). Within the other three groups, we found physical proximity for only a few genes, typically within the same co-association module (but note that our mapping analysis does not have high power to infer recombination rate when loci are physically unlinked; see the "Methods" section). For example, a few co-association modules in the Geography group (comprised of gene nos. 53–54, no. 60–63, or no. 75–76) had very low recombination rates among them. Of the three genes forming the largest co-association module in the Freezing group that was represented in our mapping panel (no. 35–37), two were physically linked.
Comparison of linkage disequilibrium (lower diagonal) and recombination rates (upper diagonal) for exome contigs. Only contigs with SNPs in the mapping panel are shown. Rows and column labels correspond to Fig. 2g. Darker areas represent either high physical linkage (low recombination) or high linkage disequilibrium (measured by the square of the correlation coefficient)
Strikingly, low recombination rates were estimated between some genes belonging to different co-association modules across the four main groups, even though there was little LD among SNPs in these genes (Fig. 3). This included a block of loci with low recombination comprised of genes from all four groups: eight genes from the Aridity co-association module, one gene from the large module in the Multi group, two genes from different co-association modules in the Freezing group, and seven genes from different co-association modules in the Geography group (upper diagonal of Fig. 3, see Additional file 1: Figure S8 for a reorganization of the recombination data and more intuitive visualization).
Comparison to conclusions based on principal components of environments
We compared the results from the co-association network analysis to associations with principal components (PC) of the environmental variables. Briefly, all environmental variables were input into a PC analysis, and associations between allele frequencies and PC axes were analyzed. We used the same criteria (log10 BF > 2 in Bayenv2) to determine if a locus was significant and compared (i) overlap with top candidate SNPs based on outliers from univariate associations with environments and (ii) interpretation of the selective environment based on loadings of environments to PC axes. The first three PC axes explained 44% (PC1), 22% (PC2), and 15% (PC3) of the variance in environments (80% total). Loadings of environment variables onto PC axes are shown in Additional file 1: Figure S9. A large proportion of top candidate SNPs in our study would not have been found if we had first done a PCA on the environments and then looked for outliers along PC axes: overall, 80% of the geography SNPs, 75% of the Freezing SNPs, 20% of the Aridity SNPs, and 10% of the Multi SNPs were not outliers along the first 10 PC axes and would have been missed.
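For reference, the environmental PCA used in this comparison can be sketched as an SVD of the centered environment matrix (the dimensions below are invented; in the study there were 22 variables). Allele frequencies would then be tested for association against the score columns rather than the raw variables.

```python
import numpy as np

# Hypothetical environment matrix: 20 populations x 5 variables.
rng = np.random.default_rng(1)
env = rng.normal(size=(20, 5))

# PCA via singular value decomposition of the centered matrix.
centered = env - env.mean(axis=0)
u, s, vt = np.linalg.svd(centered, full_matrices=False)
scores = centered @ vt.T          # population scores on each PC axis
loadings = vt.T                   # loadings of each variable on each axis
explained = s**2 / np.sum(s**2)   # proportion of variance per axis
```

One caveat the text goes on to illustrate: the axes maximizing explained environmental variance need not be the axes along which selection acts, so outlier tests on PC scores can miss loci that univariate associations detect.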
Next, we evaluated whether interpretation of selective environments based on PCs was consistent with that based on associations with individual environmental factors. Some of the temperature and frost variables (MAT, mean annual temperature; EMT, extreme minimum temperature; DD0, degree days below 0 °C; DD5, degree days above 5 °C; bFFP, begin frost-free period; FFP, frost-free period; eFFP, end frost-free period; labels in Fig. 2a) had the highest loadings for PC1 (Additional file 1: Figure S9). Almost all of the SNPs in the Multi group (90%) and 19% of SNPs in the Freezing group were outliers along this axis (Additional file 1: Figure S10, note green outliers along x axis from the Multi group; less than 2% of candidate SNPs in the other groups were outliers). For PC1, interpretation of the selective environment (e.g., MAT, DD0, FFP, eFFP, DD5) is partly consistent with the co-association network analysis. It was consistent because both Multi SNPs and Freezing SNPs show associations with all these variables (Fig. 2b). However, it was inconsistent because the Multi SNPs and Freezing SNPs had strong associations with other variables (e.g., Multi SNPs showed strong associations with latitude, and Freezing SNPs showed strong associations with longitude, Fig. 2b) that did not load strongly onto this axis, and so these putative environmental drivers would have been missed in an interpretation based on associations with principal components.
Many precipitation and aridity variables loaded strongly onto PC2, including mean annual precipitation, annual heat:moisture index, climate-moisture deficit, and precipitation as snow (Additional file 1: Figure S9). However, few top candidate SNPs were outliers along the PC2 axis: only 13% of Freezing SNPs, 10% of Aridity SNPs, and less than 3% of Multi or Geography SNPs were outliers (Additional file 1: Figure S10A, note lack of outliers on y axis).
For PC3, latitude, elevation, and two frost variables (beginning frost-free period and frost-free period) had the highest loadings (Additional file 1: Figure S9). The majority (78%) of the Aridity SNPs were outliers along PC3 (Additional file 1: Figure S10B, note outliers as orange dots on y axis). Based on the PC association, this would lead one to conclude that the Aridity SNPs show associations with latitude, elevation, and frost-free period. While the Aridity SNPs do have strong associations with latitude (the fifth row in Fig. 2b), they show very weak associations with the beginning of frost-free period, elevation, and frost-free period length (the third, fourth, and last rows in Fig. 2b, respectively). Thus, interpretation of the environmental drivers of selection based on associations with PC3 would have been very different from the univariate associations.
Interpretation of multivariate allele associations
While the network visualization gives insight into patterns of LD among loci, it does not reveal patterns of allele frequency change on the landscape relative to the ancestral state. As illustrated above, principal components would not be useful for this latter visualization. Instead, we accomplished this by plotting the association of a derived allele with one environmental variable against the association of that allele with a second environmental variable. Note that when the two environmental variables themselves are correlated on the landscape, an allele with a larger association in one environment will also have a larger association with a second environment, regardless of whether or not selection is shaping those associations. We can visualize (i) the expected genome-wide covariance (given correlations between environmental variables; Fig. 2a) using shading of quadrants and (ii) the observed genome-wide covariance using a 95% prediction ellipse (Fig. 4). Since alleles were coded according to their putative ancestral state in loblolly pine (Pinus taeda), the location of any particular SNP in the plot represents the bivariate environment in which the derived allele is found in higher frequency than the ancestral allele (Fig. 4). Visualizing the data in this way allows us to understand the underlying correlation structure of the data, as well as to develop testable hypotheses about the true selective environment and the fitness of the derived allele relative to the ancestral allele.
Overview of galaxy biplots. The association between allele frequency and one variable is plotted against the association between allele frequency and a second variable. The Spearman's ρ correlation between the two variables (mean annual temperature or MAT and mean annual precipitation or MAP in this example) is shown in the lower right corner. When the two variables are correlated, genome-wide covariance is expected to occur in the direction of their association (shown with quadrant shading in light gray). The observed genome-wide distribution of allelic effects is plotted in dark gray, and the 95% prediction ellipse is plotted as a black line. Because derived alleles were coded as 1 and ancestral alleles were coded as 0, the location of any particular SNP in bivariate space represents the type of environment in which the derived allele is found at higher frequency, whereas the location of the ancestral allele would be a reflection through the origin (note only derived alleles are plotted)
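As a rough sketch of how the biplot coordinates and the 95% prediction ellipse can be computed, the snippet below uses simulated data; all names and values are hypothetical, not from the study.

```python
import numpy as np
from scipy import stats

# Sketch of "galaxy biplot" coordinates: each SNP is placed at
# (association with environment 1, association with environment 2), and the
# genome-wide 95% prediction ellipse is derived from the covariance of those
# coordinates. Data are simulated for illustration.
rng = np.random.default_rng(1)
n_pops, n_snps = 60, 500
env1 = rng.normal(size=n_pops)                    # e.g., MAT
env2 = 0.5 * env1 + rng.normal(size=n_pops)       # correlated, e.g., MAP
freq = rng.uniform(0, 1, size=(n_snps, n_pops))   # derived-allele frequencies

# Spearman's rho between each SNP's allele frequencies and each environment.
assoc1 = np.array([stats.spearmanr(f, env1)[0] for f in freq])
assoc2 = np.array([stats.spearmanr(f, env2)[0] for f in freq])

# 95% prediction ellipse: semi-axes are sqrt(eigenvalue * chi2_{0.95, df=2})
# of the 2x2 covariance matrix of the bivariate associations.
cov = np.cov(assoc1, assoc2)
eigvals, eigvecs = np.linalg.eigh(cov)
semi_axes = np.sqrt(eigvals * stats.chi2.ppf(0.95, df=2))
```

Plotting `assoc1` against `assoc2` with the ellipse overlaid reproduces the layout of Fig. 4; candidate SNPs falling outside the ellipse are the ones highlighted in the galaxy biplots.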
We overlaid the top candidate SNPs, colored according to their grouping in the co-association network analysis, on top of this genome-wide pattern (for the 668 of 801 top candidate SNPs for which the derived allele could be determined). We call these plots "galaxy biplots" because of the characteristic patterns we observed when visualizing data this way (Fig. 5). Galaxy biplots revealed that SNPs in the Aridity group showed associations with hot/dry versus cold/wet environments (red points in Fig. 5a), while SNPs in the Multi and Freezing groups showed patterns of associations with hot/wet versus cold/dry environments (blue and green dots in Fig. 5a). These outlier patterns became visually stronger for some SNPs and environments after correcting associations for population structure (compare Fig. 5a–b, structure-corrected allele frequencies calculated with Bayenv2, see the "Methods"). Most SNPs in the Freezing group showed associations with elevation but not latitude (compare height of blue points on y axis of Fig. 5c–e). Conversely, the large co-association module in the Multi group (gene no. 1, dark green points) showed associations with latitude but not elevation, while the second co-association module in the Multi group (gene nos. 2–3, light green points) showed associations with both latitude and elevation (compare height of points on y axis of Fig. 5c–e). Note how the structure correction polarized these patterns somewhat without changing interpretation, suggesting that the structure-corrected allelic associations become more extreme when their pattern of allele frequency contrasted the background population structure (compare left column of Fig. 5 to right column of Fig. 5).
Galaxy biplots for different environmental variables for regular associations (left column) and structure-corrected associations (right column). Top candidate SNPs are highlighted against the genome-wide background. The correlation shown in the lower right corner represents Spearman's ρ between the two environmental variables on the landscape. The internal color of each point corresponds to its co-association module (as shown in Fig. 2c–f). Top row: mean annual temperature (MAT) vs. mean annual precipitation (MAP), middle row: MAT and elevation, bottom row: MAT and latitude (LAT)
Some modules were particularly defined by the fact that almost all the derived alleles changed frequency in the same direction (e.g., sweep-like signatures). For instance, for the co-association module in the Multi group defined by gene nos. 2–3, 14 of the 16 derived SNPs were found in higher frequencies at colder temperatures, higher elevations, and higher latitudes. Contrast this with a group of SNPs from a co-association module in the Freezing group defined by gene no. 32, in which 14 of 15 derived SNPs were found in higher frequencies at warmer temperatures and lower elevations, but showed no associations with latitude. These may be candidates for genotypes that have risen in frequency to adapt to particular environmental conditions on the landscape.
Conversely, other modules showed different combinations of derived alleles that arose in frequency at opposite values of environmental variables. For instance, derived alleles in the Aridity co-association module were found in higher frequency in either warm, dry environments (88 of 155 SNPs) or in cold, moist environments (67 of 155 SNPs). Similarly, for the Multi co-association module defined by gene no. 1, derived alleles were found in higher frequency in either cold, dry environments (15 of 37 SNPs), or in warm, moist environments (22 of 37 SNPs). These may be candidates for genes acted on by antagonistic pleiotropy within a locus (Table 1), in which one genotype is selected for at one extreme of the environment and another genotype is selected for at the other extreme of the environment. Unfortunately, we were unable to fully characterize the relative abundance of sweep-like vs. antagonistically pleiotropic patterns across all top candidate genes due to (i) the low number of candidate SNPs for most genes, and (ii) for many SNPs, the derived allele could not be determined (because there was a SNP or missing data in the ancestral species).
We also visualized the patterns of allele frequency on the landscape for two representative SNPs, chosen because they had the highest number of connections in their co-association module (and were more likely to be true positives, see the Results section "Simulated datasets"). Geographic and climatic patterns are illustrated with maps for two such SNPs: (i) a SNP in the Multi co-association module with significant associations with latitude and mean annual temperature (Fig. 6a, gene no. 1 from Fig. 2) and (ii) a SNP in the Aridity co-association module with significant associations with annual heat:moisture index and latitude (Fig. 6b, gene no. 8 from Fig. 2). These maps illustrate the complex environments that may be selecting for particular combinations of genotypes despite potentially high gene flow in this widespread species.
Pie charts representing the frequency of derived candidate alleles across the landscape. Allele frequency pie charts are overlain on top of an environment that the SNP shows significant associations with. The environment for each population is shown by the color of the outline around the pie chart. a Allele frequency pattern for a SNP from contig 1 in the Multi cluster from Fig. 2. The derived allele had negative associations with temperature but positive associations with latitude. b Allele frequency pattern for a SNP from contig 8 in the Aridity cluster. The derived allele had negative associations with annual heat:moisture index (and other measures of aridity) and positive associations with latitude. SNPs were chosen as those with the highest degree in their co-association module
Candidate gene annotations
Although many of the candidate genes were not annotated, as is typical for conifers, the genes underlying adaptation to these environmental gradients had diverse putative functions. The top candidate SNPs were found in 3′ and 5′ untranslated regions and open reading frames in higher proportions than all exome SNPs (Additional file 1: Figure S11). A gene ontology (GO) analysis using previously assigned gene annotations [46, 55] found that a single molecular function, solute:cation antiporter activity, was over-represented across all top candidate genes (Additional file 2: Table S1). In the Aridity and Geography groups, annotated genes included sodium or potassium ion antiporters (one in Aridity, a KEA4 homolog, and two in Geography, NHX8 and SOS1 homologs), suggestive of a role in drought, salt or freezing tolerance [56]. Genes putatively involved in auxin biosynthesis were also identified in the Aridity (YUCCA 3) and Geography (Anthranilate synthase component) groups (Additional file 3: Table S2), suggestive of a role in plant growth. In the Freezing and Geography groups, several flowering time genes were identified [57] including a homolog of CONSTANS [58] in the Freezing group and a homolog of FY, which affects FCA mRNA processing, in the Geography group [58] (Additional file 3: Table S2). In addition, several putative drought/stress response genes were identified, such as DREB transcription factor [59] and an RCD1-like gene (Additional file 3: Table S2). RCD-1 is implicated in hormonal signaling and in the regulation of several stress-responsive genes in Arabidopsis thaliana [57]. In the Multi group, the only gene that was annotated functions in acclimation of photosynthesis to the environment in A. thaliana [60].
Of the 47 candidate genes identified by Yeaman et al. [46] as undergoing convergent evolution for adaptation to low temperatures in lodgepole pine and the interior spruce hybrid complex (Picea glauca, P. engelmannii, and their hybrids), 10 were retained with our stringent criteria for top candidates. All of these genes grouped into the Freezing and Geography groups (shown by an asterisk in Fig. 2g): the two groups that had many SNPs with significant associations with elevation. This is consistent with the pattern of local adaptation in the interior spruce hybrid zone, whereby Engelmann spruce is adapted to higher elevations and white spruce is adapted to lower elevations [61].
Comparison of co-expression clusters to co-association modules
To further explore if co-association modules have similar gene functions, we examined their gene expression patterns in response to climate treatments using previously published RNAseq data of 10,714 differentially expressed genes that formed eight distinct co-expression clusters [55]. Of the 108 top candidate genes, 48 (44%) were also differentially expressed among treatments in response to factorial combinations of temperature (cold, mild, or hot), moisture (wet vs. dry), and/or day length (short vs. long day length). We found limited correspondence between co-association modules and co-expression clusters. Most of the top candidate genes that were differentially expressed mapped to two of the eight co-expression clusters previously characterized by [55] (Fig. 7, blue circles are the P2 co-expression cluster and green triangles are the P7 co-expression cluster previously described by [55]). Genes in the P2 co-expression cluster had functions associated with the regulation of transcription and their expression was strongly influenced by all treatments, while genes in the P7 co-expression cluster had functions relating to metabolism, photosynthesis, and response to stimulus [55]. Genes from the closely linked Aridity group mapped to four distinct co-expression clusters, contigs from the Freezing group mapped to three distinct co-expression clusters, and genes from the Geography group mapped to three distinct co-expression clusters.
Co-association modules mapped to co-expression clusters determined by climate treatments. Gene ID, color, and order shown on the bottom correspond to co-association modules plotted in Fig. 2. Co-expression clusters from [55] are shown at the top
We used a Fisher exact test to determine if any co-expression cluster was over-represented in any of the four major co-association groups shown in Fig. 2. We found that the Freezing group was over-represented in the P2 co-regulated gene expression cluster (P < 0.05) with seven (58%) of the Freezing genes found within the P2 expression cluster, revealing coordinated expression in response to climatic conditions. Homologs of four of the seven genes were present in A. thaliana, and three of these genes were transcription factors involved in abiotic stress response (DREB transcription factor), flowering time (CONSTANS, pseudo-response regulator) or the circadian clock (pseudo-response regulator 9). No other significant over-representation of gene expression class was identified for the four association groups or for all adaptation candidate genes.
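A minimal sketch of such an over-representation test is shown below, assuming illustrative counts rather than the study's exact contingency table.

```python
from scipy.stats import fisher_exact

# Sketch of the over-representation test: a 2x2 table of candidate genes
# cross-classified by co-association group (Freezing vs. other) and
# co-expression cluster (P2 vs. other). Counts are illustrative placeholders.
table = [[7, 5],    # Freezing genes: in P2, not in P2
         [8, 28]]   # other candidates: in P2, not in P2
odds_ratio, p_value = fisher_exact(table, alternative="greater")
print(odds_ratio, p_value)
```

A one-sided test (`alternative="greater"`) asks specifically whether Freezing genes are enriched in P2; an odds ratio above 1 indicates over-representation.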
Simulated datasets
We used individual-based simulations to examine potential limitations of the co-association network analysis by comparing the connectedness of co-association networks arising from false positive neutral loci vs. a combination of false positive neutral loci and true positive loci that had experienced selection to an unmeasured environmental factor. Specifically, we used simulations with random sampling designs from three replicates across three demographic histories: (i) isolation by distance at equilibrium (IBD), (ii) non-equilibrium range expansion from a single refugium (1R), or from (iii) two refugia (2R). These landscape simulations were similar to lodgepole pine in the sense that they simulated large effective population sizes and resulted in similar FST across the landscape as that observed in pine ([62, 63], FST in simulations ~ 0.05, vs. FST in pine ~ 0.016 [46]). To explore how the allele frequencies that evolved in these simulations might yield spurious patterns under the co-association network analysis, we overlaid the 22 environmental variables used in the lodgepole pine dataset onto the landscape genomic simulations [62, 63]. To simulate selection to an unmeasured environmental factor, a small proportion of SNPs (1%) were subjected to computer-generated spatially varying selection along a weak latitudinal cline [62, 63]. We assumed that 22 environmental variables were measured, but not the "true" selective environment; our analysis thus represents the ability of co-association networks to correctly cluster selected loci even when the true selective environment was unmeasured, but a number of other environmental variables were measured (correlations between the selective environment and the other variables ranged from 0 to 0.2). 
Note that the simulations differ from the empirical data in at least two ways: (i) there is only one selective environment (so we can evaluate whether a single selective environment could result in multiple co-association modules in the data given the correlation structure of observed environments) and (ii) loci were unlinked.
The P value and Bayes factor criteria for choosing top candidate SNPs in the empirical data produced no false positives with the simulated datasets (Additional file 1: Figure S12 right column), although using these criteria also reduced the proportion of true positives. Therefore, we used less stringent criteria to analyze the simulations so that we could also better understand patterns created by unlinked, false positive neutral loci (Additional file 1: Figure S12 left column).
We found that loci under selection by the same environmental factor generally formed a single tightly connected co-association module even though they were unlinked and that the degree of connectedness of selected loci was greater than among neutral loci (Fig. 8). Thus, a single co-association module typically resulted from adaptation to the single selective environment in the simulations. This occurred because the distance threshold used to define connections in the co-association modules was chosen as one that enriched for connections among selected loci with non-random associations in allele frequencies due to selection by a common environmental factor (Additional file 1: Figure S13).
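A minimal sketch of this construction follows, assuming a simple Euclidean distance on vectors of environmental associations and an arbitrary threshold; both are illustrative choices, not the study's calibrated procedure.

```python
import numpy as np

# Sketch of a co-association network: SNPs are nodes, and an edge joins two
# SNPs whose vectors of environmental associations lie within a distance
# threshold; modules are the connected components of that graph.
rng = np.random.default_rng(2)
n_snps, n_envs = 30, 22
assoc = rng.normal(scale=0.5, size=(n_snps, n_envs))          # "neutral" SNPs
assoc[:10] = 0.8 + rng.normal(scale=0.05, size=(10, n_envs))  # "selected" SNPs

threshold = 1.0
# Pairwise distances between association vectors, thresholded into edges.
dist = np.linalg.norm(assoc[:, None, :] - assoc[None, :, :], axis=-1)
adj = (dist < threshold) & ~np.eye(n_snps, dtype=bool)
degree = adj.sum(axis=1)

# Connected components via union-find: each component is one module.
parent = list(range(n_snps))
def find(x):
    while parent[x] != x:
        parent[x] = parent[parent[x]]
        x = parent[x]
    return x
for i, j in zip(*np.nonzero(np.triu(adj))):
    parent[find(i)] = find(j)
modules = {}
for v in range(n_snps):
    modules.setdefault(find(v), set()).add(v)
```

In this toy example the ten "selected" SNPs share a common environmental signal and collapse into one tightly connected module with high node degree, while the neutral SNPs remain largely isolated, mirroring the enrichment behavior of the threshold described above.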
Comparison of co-association networks resulting from simulated data for three demographic scenarios. a Isolation by distance (IBD), b range expansion from a single refugium (1R), and c range expansion from two refugia (2R). All SNPs were simulated unlinked and 1% of SNPs were simulated under selection to an unmeasured weak latitudinal cline. Boxplots of degree of connectedness of a SNP as a function of its strength of selection, across all replicate simulations (top row). Examples of networks formed by datasets that were neutral-only (middle row) or neutral+selected (bottom row) outlier loci
The propensity of neutral loci to form tightly clustered co-association networks increased with the complexity of demographic history (compare Fig. 8 IBD in the left column to 2R in the right column). For example, the false positive neutral loci from the two-refugia (2R) model formed tightly connected networks, despite the fact that all simulated loci were unlinked. This occurred because of non-random associations in allele frequency due to a shared demographic history. In some cases, selected loci formed separate or semi-separate modules according to their strengths of selection, but the underlying patterns of association were the same (e.g., Figure 8a, Additional file 1: Figure S14).
Discussion
Co-association networks provide a valuable framework for interpreting the genetic architecture of local adaptation to the environment in lodgepole pine. Our most interesting result was the discovery of low recombination rates among genes putatively adapting to different and distinct aspects of climate, which was unexpected because selection is predicted to increase recombination between loci acted on by different sources of selection. If the loci we studied were true causal loci, then different sources of selection were strong enough to reduce LD among physically linked loci in the genome, resulting in modular effects of loci on fitness in the environment. While the top candidate SNPs from most genes had associations with only a single environmental factor, for some genes, we discovered evidence of environmental pleiotropy, i.e., candidate SNPs associated with multiple distinct aspects of climate. Within co-association modules, we observed a combination of local sweep-like signatures (in which derived alleles at a locus were all found in a particular climate, e.g., cold environments) and antagonistically pleiotropic patterns underlying adaptation to climate (in which some derived alleles at a locus were found at one environmental extreme and others found at the opposite extreme), although we could not evaluate the relative importance of these patterns. Finally, we observed that the modularity of candidate genes in their transcriptionally plastic responses to climate factors did not correspond to the modularity of these genes in their patterns of association with climate, as evidenced through comparing co-association networks with co-expression networks. These results give insight into evolutionary debates about the extent of modularity and pleiotropy in the evolution of genetic architecture [18,19,20,21,22,23,24].
Genetic architecture of adaptation: pleiotropy and modularity
Most of the top candidate genes in our analysis do not exhibit universal pleiotropy to distinct aspects of climate as defined by the expected pattern outlined in Fig. 1b. Our results are more consistent with the Hypothesis of Modular Pleiotropy [19], in which loci may have extensive effects within a distinct aspect of the environment (as defined by the variables that associate with each co-association module), but few pleiotropic effects among distinct aspects of the environment. These results are in line with theoretical predictions that modular architectures should be favored when there are many sources of selection in complex environments [26]. But note also that if many pleiotropic effects are weak, the stringent statistical thresholds used in our study to reduce false positives may also reduce the extent to which pleiotropy is inferred [20, 21]. Therefore, in our study, any pleiotropic effects of genes on fitness detected in multiple aspects of climate are likely to be large effects, and we refrain from making any claims as to the extent of environmental pleiotropy across the entire genome.
The extent of pleiotropy within individual co-association modules is hard to quantify, as for any given module, we observed associations between genes and several environmental variables. Associations between a SNP and multiple environmental variables may or may not be interpreted as extensive environmental pleiotropic effects, depending on whether univariate environmental variables are considered distinct climatic factors or collectively represent a single multivariate optimum. In many cases, these patterns are certainly affected by correlations among the environmental variables themselves.
Our results also highlight conceptual issues with the definition of and interpretation of pleiotropic effects on distinct aspects of fitness from real data: namely, what constitutes a "distinct aspect" (be it among traits, components of fitness, or aspects of the environment)? In this study, we defined the selective environment through the perspective of those environmental variables we tested for associations with SNPs, using a threshold that produced reasonable results in simulation. But even with this definition, some co-association modules are more similar in their multivariate environmental "niche" than others. For instance, genes within the Geography group could be interpreted to have extensive pleiotropic effects if the patterns of associations of each individual module were taken to be "distinct," or they may be considered to have less extensive pleiotropic effects if their patterns of associations were too similar to be considered "distinct." While the framework we present here is a step toward understanding and visualizing this hierarchical nature of "distinct aspects" of environmental factors, a more formal framework is needed to quantify the distinctness of pleiotropic effects.
Genetic architecture of adaptation: linkage
We also observed physical linkage among genes that were associated with very distinct aspects of climate. This was somewhat unexpected from a theoretical perspective: while selection pressures due to genome organization may be weak, if anything, selection would be expected to disfavor linkage and increase recombination between genes adapting to selection pressures with different spatial patterns of variation [34,35,36]. Interestingly, while the recombination rate analysis suggests that these loci are sometimes located relatively close together on a single chromosome, this does not seem to be sufficient physical linkage to also cause a noticeable increase in LD. In other words, it is possible that the amount of physical linkage sometimes observed between genes in different co-association modules is not strong enough to constrain adaptation to these differing gradients. Genetic maps and reference genomes are not yet well developed for the large genomes of conifers; improved genetic maps or assembled genomes will be required to explore these questions in greater depth. If this finding is robust and not compromised by false positives, physical linkage among genes adapting to different climatic factors could either facilitate or hinder a rapid evolutionary response as the multivariate environment changes [4, 5].
Within co-association modules, we observed varying patterns of physical linkage among genes. The Aridity group, in particular, consisted of several tightly linked genes that may have arisen for a number of different reasons. Clusters of physically linked genes such as this may act as a single large-effect QTL [64] and may have evolved due to competition among alleles or genomic rearrangements ([30], although these are rare in conifers), increased establishment probability due to linked adaptive alleles [4], or divergence within inversions [32]. Alternatively, if the Aridity region was one of low recombination, a single causal variant could create the appearance of linked selection [65], a widespread false positive signal may have arisen due to genomic variation such as background selection and increased drift [66,67,68], or a widespread false signal may have arisen due to a demographic process such as allele surfing [69, 70].
Genetic architecture of adaptation: modularity of transcriptional plasticity vs. fitness
We also compared co-expression networks to co-association networks. Genes whose expression in lodgepole pine seedlings responded similarly to experimental climatic treatments form a co-expression network. Since co-expression networks have been successful at identifying genes that respond the same way to environmental stimuli [71], it might be reasonable to expect that if these genes were adapting to climate they would also show similar patterns of associations with climate variables. However, differential expression analyses only identify genes with plastic transcriptional responses to climate. Plasticity is not a prerequisite for adaptation and may instead be an alternative strategy to it. This is illustrated by our result that only half of our top candidate contigs for adaptation to climate were differentially expressed in response to climate conditions.
Interestingly, loci located within the same co-association module (groups of loci that are putatively favored or linked to loci putatively favored by natural selection) could be found in different co-expression clusters. For example, we observed that loci from the tightly linked Aridity module had many distinct expression patterns in response to climate treatments. Conversely, candidate genes that were associated with different aspects of the multivariate environment (because they were located in different co-association modules) could nonetheless be co-expressed in response to specific conditions. These observations support the speculation that the developmental/functional modularity of plasticity may not correspond to the modularity of the genotype to fitness map; however, the power of the analysis could be low due to stringent statistical cutoffs and these patterns warrant further investigation.
Physiological adaptation of lodgepole pine to climate
It is challenging to disentangle the physiological effects and importance of freezing versus drought in the local adaptation of conifers to climate. We found distinct groups of candidate genes along an axis of warm/wet to cold/dry (co-association modules in the Freezing and Multi groups), and another distinct group along an axis of cold/wet to warm/dry (the Aridity co-association module). Selection by drought conditions in winter may occur through extensive physiological remodeling that allows cells to survive intercellular freezing by desiccating protoplasts—but also results in drought stress at the cellular level [55]. Another type of winter drought injury in lodgepole pine—red belt syndrome—is caused by warm, often windy events in winter, when foliage desiccates but the ground is too cold for roots to be able to supply water above ground [72]. This may contrast with drought selection in summer, when available soil water is lowest and aridity highest. The physiological and cellular mechanisms of drought and freezing response have similarities but also potentially important differences that could be responsible for the patterns we have observed.
Our results provide a framework for developing hypotheses that will help to disentangle selective environments and provide genotypes for assisted gene flow in reforestation [73]. While climate change is expected to increase average temperatures across this region, some areas are experiencing more precipitation than historic levels and others experiencing less [74]. Tree mortality rates are increasing across North America due to increased drought and vapor pressure deficit for tree species including lodgepole pine, and associated increased vulnerability to damaging insects, but growth rates are also increasing with warming temperatures and increased carbon dioxide [75, 76]. Hot, dry valleys in southern BC are projected to have novel climates emerge that have no existing analogues in North America [77]. The considerable standing adaptive variation we observe here involving many genes could facilitate adaptation to new temperature and moisture regimes, or could hinder adaptation if novel climates are at odds with the physical linkage among alleles adapted to different climate stressors.
Limitations of associations with principal components
For these data, testing associations of genes with PC-based climate variables would have led to a very limited interpretation of the environmental drivers of selection, because the PC ordination is not biologically informed as to which factors drive divergent selection [37]. First, many putative candidates in the Freezing and Geography groups would have been missed. Second, strong associations between the Multi SNPs and environmental variables that did not load strongly onto PC1, such as latitude, would also have been missed. Finally, many Aridity SNPs were significantly associated with PC3, an axis that correlated strongly with environmental variables with which the Aridity SNPs themselves had no significant associations. This occurred because no single environmental variable loaded strongly onto PC3 (the maximum loading of any single variable was 0.38) and many variables had moderate loadings, such that no single variable explained the majority of the variance in that axis (the maximum variance explained by any one variable was 15%). Thus, associations with higher PC axes become increasingly difficult to interpret as the axis itself explains less of the variance in the multivariate environment and the environmental factors loading onto it explain similar amounts of variance. While principal components capture the environmental factors that covary the most, this covariance may have nothing to do with the combinations that drive divergent selection and local adaptation, so the PC approach needlessly adds a layer of complexity that may not reveal anything biologically important. In contrast, co-association networks highlight those combinations of environments that are biologically important for the genes likely involved in local adaptation.
Benefits and caveats of co-association networks
Co-association networks provide an intuitive and visual framework for understanding patterns of associations of genes and SNPs across many potentially correlated environmental variables. By parsing loci into different groups based on their associations with multiple variables, this framework offers a more informative approach than grouping loci according to their outlier status based on associations with single environmental variables. While in this study we have used them to infer groups of loci that adapt to distinct aspects of the multivariate environment, co-association networks could be widely applied to a variety of situations, including genotype-phenotype associations. They offer the benefit of jointly identifying modules of loci and the groups of environmental variables that the modules are associated with. While the field may still have some disagreement about how modularity and pleiotropy should be defined, measured, and interpreted [19,20,21, 23, 24], co-association networks at least provide a quantitative framework to define and visualize modularity.
Co-association networks differ from the application of bipartite network theory for estimating the degree of classical pleiotropic effects of genes on traits [3]. Bipartite networks are two-level networks in which the genes form one type of node and the traits form the second type, and a connection is drawn from a gene to a trait if there is a significant association [3]. The degree of pleiotropy of a locus is then inferred from the number of traits that a gene is connected to. With the bipartite network approach, trait nodes are defined by the traits that were measured, and not necessarily by the multivariate effects from the perspective of the gene (e.g., a gene that affects organism size will have effects on height, weight, and several other variables, and if all these traits are analyzed, this gene would be inferred to have large pleiotropic effects). Even if highly correlated traits are removed, simulations have shown that even mild correlations in mutational effects can bias estimates of pleiotropy from bipartite networks [20, 21]. The advantage of co-association networks is their ability to identify combinations of variables (be they traits or environments) that associate with genetic (or SNP) modules. Correlated variables that measure essentially the same environment or phenotype will simply cluster together in a module, which can facilitate interpretation. On the other hand, correlated variables that measure different aspects of the environment or phenotype may cluster into different modules (as we observed in this study). The observed combinations of associations can then be used to develop and test hypotheses as to whether the genotype-environment combination represents a single multivariate environment that the gene is adapting to (in the case of allele associations with environment or fitness) or a single multivariate trait that the gene affects (in the case of allele associations with phenotypes).
This approach may complement other machine-learning approaches based on multivariate associations with environments [78], which is a promising avenue for future research.
While co-association networks hold promise for elucidating the modularity and pleiotropy of the genotype-phenotype-fitness map, some caveats should be noted. First, correlations among variables will make it difficult to infer the exact conditions that select for, or the exact traits that associate with, particular allelic combinations. Results from this framework can make it easier, however, to generate hypotheses that can be tested with future experiments. Second, the analysis of simulated data shows that investigators should consider demographic history and choose candidates with caution in order to exclude false positives, as we have attempted here. Co-association networks can arise among unlinked neutral loci by chance, and it is almost certain that some proportion of the "top candidate SNPs" in this study are false positives due to linkage with causal SNPs or due to demographic history. The simulated data also showed, however, that causal SNPs tend to have a higher degree of connection in their co-association network than neutral loci, and this might help to prioritize SNPs for follow-up experiments, SNP arrays, and genome editing. Third, it may be difficult to draw conclusions about the level of modularity of the genetic architecture. The number of modules may be sensitive to the statistical thresholds used to identify top candidate SNPs [20, 21] as well as to the distance threshold used to identify modules. With our data, the number of co-association modules and the number of SNPs per module were not very sensitive to increasing this threshold by 0.05, but our results were sensitive to decreasing the threshold by 0.05 (a stricter threshold resulted in smaller modules of SNPs with extremely similar associations, and a large number of "modules" comprised of a single SNP unconnected to other SNPs, even SNPs in the same gene) (results not shown).
While inferred modules composed of a single SNP could be interpreted as unique, our simulations also show that neutral loci are more likely to be unconnected in co-association networks. Many alleles of small effect may be just below statistical detection thresholds, and whether or not these alleles are included could profoundly change inference as to the extent of pleiotropy [20, 21]. This presents a conundrum common to most population genomic approaches for detecting selection, because lowering statistical thresholds will almost certainly increase the number of false positives, while only using very stringent statistical thresholds may decrease the probability of observing pleiotropy if many pleiotropic effects are weak [20]. Thus, while co-association networks are useful for identifying SNP modules associated with correlated variables, further work is necessary to expand this framework to quantitatively measure pleiotropic effects in genomes.
In this study, we discovered physical linkage among loci putatively adapting to different aspects of the climate. These results give rare insight into both the ecological pressures that favor the evolution of modules by natural selection [19] and into the organization of genetic architecture itself. As climate changes, the evolutionary response will be determined by the extent of physical linkage among these loci, in combination with the strength of selection and phenotypic optima across environmental gradients, the scale and pattern of environmental variation, and the details of migration and demographic fluctuations across the landscape. While theory has made strides to provide a framework for predicting the genetic architecture of local adaptation under divergence with gene flow to a single environment [4, 30, 31, 79,80,81,82,83], as well as the evolution of correlated traits under different directions and/or strengths of selection when those traits have a common genetic basis [35, 36], how genetic architectures evolve on complex heterogeneous landscapes has not been clearly elucidated. Furthermore, it has been difficult to test theory because the field still lacks frameworks for evaluating empirical observations of adaptation in many dimensions. Here, we have attempted to develop an initial framework for understanding adaptation to several complex environments with different spatial patterns, which may also be useful for understanding the genetic basis of multivariate phenotypes from genome-wide association studies. This framework lays the foundation for future studies to examine modularity across the genotype-phenotype-fitness continuum.
Sampling and climate
This study uses the same dataset analyzed by Yeaman et al. [46], but with a different focus as explained in the introduction. Briefly, we obtained seeds from 281 sampling locations of lodgepole pine (Pinus contorta) from reforestation collections for natural populations, and these locations were selected to represent the full range of climatic and ecological conditions within the species range in British Columbia and Alberta based on ecosystem delineations. Seeds were grown in a common garden and 2–4 individuals were sampled from each sampling location. The environment for each sampling location was characterized by estimating climate normals for 1961–1990 from geographic coordinates using the software package ClimateWNA [84]. The program extracts and downscales the moderate-spatial-resolution climate surfaces generated by PRISM [85] to scale-free point locations and calculates many climate variables for specific locations based on latitude, longitude, and elevation. The downscaling is achieved through a combination of bilinear interpolation and dynamic local elevational adjustment. We obtained 19 climatic and three geographical variables (latitude, longitude, and elevation). Geographic variables may correlate with some unmeasured environmental variables that present selective pressure to populations (e.g., latitude correlates with day length). Many of these variables were correlated with each other on the landscape (Fig. 2a).
Sequencing, bioinformatics, and annotation
The methods for this section are identical to those reported in [46]. Briefly, DNA from frozen needle tissue was purified using a Macherey-Nagel Nucleospin 96 Plant II Core kit automated on an Eppendorf EpMotion 5075 liquid handling platform. One microgram of DNA from each individual tree was made into a barcoded library with a 350 bp insert size using the BioO NEXTflex Pre-Capture Combo kit. Six individually barcoded libraries were pooled together in equal amounts before sequence capture. The capture was performed using custom Nimblegen SeqCap probes (for more details, see [46, 47]) and the resulting captured fragments were amplified using the protocol and reagents from the NEXTflex kit. All sample preparation steps followed the recommended protocols provided. After capture, each pool of six libraries was combined with another completed capture pool and the 12 individually barcoded samples were then sequenced, 100-bp paired-end, on one lane of an Illumina HiSeq 2500 (at the McGill University and Genome Quebec Innovation Centre).
Sequenced reads were filtered and aligned to the loblolly pine genome [86] using bwa mem [87], and variants were called using GATK Unified Genotyper [88], with steps included for removal of PCR duplicates, realignment around indels, and base quality score recalibration [46, 88]. SNP calls were filtered to eliminate variants that did not meet the following cutoffs: quality score ≥ 20, map quality score ≥ 45, FisherStrand score ≤ 33, HaplotypeScore ≤ 7, MQRankSumTest ≥ −12.5, ReadPosRankSum > −8, allele balance < 2.2, minor allele frequency > 5%, and genotyped successfully in > 10% of individuals. Ancestral alleles were coded as 0 and derived alleles as 1 for data analysis.
We used the annotations developed for pine in [46]. Briefly, we performed a BLASTX search against the TAIR 10 protein database and identified the top blast hit for each transcript contig (e-value cut-off 10^−6). We also performed a BLASTX against the nr (non-redundant) database screened for green plants and used Blast2GO [89] to assign GO terms and enzyme codes (for details, see [46, 55]). We also assigned GO terms to each contig based on the A. thaliana GO mappings and removed redundant GO terms. To identify whether genes with particular molecular functions and biological processes were over-represented among top candidate genes, we performed a GO enrichment analysis using topGO [90]. All GO terms associated with at least two candidate genes were analyzed for significant over-representation within each group and in all candidate genes (FDR 5%).
Top candidate SNPs
First, top candidate genes were obtained from [46]. For this study, genes with unusually strong signatures of association from multiple association tests (uncorrected genotype-phenotype and genotype-environment correlations; for details, see [46]) were identified as those with more outlier SNPs than expected by chance at a probability of P < 10^−9, which is a very restrictive cutoff (note that due to non-independence among SNPs in the same contig, this P value is an index, not an exact probability). Thus, the subsequent analysis is limited to the loci that we have the highest confidence are associated with adaptation, as evidenced by a large number of significant SNPs (not necessarily the loci with the largest effect sizes).
For this study, we identified top candidate SNPs within the set of top candidate genes. These "top candidate SNPs" had allele-environment associations with (i) P values lower than the Bonferroni cutoff for the uncorrected Spearman's ρ (~10^−8 = 0.05/(number of SNPs × number of environmental variables)) and (ii) log10(BF) > 2 for the structure-corrected Spearman's ρ (Bayenv2; for details, see below). The resulting set of candidate SNPs rejects the null hypothesis of no association with the environment with high confidence. In subsequent analyses, we interpret the results both before and after correction for population structure, to ensure that structure correction does not change our overall conclusions. Note that because candidate SNPs are limited to the top candidate genes in order to reduce false positives, these restrictive cutoffs may miss many true positives.
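The Bonferroni cutoff above is simply 0.05 divided by the total number of SNP-environment tests. A minimal sketch (in Python rather than the R used in the study; the SNP count here is an assumption chosen only to reproduce the ~10^−8 order of magnitude, as the exact total is not stated in this section):

```python
# Hypothetical totals, for illustration only; the study's exact SNP count
# is not given in this section.
n_snps = 227_000   # assumed number of SNPs tested
n_env = 22         # 19 climatic + 3 geographic variables

# One hypothesis test per SNP-environment pair
bonferroni_cutoff = 0.05 / (n_snps * n_env)
print(f"{bonferroni_cutoff:.1e}")   # on the order of 1e-08, as in the text
```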
For uncorrected associations between allele frequencies and environments, we calculated the non-parametric rank correlation Spearman's ρ between allele frequency for each SNP and each environmental variable. For structure-corrected associations between allele frequencies and environments, we used the program Bayenv2 [39]. Bayenv2 is implemented in two steps. In the first step, the variance-covariance matrix is calculated from allelic data. As detailed in [46], a set of non-coding SNPs was used to calculate the variance-covariance matrix from the final run of the MCMC after 100,000 iterations, with the final matrix averaged over three MCMC runs. In the second step, the variance-covariance matrix is used to control for evolutionary history in the calculation of test statistics for each SNP. For each SNP, Bayenv2 outputs a Bayes factor (a value that measures the strength of evidence in favor of a linear relationship between allele frequencies and the environment after population structure is controlled for) and Spearman's ρ (the non-parametric correlation between allele frequencies and environment variables after population structure is controlled for). Previous authors have found that the stability of Bayes factors is sensitive to the number of iterations in the MCMC [91]. We ran three replicate chains of the MCMC with 50,000 iterations, which we found produced stable results. Bayes factors and structure-corrected Spearman's ρ were averaged over these three replicate chains, and these values were used for analysis.
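The uncorrected association step can be sketched as follows (Python with SciPy as a stand-in for the study's R pipeline; population counts, SNP counts, and data here are toy assumptions):

```python
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
n_pops, n_snps, n_env = 50, 8, 3           # toy sizes (assumed)
freq = rng.random((n_pops, n_snps))        # allele frequency: populations x SNPs
env = rng.random((n_pops, n_env))          # environment: populations x variables

# rho[i, j] / pval[i, j]: uncorrected Spearman association of SNP i with variable j
rho = np.empty((n_snps, n_env))
pval = np.empty((n_snps, n_env))
for i in range(n_snps):
    for j in range(n_env):
        rho[i, j], pval[i, j] = spearmanr(freq[:, i], env[:, j])
```

The resulting matrix of associations (SNPs × environments) is the input to the co-association network analysis described below; the structure-corrected statistics from Bayenv2 would fill an analogous matrix.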
Co-association networks
We first organized the associations into a matrix with SNPs in columns, environments in rows, and the specific SNP-environment association in each cell. These data were used to calculate pairwise Euclidean distances between SNPs based on their associations, and this distance matrix was used to cluster SNP loci with Ward's hierarchical clustering using the hclust function in the R package stats [92]. As described in the results, this resulted in four main groups in the data. For each of these main groups, we used undirected graph networks to visualize submodules of SNPs. Nodes (SNPs) were connected by edges if they had a pairwise Euclidean distance less than 0.1 in the distance matrix described above. We found that the results were not very sensitive to this distance threshold. Co-association networks were visualized using the igraph package v 1.0.1 in R [93].
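The clustering and edge-thresholding steps can be illustrated with a small sketch (Python/SciPy as a stand-in for hclust and igraph in R; the association matrix here is a toy example with two planted modules and assumed sizes):

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import pdist, squareform

rng = np.random.default_rng(1)
# Toy association matrix: 12 SNPs (rows) x 5 environments (columns), with two
# planted groups of similar association profiles plus a little noise.
base = np.vstack([np.tile([1.0, 0.0, 0.0, 0.0, 0.0], (6, 1)),
                  np.tile([0.0, 0.0, 0.0, 1.0, 1.0], (6, 1))])
assoc = base + 0.01 * rng.standard_normal(base.shape)

# Ward's hierarchical clustering on pairwise Euclidean distances (cf. hclust)
d = pdist(assoc, metric="euclidean")
groups = fcluster(linkage(d, method="ward"), t=2, criterion="maxclust")

# Undirected co-association network: connect SNPs closer than the 0.1 threshold
dist = squareform(d)
edges = [(i, j) for i in range(len(dist))
         for j in range(i + 1, len(dist)) if dist[i, j] < 0.1]
```

With this construction, edges only form between SNPs whose association profiles are nearly identical across all environments, so the two planted modules emerge as disconnected subgraphs.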
Linkage disequilibrium
Linkage disequilibrium was calculated among pairwise combinations of SNPs within genes. Mean values of Pearson's correlation coefficient squared (r2) were estimated across all SNPs annotated to each pair of individual genes, excluding SNPs genotyped in fewer than 250 individuals (to minimize the contribution of small sample sizes to the calculation of gene-level means).
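A sketch of the gene-level mean r² calculation (Python rather than the study's own scripts; for simplicity the ≥ 250-individual filter is applied to the jointly genotyped individuals of each SNP pair, a slight simplification of the per-SNP filter described above):

```python
import numpy as np

def mean_r2(geno_a, geno_b, min_n=250):
    """Mean squared Pearson correlation (r^2) over all SNP pairs from two genes.

    geno_a, geno_b: (n_individuals x n_snps) arrays of 0/1/2 genotypes, with
    NaN for missing calls. Pairs with fewer than min_n jointly genotyped
    individuals are skipped.
    """
    vals = []
    for i in range(geno_a.shape[1]):
        for j in range(geno_b.shape[1]):
            x, y = geno_a[:, i], geno_b[:, j]
            ok = ~np.isnan(x) & ~np.isnan(y)       # jointly genotyped individuals
            if ok.sum() < min_n:
                continue
            r = np.corrcoef(x[ok], y[ok])[0, 1]
            vals.append(r * r)
    return float(np.mean(vals)) if vals else float("nan")
```

Two genes whose SNPs are in perfect LD yield a mean r² of 1, while unlinked genes yield values near the sampling noise floor.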
Recombination rates
An Affymetrix SNP array was used to genotype 95 full-sib offspring from a single cross of two parents. Individuals with genotype posterior probabilities of > 0.001 were filtered out. This array yielded data for 13,544 SNPs with mapping-informative genotypes. We used the package "onemap" in R with default settings to estimate recombination rates among pairs of loci, retaining all estimates with LOD scores > 3 [94]. This dataset contained 2760 pairs of SNPs that were found together on the same genomic contig, separated by a maximum distance of 13 kb. Of these pairs, 521 were found to have unrealistically high inferred rates of recombination (r > 0.001) and are likely errors. These errors probably occurred as a result of the combined effects of undetected errors in genotype calling, unresolved paralogy in the reference genome that complicates mapping, and differences between the reference loblolly genome used for SNP design and the lodgepole pine genomes. As a result, low recombination rates (r < 0.001) were expected to be relatively accurate, but we do not draw any inferences about high recombination estimates among loci.
Associations with principal components of environments
To compare inference from co-association networks with another multivariate approach, we conducted a principal components analysis of the environments using the function prcomp() in R. We then used Bayenv2 to test associations with the PC axes as described above, using log10(BF) > 2 as the criterion for the significance of a SNP on a PC axis. Note that this criterion is less conservative than that used to identify candidate SNPs for the network analysis (because it did not require the additional criterion of a significant Bonferroni-corrected P value), so it should result in greater overlap between PC candidate SNPs and top candidate SNPs based on univariate associations.
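The PCA step corresponds to a standard singular value decomposition of the environment matrix. A minimal sketch (Python/NumPy as a stand-in for prcomp(); the text does not state whether variables were scaled, so standardization is assumed here, and the data are toy values with one planted correlated pair of variables):

```python
import numpy as np

rng = np.random.default_rng(2)
env = rng.standard_normal((50, 6))                      # populations x variables (toy)
env[:, 1] = env[:, 0] + 0.1 * rng.standard_normal(50)   # a correlated pair of variables

# Center and scale, then SVD (cf. prcomp(env, center=TRUE, scale.=TRUE))
z = (env - env.mean(axis=0)) / env.std(axis=0)
u, s, vt = np.linalg.svd(z, full_matrices=False)
scores = z @ vt.T                          # PC scores per population
loadings = vt.T                            # loading of each variable on each PC
explained = s**2 / np.sum(s**2)            # proportion of variance per axis
```

As in the limitation discussed above, the correlated pair dominates PC1 simply because it covaries the most, regardless of whether that combination drives selection.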
Enrichment of co-expressed genes
The co-expression data used in this study were previously published by [55]. To determine whether adaptation cluster members had similar gene functions, we examined their gene expression patterns in response to seven growth chamber climate treatments using previously published RNAseq data [55]. Expression data were collected on 44 seedlings from a single sampling location, raised under common conditions and then exposed to growth chamber environments that varied in their temperature, moisture, and photoperiod regimes. We used Fisher's exact test to determine whether genes with a significant climate treatment effect were over-represented in each of the four major groups and across all adaptation candidates, relative to the other sequenced and expressed genes. In addition, Yeaman et al. [55] used weighted gene co-expression network analysis (WGCNA) to identify eight clusters of co-regulated genes among the seven climate treatments. We used Fisher's exact test to determine whether these previously identified expression clusters were over-represented in any of the four major groups, relative to the other sequenced and expressed genes.
Galaxy biplots
To give insight into how the species has evolved to inhabit multivariate environments relative to the ancestral state, we visualized the magnitude and direction of associations between the derived allele frequency and environmental variables. Allelic correlations with any pair of environmental variables can be visualized by plotting the value of the non-parametric rank correlation Spearman's ρ of the focal allele with variable 1 against the value with variable 2. Spearman's ρ can be calculated with or without correction for population structure. Note also that the specific location of any particular allele in a galaxy biplot depends on the way alleles are coded. SNP data were coded as 0, 1, or 2 copies of the loblolly reference allele. If the reference allele has positive Spearman's ρ with temperature and precipitation, then the alternate allele has a negative Spearman's ρ with temperature and precipitation. For this reason, the alternate allele at a SNP should be interpreted as a reflection through the origin (such that quadrants 1 and 3 are symmetrical and quadrants 2 and 4 are symmetrical if the reference allele is randomly chosen).
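The reflection-through-the-origin property described above follows directly from the rank correlation: recoding an allele (p → 1 − p) exactly negates both Spearman coordinates. A toy demonstration (Python/SciPy; frequencies and environments are simulated values, not study data):

```python
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(4)
freq = rng.random(30)                           # reference-allele frequency (toy)
temp = freq + 0.1 * rng.standard_normal(30)     # environment tracking freq positively
precip = -freq + 0.1 * rng.standard_normal(30)  # environment tracking freq negatively

# Biplot coordinates for the reference allele and for the alternate allele (1 - p)
point = (spearmanr(freq, temp)[0], spearmanr(freq, precip)[0])
flipped = (spearmanr(1 - freq, temp)[0], spearmanr(1 - freq, precip)[0])
# flipped equals (-point[0], -point[1]): a reflection through the origin,
# which is why quadrants 1/3 and 2/4 are symmetric when alleles are coded randomly
```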
A prediction ellipse was used to visualize the genome-wide pattern of covariance in allelic effects on a galaxy biplot. For two variables, the 2 × 2 variance-covariance matrix of Cov(ρ(f, E1), ρ(f, E2)), where f is the allele frequency and Ex is the environmental variable, has a geometric interpretation that can be used to visualize covariance in allelic effects with ellipses. The covariance matrix defines both the spread (variance) and the orientation (covariance) of the ellipse, while the expected values or averages of each variable (E[E1] and E[E2]) represent the centroid or location of the ellipse in multivariate space. The geometry of the two-dimensional (1 − α) × 100% prediction ellipse on the multivariate normal distribution can then be approximated by
$$ l_j = \sqrt{\lambda_j \, \chi^2_{df=2,\,\alpha}}, $$
where l_j represents the lengths of the major (j = 1) and minor (j = 2) axes of the ellipse, λ_j represents the eigenvalues of the covariance matrix, and χ²_{df=2,α} represents the value of the χ² distribution with two degrees of freedom for the desired α value [95,96,97]. In the results, we plot the 95% prediction ellipse (α = 0.05), corresponding to the volume within which 95% of points should fall assuming the data are multivariate normal, using the function ellipsoidPoints() in the R package cluster [98]. This approach will work when there is a large number of unlinked SNPs in the set being visualized; if used on a candidate set with a large number of linked SNPs and/or a small candidate set with non-random assignment of alleles (i.e., alleles assigned according to a reference), the assumptions of this visualization approach will be violated.
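The axis lengths l_j can be computed numerically from the eigendecomposition of the covariance matrix (Python/SciPy as a stand-in for ellipsoidPoints() in R; the covariance values here are assumed toy numbers, not study estimates):

```python
import numpy as np
from scipy.stats import chi2

# Toy 2x2 covariance of allelic effects, Cov(rho(f, E1), rho(f, E2)) -- assumed values
cov = np.array([[0.04, 0.03],
                [0.03, 0.05]])
eigvals, eigvecs = np.linalg.eigh(cov)   # eigenvalues in ascending order

alpha = 0.05
q = chi2.ppf(1 - alpha, df=2)            # chi^2 quantile with df = 2 (~5.99 for alpha = 0.05)
lengths = np.sqrt(eigvals * q)           # l_j = sqrt(lambda_j * chi^2_{df=2, alpha})
```

The eigenvectors give the orientation of the ellipse and `lengths` its minor and major semi-axes, matching the geometric interpretation described above.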
Visualization of allele frequencies on the landscape
ESRI ArcGIS v10.2.2 was used to visualize candidate SNP frequencies across the landscape. Representative SNPs having the most edges within each sub-network were chosen and plotted against climatic variables representative of those co-association modules. Mean allele frequencies were calculated for each sampled population and plotted. Climate data and 1-km resolution rasters were obtained using ClimateWNA v5.40 [84] and shaded with color gradients scaled to the range of climates across the sampling locations. The climates for each sampling location were also plotted, as some sampling locations were at especially high or low elevations relative to their surrounding landscapes. For clarity, only sampling locations containing at least two sampled individuals were plotted.
Simulations
The simulations used in this study are identical to a subset of those previously published by [62, 63]. Briefly, the simulator uses forward-in-time recurrence equations to model the evolution of independent haploid SNPs on a quasi-continuous square landscape. We modeled three demographic histories that resulted in the same overall neutral FST, but the demographic history determined the distribution of FST values around that mean: isolation by distance (IBD) had the lowest variance, followed by demographic expansion from a single refuge (1R), while demographic expansion from two refugia (2R) had the highest variance. The landscape size was 360 × 360 demes, and migration was determined by a discretized version of a Gaussian dispersal kernel. Carrying capacity per deme differed slightly for each scenario to give the same overall neutral FST = 0.05. IBD was run until equilibrium at 10,000 generations, but 1R and 2R were run for only 1000 generations in order to mimic the expansion of lodgepole pine since the last glacial maximum [99]. All selected loci adapted to a computer-generated landscape with a weak north-south cline and spatial heterogeneity at smaller spatial scales, with strengths of selection varying from weak (s = 0.001) to strong (s = 0.1); see [62, 63] for more details.
The simulations were then expanded in the following way: for each of the 22 environmental variables for lodgepole pine populations, we used interpolation to estimate the value of the variable at the simulated locations. This strategy preserved the correlation structure among the 22 environmental variables. For each of the 22 variables, we calculated the uncorrected rank correlation (Spearman's ρ) between allele frequency and environment. The 23rd computer-generated environment was not included in analysis, as it was meant to represent the hypothetical situation that there is a single unmeasured (and unknown) environmental variable that is the driver of selection. The 23rd environment was correlated from 0 to 0.2 with the other 22 variables.
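The interpolation step can be sketched as follows (Python/SciPy; the text says only "interpolation", so nearest-neighbour interpolation is used here as a stand-in, and the coordinates and environmental surface are toy values):

```python
import numpy as np
from scipy.interpolate import griddata

rng = np.random.default_rng(3)
# Observed sampling locations and one measured environmental variable (toy surface)
obs_xy = rng.uniform(0.0, 1.0, size=(40, 2))
obs_env = obs_xy[:, 0] + 0.5 * obs_xy[:, 1]

# Simulated deme locations: estimate the variable there by interpolation,
# preserving the spatial (and hence inter-variable) correlation structure
sim_xy = rng.uniform(0.1, 0.9, size=(100, 2))
sim_env = griddata(obs_xy, obs_env, sim_xy, method="nearest")
```

Repeating this for each of the 22 variables transfers the real landscape's correlation structure onto the simulated demes, as described above.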
We compared two thresholds for determining which loci were retained for co-association network analysis, keeping loci with either: (i) a P value lower than the Bonferroni correction (0.05/(no. environments * no. simulated loci)) and (ii) a log-10 Bayes factor (BF) > 2 (for at least one of the environmental variables). Using both criteria is more stringent and both were used in the lodgepole pine analysis. In the simulations, however, we found that using both criteria resulted in no false positives in the outlier list (see the "Results" section); therefore we used only the first of these two criteria so that we could understand how false positives may affect interpretation of the co-association network analysis. For a given set of outliers (e.g., only false positives or false positives and true positives), hierarchical clustering and undirected graph networks were built in the same manner as described for the lodgepole pine data.
Abbreviations
LD: Linkage disequilibrium
PC: Principal components
SNP: Single-nucleotide polymorphism
References
Hansen TF. The evolution of genetic architecture. Annu Rev Ecol Evol Syst. 2006;37:123–57.
Orr HA. Adaptation and the cost of complexity. Evolution. 2000;54:13–20.
Wang Z, Liao B-Y, Zhang J. Genomic patterns of pleiotropy and the evolution of complexity. Proc Natl Acad Sci U S A. 2010;107:18034–9.
Aeschbacher S, Bürger R. The effect of linkage on establishment and survival of locally beneficial mutations. Genetics. 2014;197:317–36.
Reeve J, Ortiz-Barrientos D, Engelstädter J. The evolution of recombination rates in finite populations during ecological speciation. Proc Biol Sci. 2016;283. https://doi.org/10.1098/rspb.2016.1243.
Barton NH. Genetic linkage and natural selection. Philos Trans R Soc Lond B Biol Sci. 2010;365:2559–69.
Wagner GP, Zhang J. The pleiotropic structure of the genotype-phenotype map: the evolvability of complex organisms. Nat Rev Genet. 2011;12:204–13.
Paaby AB, Rockman MV. The many faces of pleiotropy. Trends Genet. 2013;29:66–73.
Savolainen O, Lascoux M, Merilä J. Ecological genomics of local adaptation. Nat Rev Genet. 2013;14:807–20.
Slatkin M. Gene flow and selection in a cline. Genetics. 1973;75:733–56.
Slatkin M. Spatial patterns in the distributions of polygenic characters. J Theor Biol. 1978;70:213–28.
Barton NH. Clines in polygenic traits. Genet Res. 1999;74:223–36.
Felsenstein J. The theoretical population genetics of variable selection and migration. Annu Rev Genet. 1976;10:253–80.
Haldane JBS. The theory of a cline. J Genet. 1948;48:277–84.
Haldane JBS. A mathematical theory of natural and artificial selection (Part VI, Isolation). Math Proc Cambridge Philos Soc. 1930;26:220.
Rellstab C, Gugerli F, Eckert AJ, Hancock AM, Holderegger R. A practical guide to environmental association analysis in landscape genomics. Mol Ecol. 2015;24:4348–70.
Hancock AM, Brachi B, Faure N, Horton MW, Jarymowycz LB, Sperone FG, et al. Adaptation to climate across the Arabidopsis thaliana genome. Science. 2011;334:83–6.
Boyle EA, Li YI, Pritchard JK. An expanded view of complex traits: from polygenic to omnigenic. Cell. 2017;169:1177–86.
\begin{document}
\title{Quantum Mechanics as a Classical Theory III:\\
Epistemology} \begin{abstract} The two previous papers developed quantum mechanical formalism from classical mechanics and two additional postulates. In the first paper it was also shown that the uncertainty relations possess no ontological validity and only reflect the formalism's limitations. In this paper, a Realist Interpretation of quantum mechanics based on these results is elaborated and compared to the Copenhagen Interpretation. We demonstrate that von Neumann's proof of the impossibility of a hidden variable theory is not correct, independently of Bell's argumentation. A local hidden variable theory is found for non-relativistic quantum mechanics, which is nothing else than newtonian mechanics itself. We prove that Bell's theorem does not imply non-locality of quantum mechanics, and also demonstrate that Bohm's theory cannot be considered a true hidden variable theory. \end{abstract}
\section{Introduction}
The first two papers developed the fundamental principles of quantum mechanics' mathematical formalism. To do this, we used no more than classical mechanics and two rather natural postulates that do not modify its classical character.
We demonstrated in the first paper that the fundamental equations are those involving the density function. We also demonstrated that the uncertainty relations are not valid for these equations and are, for this reason, an indication of the limitation of the formalism based on equations for the probability amplitudes. The uncertainty relations form the basis of the Copenhagen Interpretation\cite{1,2}, and contesting their ontological character has profound implications for the epistemology of quantum mechanics.
We also constructed a model in which an observing system is introduced into a specific quantum problem. This observing system differs profoundly from those introduced by the diverse measurement theories of quantum mechanics\cite{3}-\cite{9}.
In the present paper, we will develop a Realist Interpretation for quantum mechanics based on the results obtained in the first and second papers (hereafter abbreviated by (I) and (II)).
In the second section, we will quickly review the fundamental ideas of the Copenhagen Interpretation.
In the third section, we will establish a Realist Interpretation for quantum mechanics based on the results of this series' first two papers\cite{10}. Certain points of the Copenhagen interpretation presented in the previous section will also be criticized.
The fourth section will show that von Neumann's demonstration of the impossibility of a hidden variable theory is not correct, independently of Bell's argument\cite{11}, which we prove inadequate. A local hidden variable theory is then found for the non-relativistic problem, which is nothing more than newtonian mechanics.
In the fifth section, we demonstrate that Bell's theorem does not imply non-locality of quantum mechanics.
In the sixth section, we reinterpret Bohm's theory\cite{7,12} and show that it cannot be considered a true hidden variable theory.
In the last section, we make our final conclusions for this series of papers.
\section{The Copenhagen Interpretation}
We will present the main ideas of the Copenhagen Interpretation through axioms\cite{13}. We are not interested in the formalism, but in the interpretation which accompanies it. For this reason, the formal apparatus will not be developed.
Let us first define the meaning of quantum state:
\begin{description} \item[(def1)] The state of a physical system is represented by $\phi $, a function of {\it s} coordinates. This function does not necessarily represent any distribution of objects within space, because the {\it s} variables that index it possess no intuitive association with objects. It is not defined in terms of observables but only as a function within configuration space. These {\it s} variables represent the system's degrees of freedom. \end{description}
The Copenhagen Interpretation's axioms are\cite{13}:
\begin{description} \item[(CAx1)] The function $\phi $, which, in general, can be complex should be square integrable.
\item[(CAx2)] The function $\phi $ should be single valued.
\item[(CAx3)] For every observable $p$ there is a single operator $P$ acting upon the state function. In particular, we obtain Schroedinger's equation (for the amplitudes) by substituting, in the hamiltonian function, the variables $q$ and $p$ by their respective operators.
\item[(CAx4)] The only possible values that can be observed when a measurement is made over an observable $p$ are the eigenvalues of the following equation \begin{equation} \label{(1)}P\psi _\lambda =p_\lambda \psi _\lambda , \end{equation} where $\psi _\lambda $ satisfies axioms (CAx1) and (CAx2).
\item[(CAx5)] When a system is in a certain state $\phi $, the expected value of a series of measurements of the observable $p$ is \begin{equation} \label{(2)}\overline{p}=\frac{\int \phi ^{*}P\phi d\tau }{\int \phi ^{*}\phi d\tau }, \end{equation} where $P$ is the operator which corresponds to $p$. \end{description}
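Axioms (CAx4) and (CAx5) can be illustrated numerically in a finite-dimensional setting. The following sketch (the Hermitian matrix and the state vector are arbitrary illustrative choices, not taken from the formalism above) computes the possible measurement outcomes as eigenvalues and the expected value through formula (\ref{(2)}):

```python
import numpy as np

# A Hermitian matrix standing in for the operator P of some observable.
P = np.array([[0.0, 1.0],
              [1.0, 0.0]])

# (CAx4): the only values that can be observed are the eigenvalues of P.
eigenvalues, _ = np.linalg.eigh(P)       # -> [-1., 1.]

# (CAx5): expected value in the (unnormalized) state phi,
# <p> = (phi* P phi) / (phi* phi).
phi = np.array([2.0, 1.0])               # arbitrary, deliberately unnormalized
p_mean = (phi.conj() @ P @ phi) / (phi.conj() @ phi)

print(eigenvalues)   # [-1.  1.]
print(p_mean)        # 0.8
```

Note that (CAx5) does not require a normalized state: the denominator in (\ref{(2)}) takes care of the normalization.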
All of quantum mechanical formalism can be obtained from these five axioms. We will not do this here. Nevertheless, one particular result which interests us will be presented, without demonstration, as the following theorem:
\begin{description} \item[(CT1)] If $Q$ and $P$ are canonically conjugated operators, i.e., \begin{equation} \label{(3)}\left[ Q,P\right] =i\hbar , \end{equation} then the dispersions associated with the simultaneous measurement of their eigenvalues satisfy $\Delta Q\,\Delta P\geq \hbar /2$. \end{description}
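As a numerical sanity check of the minimum dispersion product implied by (CT1) (a sketch with $\hbar$ set to 1 by assumption; grid and width are arbitrary choices), a real Gaussian amplitude attains $\Delta Q\,\Delta P=\hbar /2$:

```python
import numpy as np

# Sketch (hbar = 1 by assumption): a Gaussian amplitude saturates the
# minimum dispersion product dQ * dP = hbar / 2.
hbar = 1.0
sigma = 0.7                                   # arbitrary width
x = np.linspace(-12.0, 12.0, 20001)
dx = x[1] - x[0]

psi = np.exp(-x**2 / (4.0 * sigma**2))
psi /= np.sqrt(np.sum(psi**2) * dx)           # normalize on the grid

# Position dispersion (the mean position is 0 by symmetry).
dq = np.sqrt(np.sum(x**2 * psi**2) * dx)

# Momentum dispersion: for real psi, <P> = 0 and <P^2> = hbar^2 * int |psi'|^2.
dpsi = np.gradient(psi, dx)
dp = hbar * np.sqrt(np.sum(dpsi**2) * dx)

print(round(dq * dp, 4))   # 0.5
```

Other amplitudes give a strictly larger product; the Gaussian is the extremal case.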
Even though this result is a theorem, its interpretation is fundamental for the Copenhagen Interpretation of quantum mechanics and it is called the Uncertainty Principle. Thus, according to the Copenhagen Interpretation\cite{14}, we have the following interpretative postulates:
\begin{description} \item[(CP1)] In a given experiment, it is not possible to measure with absolute precision both the position and the momentum of a given particle. The minimum dispersion in such a measurement is given by relation (\ref{(3)}). \end{description}
Because it adequately describes nature's behavior, this postulate possesses ontological {\it status}. Nevertheless, this description is not objective, but subjective, and entails:
\begin{description} \item[(CP2)] Any physical system's attributes, for example, its trajectories, only come into existence when observed. \end{description}
and more,
\begin{description} \item[(CP3)] An inevitable and uncontrollable interaction occurs between the measured object and the observer. \end{description}
The dispersion relation (\ref{(3)}) served as the mathematical element for Bohr to expose his ideas about the complementarity which he had been developing since his first contact with the dual wave-particle behavior of some experiments. Once it is assumed that there can be no space-time coordination, causality and the notion of a complete classical description become obsolete\cite{15}. Thus
\begin{description} \item[(CP4)] Combining a classical observational system with a quantum system, one can only measure complementary values and, to express them in classical terms, the system must also be described in terms of classical figures which are also complementary. \end{description}
Finally, to establish the state function's referent, it is postulated that
\begin{description} \item[(CP5)] The wave function expresses our knowledge of events. \end{description}
In relation to the influence of the act of observing on measurement, we can say that
\begin{description} \item[(CP6)] An unforeseeable and discontinuous reduction of the state vector, formally represented by a projection operation occurs during the act of measurement. \end{description}
These are, shortly put, the main ideas which form the Copenhagen Interpretation. In the next section we will criticize these ideas through the results obtained in this series of papers.
\section{The Realist Interpretation}
In this section we will demonstrate how a realist view can be made compatible with quantum mechanics' formalism as developed in the two previous papers.
First, it is important to note that the observer, extensively discussed in the various quantum mechanical interpretations, does not play any role in its formalism. When any problem is to be solved, such as finding the energy levels of an atomic system for example, the possible interactions between this system and the external world are never formally taken into account. This can be seen in the present formalism through the derivations of the quantum equations from Liouville's equation for a closed system. The introduction of the observer into the orthodox epistemology is not only {\it ad hoc}, it is also incompatible with the main postulate of quantum mechanics, which is Schroedinger's equation, since a discontinuous reduction of the state vector, which does not satisfy this equation\cite{8}, must be considered when the observer is taken into account.
Here we wish to differentiate statements such as ''the value of the property $P$ of the physical object $x$ is equal to $y$'' from others such as: ''the observer $z$ found the value $y$ for the property $P$ of the physical object $x$ using the measurement technique $t$ and the sequence of operations $o$''. In fact, while the first proposition can be mathematically represented as $P(x)=y$, the second one should be given as $P^{\prime }\left( x,z,t,o\right) =y$, so that $P$ and $P^{\prime }$ possess radically different referents\cite{16}.
We can give an example of these concepts using the treatment given to an external observer system as applied to Young's interference experiment in (I). In the usual treatment given through Schroedinger's equation, we have a distribution function $F\left( {\bf x}_1,{\bf p}_1;t\right) $ which depends only on the variables associated with the system's internal degrees of freedom. In the treatment given in (I), we should consider not only $F\left( {\bf x}_1,{\bf p}_1;t\right) $, but also the observing system's probability density function $F\left( {\bf x}_2,{\bf p}_2;t\right) $. Any variation of the pragmatic parameters which define this distribution can alter the probability distribution which we intend to calculate.
In the measurement theory associated to the Copenhagen Interpretation, the observer's conscience must be postulated to avoid an infinite regression of reductions of the state vector\cite{3,4,5,8}. This infinite regression can, nevertheless, be easily understood from the present theory's point of view. As was observed by Everett\cite{6}, the question of the infinite regression of reductions of the state vector is intimately linked to considerations of open and closed systems. When studying Young's interference experiment, we considered {\it system2} as an external observer and took its interaction with {\it system1} into account through the concept of scattering cross-section. If {\it system1} and {\it system2} are to be considered as parts of a greater system, then their interaction will be taken into account exactly and the probability distribution of the whole system will be $F\left( 1,2\right) $, which is, in general, very different from the uncoupled distributions $F_1\left( 1\right) $ and $F_2\left( 2\right) $ used to represent the open system.
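The difference between the coupled description $F(1,2)$ and the uncoupled one can be made concrete with a toy discrete distribution (the numbers below are arbitrary illustrative choices): a correlated joint distribution is not recovered from the product of its marginals.

```python
import numpy as np

# Toy discrete illustration: a coupled distribution F(1,2) generally
# differs from the product of its marginals F1(1) * F2(2).
F = np.array([[0.4, 0.1],
              [0.1, 0.4]])          # joint distribution of two subsystems

F1 = F.sum(axis=1)                  # marginal of system 1 -> [0.5, 0.5]
F2 = F.sum(axis=0)                  # marginal of system 2 -> [0.5, 0.5]
product = np.outer(F1, F2)          # the uncoupled (open-system) description

print(np.allclose(F, product))      # False: the interaction correlates them
```

Only when the off-diagonal weights equal the diagonal ones (no correlation) do the two descriptions coincide.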
In fact, the equation obtained in (I) for the probability density function of this experiment can no longer be transformed, through the Wigner-Moyal Infinitesimal Transformation, into the Schroedinger equation for the probability amplitude. Thus, the wave-like behavior of the observed system's {\it ensemble} statistics no longer needs to manifest itself. It will manifest itself depending on the intensity of the perturbation introduced by the observer system.
Of course, this is not an uncontrollable interaction between the observed system and the measuring apparatus. In fact, this idea emerged from Bohr's quantum postulate, which saw the finite character of Planck's constant, interpreted by him as associated with the minimal quantity of energy a system can emit or absorb, as representing the impossibility of analysis. Note that, as was shown in (I), Planck's constant cannot be used as a characteristic entity of a quantum system since that which is conventionally called the classical limit is independent of it.
Schroedinger's equation for the probability amplitudes represents systems in equilibrium. The absorption or emission of energy in these systems is done in such a manner that the system passes from one equilibrium state to another. It is this that we call the quantum jump. Of course, the non-equilibrium states are not subjected to such considerations (two-photon absorption is one example of such a situation). Observing the equation in which the external observing system is introduced through a Boltzmann equation, it can be seen that the quantum equation obtained {\it does not}, in general, have eigenvalues. Therefore, it is neither associated with quantized values of energy nor with equilibrium situations: it does not represent quantum jumps. {\it Natura non facit saltus}. It should also be mentioned that, in the second paper, it was shown that strong gravitational fields will, in general, preclude equilibrium situations, and no quantization is then expected. Thus, quantization cannot be considered a fundamental manifestation of Nature.
We note also that in this interpretation the problem of considering the frontier between the classical system, usually the external observer, and the observed quantum system, does not occur. In the same manner, there is a symmetry in treatment, because the observing system can be considered an observed system and {\it vice-versa}, this being an important property if we desire to make relativistic generalizations of our arguments.
It becomes clear from the discussion above that a general theory of measurement, as was proposed by von Neumann\cite{3}, has little chance of success: to admit such a theory, supposing that the description of the observer should appear in the equations, we must also admit that there is a way to unify, under the same formal apparatus, the innumerable existing observational techniques.
Having clarified the issue of the observer in quantum mechanics, we pass to a criticism of the postulates presented in the previous section:
\begin{description} \item[(CP1)] This postulate, as has been observed, has no ontological validity. Instead of determining a limitation to the classical concepts of space-time coordination, it determines a limitation of the quantum formalism given by Schroedinger's Second Equation for the probability amplitudes.
\item[(CP2)] Without the uncertainty relations, this postulate suggests a confusion between the concepts of having a known value and having a defined value. Unlike what is stated by the Copenhagen Interpretation, the variables which index the density function do refer to the state of the {\it ensemble}'s constituent components. Even if we do not know these values in a specific experimental arrangement, and for this reason treat them statistically, we assume that these values are well defined. In fact, in the limit of dispersion free {\it ensembles}, we reobtain newtonian trajectories.
\item[(CP3)] We have seen that this postulate is associated to the fact that the Schroe\-dinger equation for the probability amplitudes describes systems in an equilibrium situation and is therefore incapable, in this form, of giving information about the inter-phenomenon occurrences. In any case, when we introduce the external observing system, we perceive that its interaction with the observed system is classical and controllable (deterministic), even if it is considered unknown and treated statistically.
\item[(CP4)] Once we have denied the ontological validity of the dispersion relations and assumed the figure of space-time coordination, we can no longer accept this postulate.
\item[(CP5)] There are no external psycho-physical observers. The density function supplies an objective description of the physical world and does not possess any relation with mental activities.
\item[(CP6)] See the criticism to postulate (CP3) and the discussion before it. \end{description}
In relation to the wave-particle duality, it has been demonstrated that all the results in which the systems present wave-like behavior can be explained, within the particle picture, through the quantized interaction which occurs between them\cite{17,18}.
We are, at this point, ready to accept the definitions of reality and completeness given by Einstein, Podolsky and Rosen\cite{19}. Yet there is still one problem. It has been recently proven that realist theories, called hidden variable theories, should possess non-local behavior. Even if this behavior can be accepted by a realist theory, it contradicts the relativistic theories from which we derived the quantum formalism presented in (II) (remember that, since there is no conscious observer in this theory, no distinction is made between signals and information, as is usually done by those who want to suppress the above-cited contradiction). Such behavior is, therefore, inconceivable within such formalism.
We will demonstrate, in the next sections, that quantum mechanics is a local theory.
\section{Hidden Variables}
In the previous section it was seen that, in view of the results obtained in (I) and (II), quantum mechanics admits an epistemology completely different from the one accepted as orthodox since the Solvay Congress of 1927. It has also been shown that this theory, of statistical character, is built upon a totally deterministic and local theory, which is no more than newtonian mechanics (in a non-relativistic approximation). In this manner, newtonian mechanics can be considered {\it the} hidden variable theory of quantum mechanics.
If this is true, we should demonstrate that von Neumann's proof\cite{3} on the impossibility of such a theory is mistaken.
In this section and the following, we will demonstrate that newtonian mechanics can be considered the hidden variable theory for quantum mechanics and that von Neumann's argument about the impossibility of a hidden variable theory is incorrect, independently of Bell's argumentation\cite{11}.
Von Neumann's axioms are
\begin{description} \item[(A1)] If an observable is represented by the operator $R$, then a function $f$ of this observable is represented by $f(R)$.
\item[(A2)] The sum of several observables $R,S,\ldots$ is represented by the operator $R+S+\ldots$, whether they commute or not.
\item[(A3)] The correspondence between operators and observables is one to one.
\item[(A4)] If the observable $R$ is non-negative, then its mean value $\left\langle R\right\rangle$ is also non-negative.
\item[(A5)] For arbitrary observables $R,S,\ldots$ and arbitrary real numbers $a,b,\ldots$ we have \begin{equation} \label{(4)}\left\langle aR+bS+\ldots\right\rangle =a\left\langle R\right\rangle +b\left\langle S\right\rangle +\ldots, \end{equation} for all the possible {\it ensembles} for which the mean values can be calculated. \end{description}
{}From these axioms, von Neumann obtains the density operator $\rho$ by construction, together with its properties. Among these properties is its use for the calculation of the observables' mean values \begin{equation} \label{(5)}Tr\left( \rho R\right) =\left\langle R\right\rangle . \end{equation}
Yet, von Neumann argues that, for any hidden variable theory, we should have dispersion free states for which \begin{equation} \label{(6)}\left\langle R^2\right\rangle -\left\langle R\right\rangle ^2=0. \end{equation}
Using the result (\ref{(5)}) in (\ref{(6)}) for the observable \begin{equation}
\label{(7)}R=\left| \phi \right\rangle \left\langle \phi \right| , \end{equation} we reach \begin{equation}
\label{(8)}\left\langle \phi \right| \rho \left| \phi \right\rangle
=\left\langle \phi \right| \rho \left| \phi \right\rangle ^2, \end{equation}
for every amplitude $\left| \phi \right\rangle $.
In this case, von Neumann concludes that $\rho =0$ or $\rho =1$. The first hypothesis has no physical meaning and the second does not imply a dispersion free state for vector spaces with more than one dimension. In fact, in the case of a space of dimension $d$ we get \begin{equation} \label{(9)}Tr\left( \rho \right) =Tr\left( 1\right) =d. \end{equation} Thus, the expression on the left of (\ref{(6)}) becomes \begin{equation} \label{(10)}\left\langle R^2\right\rangle -2\left\langle R\right\rangle ^2+\left\langle R\right\rangle ^2\left\langle 1\right\rangle , \end{equation} which is not equal to zero if $d\geq 2$. Von Neumann thus concludes that it is not possible to construct a hidden variable theory compatible with quantum mechanics.
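The step can be reproduced numerically. The sketch below (our illustration, not part of von Neumann's text) computes expression (\ref{(10)}) with mean values taken as $Tr(\rho R)$, for $\rho =1$ and $R=\left| \phi \right\rangle \left\langle \phi \right|$; the result is $d-1$:

```python
import numpy as np

# Our numerical illustration of the argument: take rho = 1 in dimension d
# and R = |phi><phi| for a normalized |phi>, with mean values computed via
# Tr(rho R) as in (5).  Expression (10) then equals d - 1, which is
# nonzero for d >= 2.
def expression_10(d, seed=0):
    rng = np.random.default_rng(seed)
    phi = rng.normal(size=d) + 1j * rng.normal(size=d)
    phi /= np.linalg.norm(phi)
    R = np.outer(phi, phi.conj())          # the projector of (7)
    rho = np.eye(d)                        # the candidate rho = 1
    mean = lambda M: np.trace(rho @ M).real
    return mean(R @ R) - 2 * mean(R) ** 2 + mean(R) ** 2 * mean(np.eye(d))

print(expression_10(3))  # approximately 2 (= d - 1)
```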
With the advent, in 1952, of a supposed hidden variable theory\cite{12} totally compatible with quantum mechanics, it became necessary to show that von Neumann's demonstration contained some mistake. To do this, Bell argued that the fifth axiom was responsible for the inconsistency; his argument was the following\cite{20}:
\begin{description} \item[B-arg.:] ``At first sight the required additivity of expectation values seems very reasonable, and it is rather the non-additivity of allowed values (eigenvalues) which requires explanation. Of course the explanation is well known: A measurement of a sum of non commuting observables cannot be made by combining trivially the results of separate observations on the two terms - it requires a quite distinct experiment. For example the measurement of $\sigma _x$ for a magnetic particle might be made with a suitably oriented Stern-Gerlach magnet. The measurement of $\sigma _y$ would require a different orientation, and of $\left( \sigma _x+\sigma _y\right)$ a third and different orientation. But this explanation of the non-additivity of allowed values also established the non triviality of the additivity of expectation values. The latter is quite a peculiar property of quantum mechanical states, not to be expected {\it a priori}. There is no reason to demand it individually of the hypothetical dispersion-free states, whose function it is to reproduce the {\it measurable} peculiarities of quantum mechanics {\it when averaged over}.'' \end{description}
Obviously this argument cannot be accepted by the present theory. In the present theory, measurements in quantum mechanics are done exactly as in classical statistical mechanics and therefore possess the same characteristics. The argument says, beyond this, that the non-triviality of the measurements is due to the non-commutativity of the observables being measured. Since we have proven that this non-commutativity has no ontological validity, we should reject this argument.
It can, nevertheless, be shown that von Neumann's demonstration is incorrect. Let us now pass on to this demonstration:
The {\it ensembles'} state is determined, in phase space, by the density functions $F\left( {\bf x},{\bf p};t\right) $. For a realist theory, a dispersion free {\it ensemble} constituted by $N$ particle systems is represented by the product \begin{equation} \label{(11)}F\left( {\bf x}_1..{\bf x}_N;{\bf p}_1..{\bf p}_N;t\right) =\prod_{i=1}^N\delta \left( {\bf x}_i-{\bf x}_i^0\left( t\right) \right) \delta \left( {\bf p}_i-{\bf p}_i^0\left( t\right) \right) , \end{equation} where each pair of Dirac's delta functions determines the trajectory - deterministic, causal and given by newtonian mechanics - of one of the components of the {\it ensemble's} constituent systems.
Using the Wigner-Moyal Infinitesimal Transformation we obtain the density function \begin{equation} \label{(12)}\rho \left( {\bf x}_1,\Delta {\bf x}_1;..;{\bf x}_N,\Delta {\bf x }_N;t\right) =\prod_{i=1}^N\delta \left( {\bf x}_i-{\bf x}_i^0\left( t\right) \right) \exp \left[ \frac i\hbar {\bf p}_i^0\left( t\right) \cdot \Delta {\bf x}_i\right] , \end{equation} where $\Delta x$ is used in order not to confuse it with Dirac's delta functions. Taking the limit $\Delta x\rightarrow 0$, we obtain the density \begin{equation} \label{(13)}\rho \left( {\bf x}_1{\bf ,..,x}_N;t\right) =\prod_{i=1}^N\delta \left( {\bf x}_i-{\bf x}_i^0\left( t\right) \right) , \end{equation} as expected. Integrating this expression we find \begin{equation} \label{(14)}\int \rho \left( {\bf x}_1{\bf ,..,x}_N;t\right) d{\bf x}_1{\bf ..x}_N=1, \end{equation} which is in clear contradiction with expression (\ref{(9)}). In this manner we always have expression (\ref{(10)}) equal to zero. It is important to note that the operations of taking the limit in (\ref{(13)}) and integrating in (\ref{(14)}) are equivalent to taking the trace $Tr\left( \rho \right) $.
We can therefore say that, from expression (\ref{(9)}) and for dispersion free states, we cannot conclude, as von Neumann did, that $\rho =0$ or $\rho =1$.
Note that we demonstrated the incorrectness of von Neumann's theorem above and simultaneously showed that newtonian mechanics may be the hidden variable theory behind quantum mechanics. Yet this theory is local. Bell, extending the argument above, proved through a theorem that every hidden variable theory should be non-local. We should, therefore, analyze his theorem.
\section{Bell's Theorem}
Consider two meters measuring two particles which are the products of a physical system's dissociation. The results given by these meters are represented by $A\left( {\bf a},\lambda \right) $ and $B\left( {\bf b} ,\lambda \right) $, where ${\bf a}$ and ${\bf b}$ are the meters' orientations and $\lambda $ is a set of hidden variables with a probability density $\rho \left( \lambda \right) $, which determine the quantum state of each of the {\it ensemble's} component systems. Writing the results in this manner we are assuming the locality thesis, since the values measured by one meter, {\it A} for example, do not depend on the other's configuration (in this case the orientation ${\bf b}$).
We now ask if the correlation \begin{equation} \label{(15)}P\left( {\bf a},{\bf b}\right) =\int \rho \left( \lambda \right) A\left( {\bf a},\lambda \right) B\left( {\bf b},\lambda \right) d\lambda , \end{equation} where \begin{equation} \label{(16)}\int \rho \left( \lambda \right) d\lambda =1, \end{equation} can be equal to the value obtained through the quantum mechanical calculation. Suppose, for generality, that \begin{equation}
\label{(17)}\left| A\left( {\bf a},\lambda \right) \right| \leq 1\quad
;\quad \left| B\left( {\bf b},\lambda \right) \right| \leq 1\quad ;\quad
\left| P\left( {\bf a},{\bf b}\right) \right| \leq 1, \end{equation} We obtain, after some algebra that is independent of quantum mechanical considerations, the following inequality \begin{equation}
\label{(18)}\left| P\left( {\bf a},{\bf b}\right) -P\left( {\bf a},{\bf b}
^{\prime }\right) \right| +\left| P\left( {\bf a}^{\prime },{\bf b}^{\prime
}\right) +P\left( {\bf a^{\prime }},{\bf b}\right) \right| \leq 2, \end{equation} which is called Bell's inequality and should be obeyed by predictions of local theories.
{}From here Bell shows that the quantum correlation of spins does not obey this inequality. Bell's theorem thus states that no local hidden variable theory can reproduce quantum mechanics' results. His argumentation, based on the experiment proposed by Bohm and Aharonov\cite{21}, is the following:
Take an {\it ensemble} of systems initially in singlet form. The probability amplitude associated to this {\it ensemble} is given by \begin{equation}
\label{(19)}\left| \Psi _S\right\rangle =\frac{\left| +\right\rangle \left|
-\right\rangle -\left| -\right\rangle \left| +\right\rangle }{\sqrt{2}}. \end{equation}
At a given moment, this system dissociates itself into {\it particle1} and {\it particle2} which are measured by {\it meter1} in direction ${\bf a}$ and {\it meter2} in direction ${\bf b}$. In this case $$
\left\langle \Psi _S\right| \sigma _a\sigma _b\left| \Psi _S\right\rangle
=\frac 12\cdot \left[ \left\langle +\right| \sigma _a\left| +\right\rangle
\left\langle +\right| \sigma _b\left| +\right\rangle -\left\langle +\right|
\sigma _a\left| -\right\rangle \left\langle -\right| \sigma _b\left| +\right\rangle -\right. $$ \begin{equation}
\label{(20)}\cdot \left. -\left\langle -\right| \sigma _a\left|
+\right\rangle \left\langle +\right| \sigma _b\left| -\right\rangle
+\left\langle -\right| \sigma _a\left| -\right\rangle \left\langle +\right|
\sigma _b\left| +\right\rangle \right] . \end{equation}
Now placing ${\bf a}$ in direction ${\bf z}$, we obtain \begin{equation}
\label{(21)}\left\langle \Psi _S\right| \sigma _a\sigma _b\left| \Psi _S\right\rangle =-\cos \theta _{ab}, \end{equation} where $\theta _{ab}$ is the angle between ${\bf a}$ and ${\bf b}$. This result violates Bell's inequality when the arrangement \begin{equation} \label{(22)}\angle {\bf ab}^{\prime }=2\theta \quad ;\quad \angle {\bf ba} ^{\prime }=0\quad ;\quad \angle {\bf bb}^{\prime }=\angle {\bf aa}^{\prime }=\theta , \end{equation} is made, along with the rotational symmetry of the calculation made in (\ref{(21)}). Bell is then in a position to interpret this result as saying that quantum mechanics has a non-local character.
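The violation can be made concrete numerically. The following sketch (our illustration; taking the angle between ${\bf a}$ and ${\bf b}$ to be $\theta$, consistently with (\ref{(25)}), is an assumption) evaluates Bell's combination (\ref{(18)}) with the quantum correlation $-\cos \theta _{ab}$ of (\ref{(21)}) under the arrangement (\ref{(22)}):

```python
import numpy as np

# Our numerical sketch of the violation: with the quantum correlation
# P = -cos(angle) of (21) and the arrangement (22), Bell's combination
# |P(a,b) - P(a,b')| + |P(a',b') + P(a',b)| from (18) exceeds 2.
def bell_lhs(theta):
    P = lambda angle: -np.cos(angle)
    # angle(a,b) = theta, angle(a,b') = 2*theta,
    # angle(a',b') = theta, angle(a',b) = 0
    return abs(P(theta) - P(2 * theta)) + abs(P(theta) + P(0.0))

max_violation = max(bell_lhs(t) for t in np.linspace(0, np.pi, 2001))
print(max_violation)  # about 2.5, attained near theta = pi/3
```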
Let us now suppose that the state of the {\it ensemble} that is to be measured is prepared in such a way that only one of its component systems is measured at a time. To represent this system's state after the separation we have \begin{equation}
\label{(23)}\left| \Psi \right\rangle =\left| +\right\rangle \left|
-\right\rangle \quad or\quad \left| \Psi \right\rangle =\left|
-\right\rangle \left| +\right\rangle . \end{equation} This must be so since, according to Born's statistical interpretation, each system has probability one half of being in exactly one of the states mentioned above (in fact this question is related to whether the state vector represents the state of {\it one system} - Schroedinger's dead-and-alive cat - or an {\it ensemble} of states; we are clearly opting for the former interpretation). In this case the correlation is given by \begin{equation}
\label{(24)}\left\langle \Psi _S\right| \sigma _a\sigma _b\left| \Psi _S\right\rangle =\frac 12\left[ \left\langle +\right| \sigma _a\left|
+\right\rangle \left\langle +\right| \sigma _b\left| +\right\rangle
+\left\langle -\right| \sigma _a\left| -\right\rangle \left\langle +\right|
\sigma _b\left| +\right\rangle \right] . \end{equation}
For this correlation and for the configuration (\ref{(22)}), we obtain the expressions $$ P\left( {\bf a},{\bf b}\right) =-\cos \theta \quad ;\quad P\left( {\bf a^{\prime }},{\bf b}\right) =-\cos^2\theta $$ \begin{equation} \label{(25)}P\left( {\bf a},{\bf b^{\prime }}\right) =-\cos 2\theta \quad ;\quad P\left( {\bf a^{\prime }},{\bf b}^{\prime }\right) =-\cos \theta \cos 2\theta \end{equation} which, when substituted in Bell's inequality, give us the expression \begin{equation}
\label{(26)}\left| \cos 2\theta -\cos \theta \right| +\left| \cos \theta
\right| \left| \cos 2\theta +\cos \theta \right| \leq 2 \end{equation} which, it can be shown, is always satisfied.
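This can also be checked numerically; the following scan (ours, an illustration rather than a proof) confirms that the left-hand side of (\ref{(26)}) never exceeds $2$:

```python
import numpy as np

# A numerical scan (an illustration, not a proof) of the inequality just
# obtained: |cos 2t - cos t| + |cos t| |cos 2t + cos t| stays <= 2,
# the bound being attained at t = pi.
def lhs(t):
    return (abs(np.cos(2 * t) - np.cos(t))
            + abs(np.cos(t)) * abs(np.cos(2 * t) + np.cos(t)))

peak = max(lhs(t) for t in np.linspace(0.0, 2 * np.pi, 100001))
print(peak)  # never exceeds 2
```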
It becomes clear from these two treatments that the difference is caused by the fact that we have not considered, in (\ref{(24)}), the terms \begin{equation}
\label{(27)}\left\langle +\right| \sigma _a\left| -\right\rangle
\left\langle -\right| \sigma _b\left| +\right\rangle \quad ;\quad
\left\langle -\right| \sigma _a\left| +\right\rangle \left\langle +\right|
\sigma _b\left| -\right\rangle \end{equation} which represent interference effects acting upon only one system.
On the other hand, if we perform an experiment where {\it many systems} are measured simultaneously, then we expect the terms (\ref{(27)}) to appear, even if they represent a correlation between {\it different systems}. This correlation, evidently, cannot be used to probe the non-local character of a theory.
The discussion above shows that quantum mechanics is a local theory because it satisfies Bell's inequality (in fact it remains to be verified, by experiment, whether the statistical interpretation adopted for the state vector is correct).
Many experiments have been performed to demonstrate the violation of Bell's ine\-qua\-li\-ty\cite{22}-\cite{30}; nevertheless, all these experiments, as far as we could see, perform measurements over various systems at a time, yielding results which agree perfectly well with quantum mechanics' predictions, as expected\cite{25}. We propose here that experiments be made in which the systems are measured {\it one by one} in order to confirm result (\ref{(26)}) and validate the proposed statistical interpretation of the state vector. Some of these experiments have already been done. Indeed, experiments on Interrupted Fluorescence\cite{31}-\cite{34} show that the hypothesis made in expression (\ref{(23)}) about the correct appearance of the probability amplitudes for a single system is adequate.
\section{Bohm's Theory}
Bohm's theory has been considered an authentic hidden variable theory ever since it was published in 1952. It was this theory that resurrected the discussion about the possibility of hidden variable theories.
In this section we will give an argument to show that this theory may not be considered a true hidden variable theory. To this end, we present it briefly.
{}From Schroedinger's second equation, and writing the probability amplitude as \begin{equation} \label{(28)}\Psi =R\left( x\right) \exp \left[ iS\left( x\right) /\hbar \right] , \end{equation} where $R\left( x\right) $ and $S\left( x\right) $ are real functions, Bohm obtains, equating the real and imaginary terms to zero, the following equations \begin{equation} \label{(29)}\frac{\partial S}{\partial t}+\frac{\left( \nabla S\right) ^2}{2m}+V+Q=0, \end{equation} \begin{equation} \label{(30)}\frac{\partial P}{\partial t}+\nabla \left( P\frac{\nabla S}{m}\right) =0, \end{equation} where $V\left( x\right) $ is the classical potential (we have done this in reverse order in (I) and (II)). He calls $Q\left( x\right) $ the quantum potential, defined as \begin{equation} \label{(31)}Q=-\frac{\hbar ^2}{2mR}\nabla ^2R \end{equation} and \begin{equation} \label{(32)}P\left( x;t\right) =R\left( x;t\right) ^2=\Psi ^{*}\left( x;t\right) \Psi \left( x;t\right) . \end{equation}
Using the Hamilton-Jacobi formalism, the equation \begin{equation} \label{(33)}m\frac{d^2x}{dt^2}=-\nabla \left( V+Q\right) , \end{equation} is obtained, subject to the initial condition \begin{equation} \label{(34)}p=-\nabla S. \end{equation}
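As a consistency check on the quantum potential (\ref{(31)}) (our example, not Bohm's), one can verify symbolically that for the harmonic oscillator ground-state amplitude the sum $V+Q$ is constant, so that (\ref{(33)}) yields no net force for this stationary state:

```python
import sympy as sp

# Our consistency check on (31): for the harmonic oscillator ground-state
# amplitude R = exp(-m w x^2 / (2 hbar)), the sum V + Q is the constant
# hbar w / 2, so (33) gives force-free motion for this stationary state.
x, m, w, hbar = sp.symbols('x m w hbar', positive=True)
R = sp.exp(-m * w * x**2 / (2 * hbar))
Q = -hbar**2 / (2 * m * R) * sp.diff(R, x, 2)   # eq. (31)
V = m * w**2 * x**2 / 2                          # harmonic potential
total = sp.simplify(V + Q)
print(total)  # hbar*w/2
```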
Now assuming that $x$ represents a single particle's coordinate and $p$ its momentum, a spatio-temporal description similar to Newton's is obtained.
Yet this association of meanings cannot be made within the realm of the present theory. In fact, the starting point was an equation that presents dispersion relations according to the uncertainty principle; this dispersion is included in the probability amplitude and shows up in the constituent terms of (\ref{(28)}) and in equations (\ref{(29)}) and (\ref{(30)}). Moreover, in the first article of this series it was demonstrated that Schroedinger's second equation is not satisfied for such {\it ensembles}; for this reason equations (\ref{(29)}) and (\ref{(30)}) cannot be associated with these {\it ensembles}. It is symptomatic that Bohm's theory cannot be obtained from the equation for the density function, which is dispersion free\cite{35}.
As was said in the first article, the equations satisfied by the individual constituents of the systems are Newton's equations themselves. The equations above are no more than a representation of the {\it ensemble's} behavior. The analysis of the double slit problem according to this theory\cite{36} demonstrates this behavior clearly; it is also important to note that the trajectories cannot be considered the real trajectories of the systems' components, since we are using Schroedinger's second equation for the probability amplitudes, which is adequate only to describe the passage from one equilibrium situation to another. According to equation (\ref{(34)}), one cannot determine exactly the initial conditions of any particular trajectory of the {\it ensemble}, therefore leaving at least one hidden variable which the formalism is not capable of revealing.
The fact that it is not possible to find a well defined physical source for the quantum potential must also be stressed.
In this manner one concludes that Bohm's theory is not a hidden variable theory and that the quantum potential should be interpreted as no more than a fictitious ``potential'' representing a statistical field associated with a particular problem's specific configuration.
Its non-local character is easily explained once we have interpreted the potential $Q\left( x\right) $ as a statistical potential. For Bohm's theory applied to an {\it ensemble} of two-particle systems, we have the following equations \begin{equation} \label{(35)}\frac{d{\bf X}_1}{dt}=\rho \left( {\bf X}_1,{\bf X}_2\right) ^{-1}Im\sum_{ij}\Psi _{ij}^{*}\left( {\bf X}_1,{\bf X}_2\right) \frac \partial {\partial {\bf X}_1}\Psi _{ij}\left( {\bf X}_1,{\bf X}_2\right) , \end{equation} \begin{equation} \label{(36)}\frac{d{\bf X}_2}{dt}=\rho \left( {\bf X}_1,{\bf X}_2\right) ^{-1}Im\sum_{ij}\Psi _{ij}^{*}\left( {\bf X}_1,{\bf X}_2\right) \frac \partial {\partial {\bf X}_2}\Psi _{ij}\left( {\bf X}_1,{\bf X}_2\right) , \end{equation} where \begin{equation}
\label{(37)}\rho \left( {\bf X}_1,{\bf X}_2\right) =\sum_{ij}\left| \Psi _{ij}\left( {\bf X}_1,{\bf X}_2\right) \right| ^2 \end{equation} and ${\bf X}_1$ and ${\bf X}_2$ represent the particles' ``positions''\cite{37}. It is clear that, for such a system, whenever we cannot write the probability amplitude as a product \begin{equation} \label{(38)}\Psi _{ij}\left( {\bf X}_1,{\bf X}_2\right) =\Phi \left( {\bf X} _1\right) \Xi \left( {\bf X}_2\right) , \end{equation} we will have the equations for ${\bf X}_1$ dependent upon ${\bf X}_2$ and {\it vice-versa}, thus showing a non-local character.
As we have said, the potential $Q\left( x\right) $ represents the statistical field associated with the {\it ensemble}. We should consider equations (\ref{(35)}) and (\ref{(36)}) as representing only the property of conditional probabilities. In other words, if we fix the statistical behavior of one of the particles, we will know the statistical behavior of the other\cite{38}.
\section{Conclusions}
In this series of papers we have presented a complete reconstruction of quantum mechanics' principles. In this and the other papers, we demonstrated that it is possible to interpret quantum mechanics from a realist point of view. It was also shown that the formalism obtained embraces that of usual quantum mechanics as a particular case. Quantum mechanics was shown to be local and, although statistical in principle, based on a deterministic theory that is nothing more than newtonian classical mechanics. This approach has also made it possible for us to obtain a general relativistic quantum theory for {\it ensembles} of one-particle systems.
All this collected, we are in a position to maintain Einstein's definition of reality\cite{17}. His affirmation about the incompleteness of quantum mechanics, with its Complete Sets of Commuting Operators (that is, based on the Schroe\-din\-ger equations for the probability amplitudes), can be interpreted as a straightforward implication of the formalism connected with Heisenberg's uncertainty relations.
Quantum mechanics, as represented by the first of Schroedinger's equations, was shown to be local and based on a deterministic Nature. Classical ontology must be reconsidered\cite{39}.
\end{document} | arXiv |
\begin{document}
\title[Hypergeometric functions and a family of algebraic curves] {Hypergeometric functions and a family of algebraic curves} \author{Rupam Barman} \address{Department of Mathematical Sciences, Tezpur University, Napaam-784028, Sonitpur, Assam, INDIA} \email{[email protected]} \author{Gautam Kalita} \address{Department of Mathematical Sciences, Tezpur University, Napaam-784028, Sonitpur, Assam, INDIA} \email{[email protected]} \vspace*{0.7in} \begin{center}
{\bf HYPERGEOMETRIC FUNCTIONS AND A FAMILY OF ALGEBRAIC CURVES}\\[5mm]
Rupam Barman and Gautam Kalita\\[.2cm] \end{center}
\noindent\textbf{Abstract:} Let $\lambda \in \mathbb{Q}\setminus \{0, 1\}$ and $l \geq 2$, and denote by $C_{l,\lambda}$ the nonsingular projective algebraic curve over $\mathbb{Q}$ with affine equation given by $$y^l=x(x-1)(x-\lambda).$$ In this paper we define $\Omega(C_{l, \lambda})$ analogous to the real periods of elliptic curves and find a relation with ordinary hypergeometric series. We also give a relation between the number of points on $C_{l, \lambda}$ over a finite field and Gaussian hypergeometric series. Finally we give an alternate proof of a result of \cite{rouse}.
\noindent{\footnotesize \textbf{Key Words}: algebraic curves; hypergeometric series.}
\noindent{\footnotesize 2010 Mathematics Classification Numbers: 11G20, 33C20}
\section{\bf Introduction}\label{secone}
Hypergeometric functions and their relations with algebraic curves have been studied by many mathematicians. For $a_0, a_1, \ldots, a_r, b_1, b_2, \ldots, b_r \in \mathbb{C}$, the ordinary hypergeometric series ${_{r+1}}F_r$ is defined as $${_{r+1}}F_r\left(\begin{array}{cccc}
a_0, & a_1, & \cdots, & a_r\\
& b_1, & \cdots, & b_r
\end{array}\mid z \right):
=\sum_{n=0}^{\infty}\frac{(a_0)_n(a_1)_n\cdots (a_r)_n}{(b_1)_n(b_2)_n\cdots (b_r)_n}\frac{z^n}{n!},$$
where $(a)_0=1$, $(a)_n:=a(a+1)(a+2)\ldots(a+n-1)$ for $n\geq 1$, and none of the $b_i$ is a negative integer or zero. This hypergeometric series converges absolutely for $|z|<1$. The series also converges absolutely for
$|z|=1$ if $\text{Re}(\sum b_i - \sum a_i)> 0$ and converges conditionally for $|z|=1$, $z\neq 1$ if $0\geq \text{Re}(\sum b_i - \sum a_i)> -1$. For details see \cite[chapter 2]{andrews}.
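As a quick sanity check on this definition (our illustration, not from the text), the truncated series can be computed directly and compared with the elementary identity ${_{2}}F_1(a, b; b \mid z)=(1-z)^{-a}$, a special case of the binomial series:

```python
# Our numerical sketch of the series just defined: a truncated 2F1 built
# from Pochhammer symbols, checked against 2F1(a, b; b; z) = (1-z)^(-a).
def poch(a, n):
    """Pochhammer symbol (a)_n, with (a)_0 = 1."""
    out = 1.0
    for k in range(n):
        out *= a + k
    return out

def hyp2f1(a, b, c, z, terms=80):
    s, fact = 0.0, 1.0
    for n in range(terms):
        s += poch(a, n) * poch(b, n) / (poch(c, n) * fact) * z ** n
        fact *= n + 1          # keeps fact = (n+1)!
    return s

print(hyp2f1(0.5, 1.0, 1.0, 0.25))  # close to (1 - 0.25)**(-0.5)
```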
In \cite{greene} Greene introduced the notion of Gaussian hypergeometric series over finite fields. Since then, the interplay between ordinary hypergeometric series and Gaussian hypergeometric series has played an important role in character sum evaluation \cite{greenestanton}, the representation theory of $SL(2, \mathbb{R})$ \cite{greene1} and finding the number of points on an algebraic curve over finite fields \cite{ono}. Recently, J. Rouse \cite{rouse} and D. McCarthy \cite{mccarthy} provided an expression for the real period of certain families of elliptic curves in terms of ordinary hypergeometric series. They also provided an analogous expression for the trace of Frobenius of the same family of curves in terms of Gaussian hypergeometric series, which developed the interplay between the two hypergeometric series more fully.
We will now restate some definitions from \cite{greene} which are analogous to the binomial coefficient and ordinary hypergeometric series respectively. Throughout the paper $p$ is an odd prime. We also let $\mathbb{F}_p$ denote the finite field with $p$ elements and we extend all characters $\chi$ of $\mathbb{F}_p^\times$ to $\mathbb{F}_p$ by setting $\chi(0)=0$. For characters $A$ and $B$ of $\mathbb{F}_p$, define ${A \choose B}$ as \begin{align}\label{eq0} {A \choose B}:=\frac{B(-1)}{p}J(A,\overline{B})=\frac{B(-1)}{p}\sum_{x \in \mathbb{F}_p}A(x)\overline{B}(1-x), \end{align} where $J(A, B)$ is the Jacobi sum of the characters $A$ and $B$ of $\mathbb{F}_p$ and $\overline{B}$ is the inverse of $B$. With this notation, for characters $A_0, A_1,\ldots, A_n$ and $B_1, B_2,\ldots, B_n$ of $\mathbb{F}_p$, the Gaussian hypergeometric series ${_{n+1}}F_n\left(\begin{array}{cccc}
A_0, & A_1, & \cdots, & A_n\\
& B_1, & \cdots, & B_n
\end{array}\mid x \right)$ over $\mathbb{F}_p$ is defined as \begin{align}\label{eq00} {_{n+1}}F_n\left(\begin{array}{cccc}
A_0, & A_1, & \cdots, & A_n\\
& B_1, & \cdots, & B_n
\end{array}\mid x \right):
=\frac{p}{p-1}\sum_{\chi}{A_0\chi \choose \chi}{A_1\chi \choose B_1\chi}
\cdots {A_n\chi \choose B_n\chi}\chi(x), \end{align} where the sum is over all characters $\chi$ of $\mathbb{F}_p$.
Let $\lambda \in \mathbb{Q}\setminus\{0, 1\}$ and $l \geq 2$, and denote by $C_{l,\lambda}$ the nonsingular projective algebraic curve over $\mathbb{Q}$ with affine equation given by \begin{align}\label{curve1} y^l=x(x-1)(x-\lambda). \end{align} The change of variables $(x, y) \mapsto (x+\frac{1+\lambda}{3}, \frac{y}{2})$ takes \eqref{curve1} to \begin{align}\label{curve2} y^l=2^l(x-a)(x-b)(x-c), \end{align} where $a=-\frac{1+\lambda}{3}$, $b=\frac{2\lambda-1}{3}$, and $c=\frac{2-\lambda}{3}$.
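The change of variables can be verified symbolically; the following sketch (ours) checks that the cubic on the right of \eqref{curve1}, shifted by $\frac{1+\lambda}{3}$, equals $(x-a)(x-b)(x-c)$ with the stated roots, the factor $2^l$ coming from replacing $y$ by $y/2$:

```python
import sympy as sp

# Our symbolic check of the change of variables: substituting
# x -> x + (1 + lam)/3 into the right side of the affine equation
# reproduces the monic cubic with roots a, b, c.
x, lam = sp.symbols('x lam')
g = lambda t: t * (t - 1) * (t - lam)    # right side of the affine equation
a = -(1 + lam) / 3
b = (2 * lam - 1) / 3
c = (2 - lam) / 3
difference = sp.expand(g(x + (1 + lam) / 3) - (x - a) * (x - b) * (x - c))
print(difference)  # 0
```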
We now define an integral for the family of curves \eqref{curve1} analogous to the real period of elliptic curves. \begin{definition} The complex number $\Omega(C_{l,\lambda})$ is defined as \begin{align}\label{definition1} \Omega(C_{l, \lambda}):=2\int_a^b\frac{dx}{y^{l-1}}, \end{align} where $x$ and $y$ are related as in \eqref{curve2}. \end{definition} \begin{definition} Suppose $p$ is a prime of good reduction for $C_{l, \lambda}$. Define the integer $a_p(C_{l, \lambda})$ by \begin{align}\label{eq45} a_p(C_{l, \lambda}):=1+p-\#C_{l, \lambda}(\mathbb{F}_p), \end{align} where $\#C_{l, \lambda}(\mathbb{F}_p)$ denotes the number of points that the curve $C_{l, \lambda}$ has over $\mathbb{F}_p$. \end{definition} It is clear that a prime $p$ not dividing $l$ is of good reduction for $C_{l, \lambda}$ if and only if $\text{ord}_p(\lambda(\lambda-1))=0.$ \begin{remark} Let $l \neq 3$. Then $$\#C_{l, \lambda}(\mathbb{F}_p)=1+\#\{(x, y)\in \mathbb{F}_p^2: y^l=x(x-1)(x-\lambda)\}.$$ Indeed, for $l\geq 4$, the point $[1:0:0]$ is the only point at infinity. Similarly, if $l=2$, the point at infinity is $[0:1:0]$.
Let $l=3$ and $p\equiv 1$ $($\emph{mod} $3)$. Let $\omega \in \mathbb{F}_p^{\times}$ be of order $3$. Then there are three points at infinity, namely, $[1:1:0], [1:\omega: 0],$ and $[1:\omega^2:0]$. Hence, in this case, $$\#C_{l, \lambda}(\mathbb{F}_p)=3+\#\{(x, y)\in \mathbb{F}_p^2: y^l=x(x-1)(x-\lambda)\}.$$
Again, if $l=3$ and $p\equiv 2$ $($\emph{mod} $3)$, then the point at infinity is $[1:1:0]$. \end{remark} \begin{remark}\label{lem9} If $l=3$, $C_{l, \lambda}$ is an elliptic curve. Dehomogenizing the projective curve $C_{3, \lambda}: Y^3=X(X-Z)(X-\lambda Z)$ by putting $X=1$ and then making the substitution $$Y\rightarrow \lambda x, Z\rightarrow \lambda\left(y+\dfrac{1+\lambda}{2\lambda^2}\right),$$ we find that $C_{3, \lambda}$ is isomorphic over $\mathbb{Q}$ to the elliptic curve \begin{align}\label{curve3} y^2=x^3+\left(\frac{\lambda - 1}{2\lambda^2}\right)^2. \end{align} \end{remark} \begin{remark} If $l=2$, then equation \eqref{curve1} gives an elliptic curve in Legendre normal form with real period $\Omega(C_{2, \lambda})$, and $a_p(C_{2, \lambda})$ is the trace of the Frobenius endomorphism on the curve over $\mathbb{F}_p$. \end{remark}
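These counts are easy to check by brute force for small parameters. The example below (our choice $p=7$, $\lambda=2$) confirms the bookkeeping of points at infinity and, in the spirit of Remark \ref{lem9}, that $C_{3,2}$ and the curve \eqref{curve3} carry the same number of $\mathbb{F}_7$-points:

```python
# Our brute-force check of the point counts above: for p = 7 (p = 1 mod 3)
# and lam = 2, the projective curve C_{3,2} and the elliptic curve
# y^2 = x^3 + ((lam-1)/(2 lam^2))^2 have the same number of F_p-points.
def affine_count(p, rhs, l):
    return sum(1 for x in range(p) for y in range(p)
               if pow(y, l, p) == rhs(x) % p)

p, lam, l = 7, 2, 3
n_curve = affine_count(p, lambda x: x * (x - 1) * (x - lam), l) + 3  # 3 points at infinity
c = (lam - 1) * pow(2 * lam * lam, -1, p) % p    # (lam - 1)/(2 lam^2) mod p
n_elliptic = affine_count(p, lambda x: x ** 3 + c * c, 2) + 1        # 1 point at infinity
print(n_curve, n_elliptic)  # 12 12
```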
We now recall two results relating $\Omega(C_{2, \lambda})$ and $a_p(C_{2, \lambda})$ to hypergeometric series. \begin{theorem}\cite{housemoller,rouse}\label{theorem1} If $0<\lambda<1$, then the real period $\Omega(C_{2, \lambda})$ satisfies $$\frac{\Omega(C_{2, \lambda})}{\pi}={_{2}}F_1\left(\begin{array}{cccc}
1/2, & 1/2\\
& 1
\end{array}\mid \lambda \right).$$ \end{theorem} \begin{theorem}\cite{koike, ono}\label{theorem2} If \emph{ord}$_p(\lambda(\lambda-1))=0$, then $$-\frac{\phi(-1)a_p(C_{2, \lambda})}{p}={_{2}}F_1\left(\begin{array}{cccc}
\phi, & \phi\\
& \epsilon
\end{array}\mid \lambda \right),$$ where $\phi$ and $\epsilon$ are the quadratic and trivial characters of $\mathbb{F}_p$ respectively. \end{theorem} In this paper we generalize these results to the algebraic curves $C_{l, \lambda}$. The aim of this paper is to prove the following main results. \begin{theorem}\label{theorem3} If $0< \lambda<1$, then $\Omega(C_{l, \lambda})$ is given by $$\Omega(C_{l, \lambda}) =\frac{(\Gamma(\frac{1}{l}))^2}{2^{l-2}\lambda^{\frac{l-2}{l}} \Gamma(\frac{2}{l})}\cdot{_{2}}F_1\left(\begin{array}{cccc}
(l-1)/l, & 1/l\\
& 2/l
\end{array}\mid \lambda \right).$$ \end{theorem} \begin{theorem}\label{theorem4} If $p\equiv 1$ $($\emph{mod} $l)$ and \emph{ord}$_p(\lambda(\lambda-1))=0$, then $a_p(C_{l, \lambda})$ satisfies $$ -a_p(C_{l, \lambda})=\left\{ \begin{array}{ll} p\cdot\displaystyle\sum_{i=1}^{l-1}\chi^i(-\lambda^2){_{2}}F_1\left(\begin{array}{cccc}
\overline{\chi^i}, & \chi^i\\
& \chi^{2i}
\end{array}\mid \lambda \right), & \hbox{if $l\neq 3$;} \\
2+ p\cdot\displaystyle\sum_{i=1}^{l-1}\chi^i(-\lambda^2){_{2}}F_1\left(\begin{array}{cccc}
\overline{\chi^i}, & \chi^i\\
& \chi^{2i}
\end{array}\mid \lambda \right), & \hbox{if $l=3$,}
\end{array}
\right.$$ where $\chi$ is a character of $\mathbb{F}_p$ of order $l$. \end{theorem} \begin{theorem}\label{theorem5} For $\lambda=\frac{1}{2}$, we have \begin{align}\label{eq1/2} \frac{2^{\frac{(l-3)(l-1)}{l}}\Gamma(\frac{2}{l})}{(\Gamma(\frac{1}{l}))^2}\cdot\Omega(C_{l, \lambda}) =\frac{\displaystyle \binom{\frac{1}{2l}}{\frac{1}{l}}}{\displaystyle \binom{\frac{3-2l}{2l}}{\frac{2-l}{l}}}. \end{align} Moreover, if $p\equiv 1$ $($\emph{mod} $l)$, then \begin{align}\label{eq1/22} -a_p(C_{l, \lambda})=\left\{\begin{array}{lll} p\cdot\displaystyle \sum_{i=1}^{\lfloor\frac{l-1}{2}\rfloor}\chi^{-2i}(8) \left[\displaystyle \binom{\chi^i}{\chi^{-2i}}+{\phi\chi^i \choose \chi^{-2i}}\right], & \hbox{if $\frac{p-1}{l}$ is odd and $l\neq 3$;} \\ p\cdot\displaystyle\sum_{i=1}^{l-1}\chi^{-i}(8) \left[\displaystyle \binom{\sqrt{\chi^i}}{\chi^{-i}}+{\phi\sqrt{\chi^i} \choose \chi^{-i}}\right], & \hbox{if $\frac{p-1}{l}$ is even and $l\neq 3$;}\\ 2+p\cdot\displaystyle\sum_{i=1}^{2} \left[\displaystyle \binom{\sqrt{\chi^i}}{\chi^{-i}}+{\phi\sqrt{\chi^i} \choose \chi^{-i}}\right], & \hbox{if $l= 3$,} \end{array} \right. \end{align}
We also give a simple proof of the following result of J. Rouse (note that ${1/4 \choose 1/2}$ is real). \begin{theorem}\label{theorem6}\cite[Theorem 3]{rouse} If $\lambda=1/2$, then
$$\frac{\sqrt{2}}{2\pi}\cdot \Omega(C_{2, \lambda})={1/4 \choose 1/2}.$$ If $p\equiv 1$ $($\emph{mod} $4)$,
then $$\frac{-\phi(-2)}{2p}\cdot a_p(C_{2, \lambda})=\emph{Re}{\chi_4 \choose \phi},$$ where $\chi_4$ is a character on $\mathbb{F}_p$ of order $4$ and $\phi$ is the quadratic character. \end{theorem}
\section{\bf Preliminaries}
We start with a result which enables us to count the number of points on a curve using multiplicative characters on $\mathbb{F}_p$ (see \cite[Proposition 8.1.5]{ireland}). \begin{lemma}\label{lemma1}
Let $a\in\mathbb{F}_p^{\times}$. If $n|(p-1)$, then $$\#\{x\in\mathbb{F}_p: x^n=a\}=\sum \chi(a),$$ where the sum runs over all characters $\chi$ on $\mathbb{F}_p$ of order dividing $n$. \end{lemma} Now we recall some standard facts regarding ordinary and Gaussian hypergeometric series. First, the ordinary ${_{2}}F_1$ hypergeometric series has the following integral representation \cite[p. 115]{erdelyi}: \begin{align}\label{eq1} {_{2}}F_1\left(\begin{array}{cccc}
a, & b\\
& c
\end{array}\mid x\right):=\frac{2\Gamma(c)}{\Gamma(b)\Gamma(c-b)}\int_0^{\pi/2}
\frac{(\sin t)^{2b-1}(\cos t)^{2c-2b-1}}{(1-x\sin^2t)^a}dt, \end{align} where Re $c>$ Re $b>0$. Also, by \eqref{eq0} and \eqref{eq00} the Gaussian ${_{2}}F_1$ hypergeometric series over $\mathbb{F}_p$ takes the form \begin{align}\label{eq11} {_{2}}F_1\left(\begin{array}{cccc}
A, & B\\
& C
\end{array}\mid x \right)=\epsilon(x)\frac{BC(-1)}{p}
\sum_{y\in\mathbb{F}_p}B(y)\overline{B}C(1-y)\overline{A}(1-xy), \end{align} where $\epsilon$ denotes the trivial character.
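Lemma \ref{lemma1} is easy to verify numerically for a small prime. The sketch below (an illustration added here, not from the paper) realizes the characters of order dividing $n$ through a discrete logarithm with respect to a primitive root, and compares the character sum with a direct count of solutions; the choices $p=7$, $n=3$ and the primitive root $g=3$ are arbitrary small values:

```python
import cmath

p, n = 7, 3                      # a small prime with n | (p - 1)
g = 3                            # 3 is a primitive root mod 7
dlog = {pow(g, t, p): t for t in range(p - 1)}   # discrete log table

def char_sum(a):
    # sum of chi(a) over the n characters chi of F_p^* of order dividing n,
    # where chi_j(g^t) = exp(2*pi*i*j*t / n)
    t = dlog[a % p]
    return sum(cmath.exp(2j * cmath.pi * j * t / n) for j in range(n))

for a in range(1, p):
    direct = sum(1 for x in range(p) if pow(x, n, p) == a)
    assert abs(char_sum(a) - direct) < 1e-9   # Lemma 1
```

The sum is $n$ when $a$ is an $n$-th power residue and $0$ otherwise, exactly matching the number of solutions of $x^n=a$.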
Next we note two transformation properties of ordinary hypergeometric series. Kummer's Theorem \cite[p. 9]{bailey} is given by \begin{align}\label{eq2} {_{2}}F_1\left(\begin{array}{cccc}
a, & b\\
& 1+b-a
\end{array}\mid -1 \right)=\frac{\Gamma(1+b-a)\Gamma(1+\frac{b}{2})}{\Gamma(1+b)\Gamma(1+\frac{b}{2}-a)}, \end{align} while Pfaff's transformation \cite[p. 31]{slater} can be stated as \begin{align}\label{eq3} {_{2}}F_1\left(\begin{array}{cccc}
a, & b\\
& c
\end{array}\mid x \right)=(1-x)^{-a}{_{2}}F_1\left(\begin{array}{cccc}
a, & c-b\\
& c
\end{array}\mid \frac{x}{x-1} \right). \end{align} Greene \cite[p. 91]{greene} proved the following Gaussian analogs of these transformations: \begin{align}\label{eq4} {_{2}}F_1\left(\begin{array}{cccc}
A, & B\\
& \overline{A}B
\end{array}\mid -1 \right)=\left\{
\begin{array}{ll}
0, & \hbox{if $B$ is not a square;} \\
\displaystyle \binom{C}{A}+
\displaystyle \binom{\phi C}{A}, & \hbox{if $B=C^2$}
\end{array}
\right. \end{align} and \begin{align}\label{eq5} {_{2}}F_1\left(\begin{array}{cccc}
A, & \overline{A}\\
& \overline{A}B
\end{array}\mid \frac{1}{2} \right)=A(-2)\left\{
\begin{array}{ll}
0, & \hbox{if $B$ is not a square;} \\
\displaystyle \binom{C}{A}+
\displaystyle \binom{\phi C}{A}, & \hbox{if $B=C^2$,}
\end{array}
\right. \end{align} where $\phi$ is the quadratic character of $\mathbb{F}_p$.
\section{\bf Proof of the results} \begin{pf}{\bf \ref{theorem3}.} Recalling \eqref{curve2}, from the definition of $\Omega(C_{l, \lambda})$, we have \begin{align} \Omega(C_{l, \lambda})&=2\int_a^b\frac{dx}{y^{l-1}}\nonumber \\ &=2\int_a^b\frac{dx}{2^{l-1}\{(x-a)(x-b)(x-c)\}^{\frac{l-1}{l}}}.\nonumber \end{align} Note that for $a<x<b$ and $0< \lambda <1$, $(x-a)$ is positive, while $(x-b)$ and $(x-c)$ are negative. Hence $\Omega(C_{l, \lambda})$ is real.
Putting $(x-a)=(b-a)\sin^2\theta$, we obtain \begin{align} \Omega(C_{l, \lambda})&=2\int_0^{\pi/2}\frac{2(b-a)\sin\theta \cos\theta}{2^{l-1} [(b-a)\sin^2\theta (b-a)\cos^2\theta\{(c-a)-(b-a)\sin^2\theta\}]^{\frac{l-1}{l}}}d\theta\nonumber\\ &=\frac{1}{2^{l-3}}\int_0^{\pi/2}\frac{(b-a)^{\frac{2-l}{l}}(\sin\theta)^{\frac{2-l}{l}}(\cos\theta)^{\frac{2-l}{l}}} {\{(c-a)-(b-a)\sin^2\theta\}^{\frac{l-1}{l}}}d\theta.\nonumber \end{align} Using $(b-a)=\lambda$ and $(c-a)=1$ yields \begin{align} \Omega(C_{l,\lambda})&=\frac{1}{2^{l-3}\lambda^{\frac{l-2}{l}}}\int_0^{\pi/2}\frac{(\sin\theta)^{2\frac{1}{l}-1} (\cos\theta)^{2\frac{2}{l}-2\frac{1}{l}-1}}{(1-\lambda \sin^2\theta)^{\frac{l-1}{l}}}d\theta\nonumber\\ &=\frac{(\Gamma(\frac{1}{l}))^2}{2^{l-2}\lambda^{\frac{l-2}{l}} \Gamma(\frac{2}{l})}\cdot{_{2}}F_1\left(\begin{array}{cccc}
(l-1)/l, & 1/l\\
& 2/l
\end{array}\mid \lambda \right),\nonumber \end{align} where the last equality follows from \eqref{eq1}. This completes the proof of the theorem. \end{pf} \begin{remark} If we put $l=2$ in Theorem \ref{theorem3}, we obtain Theorem \ref{theorem1}. \end{remark} \begin{remark} As mentioned in Remark \ref{lem9}, $C_{3, \lambda}$ is isomorphic over $\mathbb{Q}$ to the elliptic curve \eqref{curve3}. It would be interesting to know if there is any relation between $\Omega(C_{3, \lambda})$ and the real period of \eqref{curve3}. \end{remark} \begin{pf}{\bf \ref{theorem4}.} Since $p\equiv 1 ~(\text{mod}~l)$, there exists a character $\chi$ of order $l$ on $\mathbb{F}_p$. Using \eqref{eq11}, we have \begin{align} &\sum_{i=1}^{l-1}\chi^i(-\lambda^2){_{2}}F_1\left(\begin{array}{cccc}
\overline{\chi^i}, & \chi^i\\
& \chi^{2i}
\end{array}\mid \lambda \right)\notag\\ &=\sum_{i=1}^{l-1}\chi^i(-\lambda^2)\frac{\chi^i\chi^{2i}(-1)}{p}\sum_{t\in\mathbb{F}_p}\chi^i(t) \overline{\chi^i}\chi^{2i}(1-t)\overline{\overline{\chi^i}} (1-\lambda t)\nonumber\\ &=\sum_{i=1}^{l-1}\chi^i(-\lambda^2)\frac{\chi^{3i}(-1)}{p}\sum_{t\in\mathbb{F}_p}\chi^i(t)\chi^i(1-t) \chi^i(1-\lambda t).\nonumber \end{align} Replacing $t$ by $\frac{t}{\lambda}$, we get \begin{align}\label{04} p\cdot\sum_{i=1}^{l-1}\chi^i(-\lambda^2){_{2}}F_1\left(\begin{array}{cccc}
\overline{\chi^i}, & \chi^i\\
& \chi^{2i}
\end{array}\mid \lambda \right)&=\sum_{i=1}^{l-1}\sum_{t\in\mathbb{F}_p}
\chi^i(t(t-1)(t-\lambda))\nonumber\\ &=\sum_{t\in\mathbb{F}_p}\sum_{i=1}^{l-1}\chi^i(t(t-1)(t-\lambda)). \end{align} Moreover, \begin{align} &\#\{(x, y)\in \mathbb{F}_p^2: y^l=x(x-1)(x-\lambda)\}\nonumber\\ &=\sum_{t\in\mathbb{F}_p}\#\{y\in\mathbb{F}_p: y^l=t(t-1)(t-\lambda)\}\nonumber\\ &=\sum_{t\in\mathbb{F}_p, t(t-1)(t-\lambda)\neq0} \#\{y\in\mathbb{F}_p: y^l=t(t-1)(t-\lambda)\}+\#\{t\in\mathbb{F}_p: t(t-1)(t-\lambda)=0\}.\nonumber \end{align} Now applying Lemma \ref{lemma1} and \eqref{04}, we obtain \begin{align} &\#\{(x, y)\in \mathbb{F}_p^2: y^l=x(x-1)(x-\lambda)\}\nonumber\\ &=\sum_{t\in\mathbb{F}_p}\sum_{i=0}^{l-1}\chi^i(t(t-1)(t-\lambda))+ \#\{t\in\mathbb{F}_p: t(t-1)(t-\lambda)=0\}\nonumber\\ &=p+\sum_{t\in\mathbb{F}_p}\sum_{i=1}^{l-1}\chi^i(t(t-1)(t-\lambda))\nonumber\\ &=p+p\cdot\sum_{i=1}^{l-1}\chi^i(-\lambda^2){_{2}}F_1\left(\begin{array}{cccc}
\overline{\chi^i}, & \chi^i\\
& \chi^{2i}
\end{array}\mid \lambda \right).\nonumber \end{align} Since ord$_p(\lambda(\lambda-1))=0$, using \eqref{eq45} we complete the proof of the result. \end{pf} \begin{remark} Theorem \ref{theorem2} can be obtained from Theorem \ref{theorem4} by putting $l=2$. Note that for the quadratic character $\phi$ of $\mathbb{F}_p$, we have $\phi(-\lambda^2)=\phi(-1)$. \end{remark} For $l\geq 3$, the genus of the curve $C_{l, \lambda}$ is $\frac{(l-1)(l-2)}{2}$. The Hasse-Weil bound therefore yields the following result. \begin{corollary} Suppose $l\geq 4$. If $p\equiv 1$ $($\emph{mod} $l)$ and \emph{ord}$_p(\lambda(\lambda-1))=0$, then
$$\left |\sum_{i=1}^{l-1}\chi^i(-\lambda^2){_{2}}F_1\left(\begin{array}{cccc}
\overline{\chi^i}, & \chi^i\\
& \chi^{2i}
\end{array}\mid \lambda \right)\right|\leq \frac{(l-1)(l-2)}{\sqrt{p}},$$
where $\chi$ is a character of $\mathbb{F}_p$ of order $l$.
If $l=3$, then $$ \left|2+p\cdot\sum_{i=1}^{2}\chi^i(-\lambda^2){_{2}}F_1\left(\begin{array}{cccc}
\overline{\chi^i}, & \chi^i\\
& \chi^{2i}
\end{array}\mid \lambda \right)\right|\leq 2{\sqrt{p}},$$
where $\chi$ is a character of $\mathbb{F}_p$ of order $3$. \end{corollary} \begin{corollary} If $p\equiv 1~($\emph{mod} $3)$ and $x^2+3y^2=p$, then $$p\cdot\sum_{i=1}^{2}{_{2}}F_1\left(\begin{array}{cccc}
\overline{\chi^i}, & \chi^i\\
& \chi^{2i}
\end{array}\mid -1 \right)=(-1)^{x+y}\left(\frac{x}{3}\right)\cdot 2x-2,$$
where $\chi$ is a character of $\mathbb{F}_p$ of order $3$. \end{corollary} \begin{proof} As mentioned in Remark \ref{lem9}, $C_{3, -1}$ is isomorphic over $\mathbb{Q}$ to the elliptic curve $y^2=x^3+1.$ By \cite[Proposition 2]{ono}, it is known that $a_p(C_{3, -1})=(-1)^{x+y-1}(\frac{x}{3})\cdot 2x$. Now the result follows from Theorem \ref{theorem4}. \end{proof} \begin{remark} The formula for $a_p(C_{3,\lambda})$ in Theorem \ref{theorem4} gives the trace of Frobenius of the family of elliptic curves \eqref{curve3}. \end{remark} \begin{pf}{\bf \ref{theorem5}.} By \eqref{eq2}, we have \begin{align}\label{for-1} {_{2}}F_1\left(\begin{array}{cccc}
(l-1)/l, & 1/l\\
& 2/l
\end{array}\mid -1 \right)&=\frac{\Gamma(\frac{2}{l})\Gamma(\frac{2l+1}{2l})}{\Gamma(\frac{l+1}{l})
\Gamma(\frac{3}{2l})}\nonumber\\ &=\frac{\frac{\Gamma(\frac{2l+1}{2l})}{\Gamma(\frac{l+1}{l})\Gamma(\frac{2l-1}{2l})}} {\frac{\Gamma(\frac{3}{2l})}{\Gamma(\frac{2}{l})\Gamma(\frac{2l-1}{2l})}}\nonumber\\ &=\frac{\displaystyle \binom{\frac{1}{2l}}{\frac{1}{l}}}{\displaystyle \binom{\frac{3-2l}{2l}}{\frac{2-l}{l}}}. \end{align} Putting $\lambda=1/2$ in Theorem \ref{theorem3}, we obtain the relation $$\frac{2^{\frac{l^2-3l+2}{l}}\Gamma(\frac{2}{l})}{(\Gamma(\frac{1}{l}))^2}\cdot\Omega(C_{l,\frac{1}{2}}) ={_{2}}F_1\left(\begin{array}{cccc}
(l-1)/l, & 1/l\\
& 2/l
\end{array}\mid \frac{1}{2} \right).$$ Then using \eqref{eq3}, we find that \begin{align}\label{eq46} \frac{2^{\frac{l^2-3l+2}{l}}\Gamma(\frac{2}{l})}{(\Gamma(\frac{1}{l}))^2}\cdot \Omega(C_{l,\frac{1}{2}})&=2^{\frac{l-1}{l}}{_{2}}F_1\left(\begin{array}{cccc}
(l-1)/l, & 1/l\\
& 2/l
\end{array}\mid -1 \right). \end{align} From \eqref{for-1} and \eqref{eq46}, we complete the proof of \eqref{eq1/2}.
Now, we shall prove the second part of the result. Note that $p$ is an odd prime. Write $\chi=w^{k}$, where $w$ is a generator of the group of Dirichlet characters mod $p$. Let $o(w)$ denote the order of $w$. Then $o(w)=p-1$ and $l=o(w^{k})=(p-1)/\text{gcd}(k,p-1)$. So $(p-1)/l=\text{gcd}(k,p-1)$. If $(p-1)/l$ is even, then $k$ is also even, hence $\chi$ is a square. Conversely, if $\chi$ is a square, it is an even power of the generator $w$, hence $k$ is even, and $(p-1)/l=\text{gcd}(k, p-1)$ is even. This implies that $\chi$ is a square if and only if $(p-1)/l$ is even. Moreover, $\chi^i$ is a square for even values of $i$, and for odd values of $i$, $\chi^i$ is a square if and only if $\chi$ is a square. Using these, from Theorem \ref{theorem4} and \eqref{eq5}, we complete the proof of \eqref{eq1/22}. \end{pf} In \cite{rouse}, J. Rouse gave an analogy between ordinary hypergeometric series and Gaussian hypergeometric series by evaluating $\Omega(C_{2, \frac{1}{2}})$ and $a_p(C_{2, \frac{1}{2}})$ in terms of hypergeometric series. We now give an alternate proof of \cite[Theorem 3, p. 3]{rouse}. \begin{pf}{\bf \ref{theorem6}.} Putting $l=2$ in \eqref{eq1/2}, we obtain \begin{align} \frac{2^{-\frac{1}{2}}}{(\Gamma(\frac{1}{2}))^2}\cdot\Omega(C_{2, \frac{1}{2}})& =\frac{{\displaystyle \binom{1/4}{1/2}}}{ \displaystyle \binom{-1/4}{0}}\nonumber \end{align} which yields \begin{align} \frac{\sqrt{2}}{2\pi}\cdot\Omega(C_{2, \frac{1}{2}})&={1/4 \choose 1/2},\nonumber \end{align} since ${-1/4 \choose 0}=1$ and $\Gamma(\frac{1}{2})=\sqrt{\pi}$.
For the second part, recall that $p\equiv 1$ (mod 4). Putting $l=2$ in \eqref{eq1/22}, we find that $$\frac{-\phi(8)}{p}\cdot a_p(C_{2, \frac{1}{2}})= \displaystyle \binom{\chi_4}{\phi}+\displaystyle \binom{\phi\chi_4}{\phi},$$ since $\chi_4^2=\phi$. Clearly $\phi\chi_4=\overline{\chi_4}$, and this implies that $\displaystyle \binom{\phi\chi_4}{\phi} =\overline{\displaystyle \binom{\chi_4}{\phi}}$. Also, observing that $\phi(8)=\phi(2)$, we obtain $$\frac{-\phi(2)}{2p}\cdot a_p(C_{2, \frac{1}{2}})=\text{Re}{\chi_4 \choose \phi}.$$ Since $p\equiv 1$ (mod 4), we have that $\phi(-1)=1$ and the result follows. \end{pf} Simplifying the expressions for $a_p(C_{l, \frac{1}{2}})$ given in Theorem \ref{theorem5}, we obtain the following result which generalizes the case $l=2$, $p\equiv 1$ $($mod $4)$ treated in Theorem \ref{theorem6}. \begin{corollary}\label{cor1} Suppose that $p\equiv 1$ $($\emph{mod} $l)$. Then we have \begin{align} -a_p(C_{l, \frac{1}{2}})=\left\{\begin{array}{lllll} 2p\cdot\left[\phi(2)\emph{Re}\displaystyle{\chi_4\choose \phi}+ \displaystyle \sum_{i=1}^{\frac{l-4}{4}}\emph{Re}\left\{\chi^{-2i}(8) \left(\displaystyle \binom{\chi^i}{\chi^{-2i}}+{\phi\chi^i \choose \chi^{-2i}}\right)\right\}\right],\\ &\hspace{-5.2cm} \hbox{if $\frac{p-1}{l}$ is odd and $l\equiv 0~($\emph{mod} $4)$;} \\ 2p\cdot\displaystyle \sum_{i=1}^{\frac{l-2}{4}}\emph{Re}\left[\chi^{-2i}(8) \left(\displaystyle \binom{\chi^i}{\chi^{-2i}}+{\phi\chi^i \choose \chi^{-2i}}\right)\right],\\ &\hspace{-5.2cm}\hbox{if $\frac{p-1}{l}$ is odd and $l\equiv 2~($\emph{mod} $4)$;}\\ 2p\cdot\left[\phi(2)\emph{Re}\displaystyle{\chi_4\choose \phi}+ \displaystyle \sum_{i=1}^{\frac{l-2}{2}}\emph{Re}\left\{\psi^{-2i}(8) \left(\displaystyle
\binom{\psi^i}{\psi^{-2i}}+{\phi\psi^i \choose \psi^{-2i}}\right)\right\}\right],\\ &\hspace{-5.2cm} \hbox{if $\frac{p-1}{l}$ and $l$ are even;}\\ 2p\cdot\displaystyle \sum_{i=1}^{\frac{l-1}{2}}\emph{Re}\left[\psi^{-2i}(8) \left(\displaystyle \binom{\psi^i}{\psi^{-2i}}+{\phi\psi^i \choose \psi^{-2i}}\right)\right],\\ &\hspace{-5.2cm}\hbox{if $\frac{p-1}{l}$ is even and $l$ is odd, $l\geq 5$;}\\ 2+2p\cdot \emph{Re}\left[\displaystyle {\chi \choose \chi}+ \displaystyle{\phi\chi \choose \chi}\right], &\hspace{-5.2cm}\hbox{if $l=3$;} \end{array} \right. \end{align} where $\psi, \chi, \chi_4$ are characters of $\mathbb{F}_p$ of order $2l, l, 4$ respectively and $\phi$ is the quadratic character. \end{corollary} \begin{corollary} If $p\equiv 1$ $($\emph{mod} $3)$ and $x^2+3y^2=p$, then \begin{align} p\cdot \emph{Re}\left[\displaystyle {\chi \choose \chi}+ \displaystyle{\phi\chi \choose \chi}\right]=(-1)^{x+y} \left(\frac{x}{3}\right)\cdot x-1,\nonumber \end{align} where $\chi$ is a character of order $3$ on $\mathbb{F}_p$ and $\phi$ is the quadratic character. \end{corollary} \begin{proof} As mentioned in Remark \ref{lem9}, $C_{3, -1}$ and $C_{3, \frac{1}{2}}$ are isomorphic over $\mathbb{Q}$ to the elliptic curve $y^2=x^3+1.$ By \cite[Proposition 2]{ono}, it is known that $a_p(C_{3, -1})=(-1)^{x+y-1}(\frac{x}{3})\cdot 2x$. From Corollary \ref{cor1}, we have \begin{align} -a_p(C_{3, \frac{1}{2}})=2+2p\cdot \text{Re}\left[\displaystyle {\chi \choose \chi}+ \displaystyle{\phi\chi \choose \chi}\right].\nonumber \end{align} Since $a_p(C_{3, -1})= a_p(C_{3, \frac{1}{2}})$, the result follows. \end{proof}
\section*{Acknowledgment} We thank Ken Ono for many helpful suggestions during the preparation of the article. We are grateful to the referee for his/her helpful comments.
\end{document}
Leonidas Alaoglu
Leonidas (Leon) Alaoglu (Greek: Λεωνίδας Αλάογλου; March 19, 1914 – August 1981) was a mathematician, known for Alaoglu's theorem on the weak-star compactness of the closed unit ball in the dual of a normed space, also known as the Banach–Alaoglu theorem.[1]
Born: March 19, 1914, Red Deer, Alberta
Died: August 1981 (aged 67)
Citizenship: Canadian-American
Education: University of Chicago
Known for: Alaoglu's theorem
Scientific career
Fields: Mathematics (Topology)
Institutions:
• Pennsylvania State College
• Harvard University
• Purdue University
• United States Air Force
• Lockheed Martin
Thesis: Weak topologies of Normed linear spaces (1938)
Doctoral advisor: Lawrence M. Graves
Influences: Nicolas Bourbaki
Life and work
Alaoglu was born in Red Deer, Alberta, to Greek parents. He received his BS in 1936, Master's in 1937, and PhD in 1938 (at the age of 24), all from the University of Chicago. His thesis, written under the direction of Lawrence M. Graves, was entitled Weak topologies of normed linear spaces. His doctoral thesis is the source of Alaoglu's theorem. The Bourbaki–Alaoglu theorem is a generalization of this result by Bourbaki to dual topologies.
After some years teaching at Pennsylvania State College, Harvard University and Purdue University, in 1944 he became an operations analyst for the United States Air Force. In his last position, from 1953 to 1981 he worked as a senior scientist in operations research at the Lockheed Corporation in Burbank, California. In this latter period he wrote numerous research reports, some of them classified.
During the Lockheed years he took an active part in seminars and other mathematical activities at Caltech, UCLA and USC. After his death in 1981 a Leonidas Alaoglu Memorial Lecture Series was established at Caltech.[2] Speakers have included Paul Erdős, Irving Kaplansky, Paul Halmos and Hugh Woodin.
See also
• Axiom of Choice – The Banach–Alaoglu theorem is not provable from ZF without use of the Axiom of Choice.
• Banach–Alaoglu theorem
• Gelfand representation
• List of functional analysis topics
• Superabundant number – Article explains the 1944 results of Alaoglu and Erdős on this topic
• Tychonoff's theorem
• Weak topology – Leads to the weak-star topology to which the Banach–Alaoglu theorem applies.
Publications
• Alaoglu, Leonidas (M.S. thesis, U. of Chicago, 1937). "The asymptotic Waring problem for fifth and sixth powers" (24 pages). Advisor: Leonard Eugene Dickson
• Alaoglu, Leonidas (Ph.D. thesis, U. of Chicago, 1938). "Weak topologies of normed linear spaces" Advisor: Lawrence Graves
• Alaoglu, Leonidas (1940). "Weak topologies of normed linear spaces". Annals of Mathematics. 41 (2): 252–267. doi:10.2307/1968829. JSTOR 1968829. MR 0001455.
• Alaoglu, Leonidas; J. H. Giese (1946). "Uniform isohedral tori". American Mathematical Monthly. 53 (1): 14–17. doi:10.2307/2306079. JSTOR 2306079. MR 0014230.
• Alaoglu, Leonidas; Paul Erdős (1944). "On highly composite and similar numbers" (PDF). Transactions of the American Mathematical Society. 56 (3): 448–469. doi:10.2307/1990319. JSTOR 1990319. MR 0011087.
• Alaoglu, Leonidas; Paul Erdős (1944). "A conjecture in elementary number theory". Bulletin of the American Mathematical Society. 50 (12): 881–882. doi:10.1090/S0002-9904-1944-08257-8. MR 0011086.
• Alaoglu, Leonidas; Garrett Birkhoff (1940). "General ergodic theorems". Annals of Mathematics. 41 (2): 252–267. doi:10.2307/1969004. JSTOR 1969004. MR 0002026. PMC 1077986. PMID 16588311.
References
1. American Men & Women of Science. 14th edition. New York: R.R. Bowker, 1979. There is no entry for him in the 15th or later editions
2. Niven, Ivan (1989), "The Threadbare Thirties", in Duren, Peter L.; et al. (eds.), A Century of Mathematics in America, American Mathematical Society, p. 219, ISBN 0821801244
• Mac Lane, Saunders (December 1996). "Letter to the editor" (PDF). Notices of the American Mathematical Society: 1469–1471.
External links
• Leonidas Alaoglu at the Mathematics Genealogy Project
Coverage versus response time objectives in ambulance location
Ľudmila Jánošíková ORCID: orcid.org/0000-0003-3983-67131,
Peter Jankovič1,
Marek Kvet1 &
Frederika Zajacová1
International Journal of Health Geographics volume 20, Article number: 32 (2021)
This paper deals with the location of emergency medical stations where ambulances waiting to be dispatched are parked. The literature reports a lot of mathematical programming models used to optimize station locations. Most studies evaluate the models only analytically applying the same simplifying assumptions that were used in the modelling phase. In addition, they concentrate on systems operating one type of emergency units in homogeneous urban areas. The goal of our study is to identify which optimization criterion the emergency medical service (EMS) outcomes benefit from the most and which model should be used to design tiered systems in large urban–rural areas.
A bi-criteria mathematical programming model is proposed. The criteria include the accessibility of high-priority patients within a short time limit and the average response time to all patients. This model is compared to the p-median model with a single response time objective and to a hierarchical pq-median model that considers two different vehicle types. A detailed computer simulation model is used to evaluate the solutions. The methodology is verified under the conditions of the Slovak Republic using real historical data on 149,474 ambulance trips performed in 2015.
All mathematical models improve EMS performance by relocating some stations compared to the current distribution. The best results are achieved by the hierarchical median-type model. The average response time is reduced by 58 s, the number of calls responded to within 15 min is increased by 5% and the number of high-priority calls responded to within 8 min by 6%.
EMS systems operating in heterogeneous areas should be designed to minimize response times rather than to maximize the number of calls served within a given time limit.
Emergency medical service (EMS) is an inseparable component of health care systems in many countries, from all income groups and regions of the world [1, 2]. Its main role is to provide first medical aid to patients in emergency situations. The organization of the EMS system substantially affects patients' chances of survival and recovery. Therefore planning EMS at all levels (strategic, tactical and operational) represents a challenging problem that is still topical in the constantly changing socio-economic environment.
In the past two decades, an increasing demand for EMS service worldwide has been reported. Population ageing has been identified as the key factor of this phenomenon [3, 4]. Elderly people suffer from chronic diseases and mental or physical dysfunctions. They are subject to the risk of sudden worsening of their medical conditions and injuries caused by falls. Also, the risk of life-threatening emergency events, such as stroke, severe respiratory difficulties, and cardiac arrest, increases with age. As a result, elderly people require EMS at a higher rate than younger people do. Although the elderly do not constitute a large part of the whole population, their share in EMS demand is significant. For example, Veser et al. [5] analyse the situation in Bavaria, which is the largest German federal state. In 2012 people aged 75 years and over constituted about 9% of the total population but accounted for 33% of all emergency cases. Lowthian et al. [3] state that in 2008 the proportion of Melbourne's population aged 85 years and over was 1.6% but the proportion of emergency transportations accounted for by this group was 13.6%.
The available census and EMS data show that the Slovak Republic follows this trend. The demographic trend elicits the need for changes in the EMS infrastructure so that the EMS system can operate better—save more lives, reduce permanent disablement, and improve the outcome of patients. The responsiveness of the system could be improved by better distribution of the stations so that they are closer to the locations where emergencies may occur. The discussion about the need for system reorganization due to demographic changes has started also in other countries, for example in Slovenia [4].
In this paper we focus on locating ambulance stations. The purpose of this work is to identify the best strategy for optimization of EMS infrastructure in a large-scale urban–rural area.
In the following literature review we focus on successful location models in EMS. Special attention is paid to the models dealing with different categories of patients and multi-objective models. Moreover, computer simulation in EMS infrastructure optimization is reviewed.
Optimization problems arising along the emergency care pathway are surveyed in Aringhieri et al. [6]. Regarding EMS location problems, the authors focus on models incorporating equity and uncertainty. Especially valuable for our research is Sect. 6 of the paper, where the authors point out that simplifying assumptions are unavoidable in optimization and that computer simulation can help to assess the performance of the planned system in practice.
A survey on recent research in healthcare facility location is supplied by Ahmadi-Javid et al. [7]. The study reveals that the maximal covering location problem (MCLP) is widely used to study location of emergency facilities. The problem allows for numerous variations and extensions, the most popular of which is the maximum expected coverage location problem (MEXCLP). The MEXCLP seeks to maximize the expected covered demand supposing an ambulance being busy with a certain probability and operating independently from other ambulances.
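The MEXCLP objective can be made concrete with a small brute-force sketch. In this model, a demand zone $i$ covered by $k$ located ambulances, each busy independently with probability $q$, contributes $d_i(1-q^k)$ to the expected covered demand. Everything in the code below (site/zone data, the busy fraction $q=0.3$) is invented for illustration; real instances are solved with integer programming rather than enumeration.

```python
from itertools import combinations

def mexclp_coverage(chosen, demand, cover, q):
    # expected covered demand: a zone i covered by k located ambulances,
    # each busy with probability q, contributes d_i * (1 - q**k)
    return sum(d * (1 - q ** sum(cover[s][i] for s in chosen))
               for i, d in enumerate(demand))

def best_mexclp(p, demand, cover, q):
    # brute force over all p-subsets of candidate sites (toy sizes only)
    sites = range(len(cover))
    return max(combinations(sites, p),
               key=lambda c: mexclp_coverage(c, demand, cover, q))

# toy instance: 3 candidate sites, 4 demand zones, busy fraction q = 0.3
demand = [10, 20, 30, 40]
cover = [[1, 1, 0, 0],   # site 0 covers zones 0 and 1
         [0, 1, 1, 0],   # site 1 covers zones 1 and 2
         [0, 0, 1, 1]]   # site 2 covers zones 2 and 3
best = best_mexclp(2, demand, cover, q=0.3)   # -> sites (0, 2)
```

On this toy instance, spreading the two ambulances apart (sites 0 and 2) beats stacking coverage on the middle zones, which is exactly the trade-off the busy fraction introduces.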
McLay [8] enhances the MEXCLP considering two different types of emergency vehicles and three patient classes. Calls are classified as Priority 1, 2, 3, where Priority 1 calls are life-threatening, Priority 2 calls may be life-threatening and Priority 3 calls are not life-threatening. The objective is to maximize the total number of expected Priority 1 calls responded to within a specified amount of time. The probabilities of vehicles being busy are the same for all candidate locations and are calculated by the hypercube queuing model. Knight et al. [9] deal with multiple classes of heterogeneous patients. Patients differ by medical conditions, so they have different urgency levels. The authors use the maximal expected survival location model with a different survival function for each patient class. The objective is to maximize the overall expected survival probability across all patient types. Leknes et al. [10] modify the maximal expected survival location model by Knight et al. [9]. The service time depends on the distance from a station to a demand zone, the distance from the scene to a hospital, the drop-off time and the probability of the transportation to a hospital. This way the model reflects the heterogeneity of the demand zones in the solved region. Three severity levels of calls are applied.
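A survival-based objective of the kind used in these models can be sketched as follows. The logistic survival function below, $s(t)=(1+e^{0.679+0.262t})^{-1}$ with $t$ in minutes, is one cardiac-arrest survival model that has appeared in the ambulance-location literature; treat both the function and the toy data as illustrative assumptions, not as values taken from the surveyed papers.

```python
from math import exp

def survival(t):
    # logistic cardiac-arrest survival probability as a function of the
    # response time t in minutes (one model used in the EMS literature)
    return 1.0 / (1.0 + exp(0.679 + 0.262 * t))

def expected_survivors(station, demand, time):
    # demand[i]: expected calls in zone i; time[s][i]: response time from s
    return sum(d * survival(time[station][i]) for i, d in enumerate(demand))

# toy instance: choose the single station that maximizes expected survivors
demand = [5, 15, 10]
time = [[4, 10, 12],   # candidate station 0
        [9, 3, 6]]     # candidate station 1
best = max(range(len(time)), key=lambda s: expected_survivors(s, demand, time))
```

Because the survival curve is steep for short times, the objective naturally weights fast responses to high-demand zones more than a pure coverage criterion would.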
The models maximizing the overall expected survival probability across all patient types [9, 10] are in fact multi-objective models, where individual objectives for each patient class are combined into a single objective using the scalarization method. The main drawback of this method is how to set weights of individual objectives to make the model produce good results acceptable in practice. Another approach to cope with multiple objectives is goal programming. Alsalloum and Rand [11] optimize locations of a pre-defined number of ambulances. The objective function consists of two goals. The first is to maximize the expected coverage, and the second is to reduce the spare capacities of located ambulances. Goal programming requires a careful setting of objective targets, which is a difficult task especially if the model involves a large number of uncertain parameters. Inappropriate targets may lead to non-optimal solutions.
Aboueljinane et al. [12] supply an overview of the literature on simulation models applied to emergency medical service operations. The review covers the time period from 1969 to 2013. Computer simulation is identified as a useful tool for the analysis and improvement of EMS since it allows us to model the system in a high degree of detail that is not possible when using other methods such as mathematical programming or queuing theory. Most simulation studies support decisions on the base stations to open and the number of ambulances to assign to each open station. Aringhieri et al. [13] compare the current station locations in Milan (Italy) with locations proposed by the capacitated version of the location set covering model. Several scenarios with different ambulance speeds, the number of ambulances and dispatch protocols were evaluated by simulation. A trace-driven simulation approach was used, which means the model accepts a stream of actual call data as input. In contrast to self-driven simulation models, trace-driven simulation does not need the estimation of probability models describing the time and spatial distribution of calls and duration of service times. On the other hand, it has some shortcomings [14]: this approach requires a large amount of historical data; one has to handle erroneous records in the database of interventions; the existing data do not represent the future, so the simulation model cannot be used for mid-term and long-term planning when the demand volume will increase.
Zaffar et al. [15] recently evaluated three different deployment strategies by a trace-driven simulation model using Mecklenburg County (US) EMS data. The simulation model is not very realistic since it uses constant values for ambulances' speed, on-scene and drop-off times. The travel times are calculated using the Manhattan distance. Ünlüyurt and Tunçer [16] compare four coverage-based models for station location and ambulance allocation via discrete event simulation model. The experiments include a case study of Istanbul and randomly generated instances. Also this simulation model uses constant travel speed and drop-off times. Every patient is supposed to be transported to a hospital. Moreover, ambulances cannot be dispatched to another call while they return to their original station.
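To illustrate what even a deliberately minimal simulation can show, the sketch below models a single station with constant travel and on-scene times, with every ambulance returning to base after each call; these are exactly the simplifications criticized above, and all numbers are invented. It nevertheless demonstrates how fleet size alone changes response times through queueing delay.

```python
import heapq

def simulate(calls, n_amb, travel, service):
    # calls: time-ordered (arrival_time, zone) pairs; travel[zone] is the
    # one-way travel time from the single station; service is a constant
    # on-scene time; every ambulance drives back to base after each call
    free = [0] * n_amb                # times at which ambulances become free
    heapq.heapify(free)
    response = []
    for t, zone in calls:
        ready = max(t, heapq.heappop(free))      # queueing delay if all busy
        response.append(ready - t + travel[zone])
        heapq.heappush(free, ready + 2 * travel[zone] + service)
    return response

calls = [(0, 'a'), (1, 'a'), (2, 'b')]
travel = {'a': 4, 'b': 6}
print(simulate(calls, n_amb=1, travel=travel, service=10))  # [4, 21, 40]
print(simulate(calls, n_amb=2, travel=travel, service=10))  # [4, 4, 22]
```

With one ambulance the second and third calls wait in the queue; a second ambulance removes most of that delay, which is the kind of effect a purely analytical coverage model cannot capture.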
We conclude this literature review by considering a recursive optimization-simulation approach to the ambulance location and dispatching problem [17]. The method iterates through two steps. First, an optimal location of ambulances and dispatching strategy is proposed by mathematical programming using an initial estimation of ambulances' busy fraction. The model is a variant of the MEXCLP. Then the system with optimal infrastructure is assessed by computer simulation resulting in an updated busy fraction (equal for all ambulances) that inputs the mathematical model in the following iteration. The process is repeated until convergence is achieved. Convergence is measured by busy fraction and the location vector, respectively. The most inspiring issue for our research is the conclusion of the paper where Lanzarone et al. emphasise the necessity of using heterogeneous busy fractions especially in large case studies.
From the presented literature review one can make the following conclusions:
Research to date has not answered the question of which optimization criterion is the best proxy for health outcomes, nor which model should be used for designing EMS systems in a mixed urban–rural territory. Most studies do not compare different models with each other, and those that do base the comparison on the prescriptive model itself, so it suffers from the same simplifying assumptions that were used in the modelling phase. As Aringhieri et al. [6] emphasize, the best way to assess the validity of alternative approaches is computer simulation. However, the simulation models published in the literature oversimplify the real operation, either due to the lack of operational data or for the sake of shorter processing time; some common simplifications were mentioned above alongside the references. Location decisions are of a strategic nature, with long-term consequences and considerable investment costs, so it is worth spending extra time on a careful assessment of the proposed infrastructural changes. In our opinion, the simulation model should be as realistic as possible: it should accurately capture all sub-processes of the service, and its parameters should be derived from the real operation of the system. Of course, a better model requires more computing time, but in strategic planning the quality of the outcome matters more than the computing time. A similar conclusion is derived in [17].
The goal of our study is to identify the most suitable optimization criteria and the corresponding mathematical programming model for designing an EMS infrastructure in a mixed urban–rural area. We do not consider investment costs associated with the redeployment of the stations; they are not extremely high, because an ambulance can be housed in a prefabricated building. Rather, we use optimization criteria that reflect the main goal of the EMS system, which is to save as many people as possible. Since this outcome cannot be measured when designing the system, surrogate optimization criteria are formulated instead. We concentrate on the most common criteria: response time and coverage. We aim to validate the modelling approaches with a detailed computer simulation model that precisely imitates the behaviour of all entities in the system (patients, dispatchers, and ambulances).
Description of the region of interest
In the Slovak Republic, the EMS system is centralized and managed by the National Dispatch Center for EMS. The system consists of: (1) regional dispatch centres, whose main role is to receive and evaluate emergency calls and dispatch appropriate rescue units; and (2) rescue units (i.e. ambulances staffed by rescue teams) that aim at providing adequate medical care to patients. The present organization of EMS in Slovakia was established by a series of laws in 2004. Thereafter, in 2010, the regulations of the Ministry of Health defined the number and locations of ambulance base stations across the country. The intention behind the distribution of the stations was to be able to reach 95% of patients within 15 min or less after the emergency call, regardless of the patient's condition or the character of the area (urban or rural). According to the regulations, 273 stations are deployed in 211 towns and villages. Larger towns have multiple stations.
The regulations define only the town where a station should be, not its precise geographical location. A provider who obtains the license to operate a given station chooses a suitable building and thus determines its address. The providers are public or private institutions whose jurisdictions are not restricted; they may operate across the whole country. The study in this paper relates to the positions of the stations in 2017, when EMS was provided by 12 agencies, Falck Záchranná, a.s. being the largest with 107 stations.
The Slovak system works in a Franco-German style, where the ambulance crew is qualified to provide on-site medical care. There are two types of ambulances. Most of them provide basic life support (BLS; Slovak abbreviation RZP) and have only a paramedic and a rescue driver on board. About one third of ambulances are well-equipped advanced life support units (ALS; Slovak abbreviation RLP). An ALS crew consists of an emergency physician, a paramedic and a driver. The staff is capable of performing additional life-saving procedures, e.g. inserting breathing tubes. The closest available ambulance to the emergency site is always dispatched regardless of its type. If it is a BLS ambulance and the incident is life-threatening, then the closest available ALS ambulance is dispatched concurrently. The rationale is that any medical treatment is better than waiting without a professional intervention for the arrival of a doctor. In 2017, a total of 521,164 trips were performed by one or the other type of ambulance [18].
In this paper we focus on the relocation of the current stations. We do not want to change the number of stations because adding stations would be unacceptable due to economic reasons and closing some stations would worsen the accessibility of urgent health care. Our aim is to relocate some existing stations to other potential locations hoping that the new distribution will shorten response times.
Modelling demand
The first task in optimization of the station locations is to define the demand zones where potential patients live. We decided to identify the demand zones with the territorial units used in the census for two reasons. The first one is that we face an emergency system whose infrastructure is spread over a large-scale area (specifically, the whole state territory) populated by millions of people (population of Slovakia in 2020 was 5,459,781). Inhabitants, i.e. potential patients, have to be aggregated in a limited number of units, so that the resulting location model can be solved by common computational resources with limited memory and in an acceptable amount of processing time. The division of the country into smaller demand zones (e.g. by a rectangular grid) would result in an intractable location problem due to a huge volume of input data and an enormous number of variables. Thus our demand zones correspond to villages and towns. The two largest cities (the capital Bratislava with 440,948 inhabitants and Košice with 238,138 inhabitants) are administratively divided into boroughs (17 boroughs in Bratislava and 22 boroughs in Košice) that are regarded as separate demand zones.
The second reason for regarding census units as demand zones is the estimation of the number of calls arising in every demand zone. The demand in particular zones can be estimated in several ways: from real data on EMS calls [19, 20], from the population in the given demand zone [21, 22], or from EMS interventions per 1,000 population and the population structure [23]. The first way is possible if EMS statistics for all demand zones under consideration are available. The second way is a rough estimation that need not correlate with the real number of patients, since the demand for EMS is influenced by the population's age structure, which varies across a large-scale area, as we will demonstrate later on. The result is that the solution might not be optimal for the real demand, and a so-called surrogation error might arise [24]. Since historical data on ambulance interventions in every municipality were not available to us, we decided to predict EMS cases in the third way, using a sample of patient data provided to us by Falck Záchranná, a.s. and publicly available demographic data on the population's size and age structure. This way of demand modelling results in a more realistic solution.
Falck Záchranná a.s. supplied us with depersonalized data on 149,474 patients served in the year 2015; therefore, the demographic data we used for demand estimation also refer to 2015. The 2015 population data published by the Statistical Office of the Slovak Republic reveal that people aged 65 years and over constitute 14.45% of the population. However, the population's age structure is not homogeneous throughout the state. To get a better idea about the age of people in different regions of Slovakia, we calculate an aging index for each territorial unit as the ratio of inhabitants who are at least 65 years old to inhabitants below the age of 65. The index varies a lot among municipalities (min = 0.012, max = 1.333, median = 0.177, mean = 0.189, sd = 0.083). At the district level the differences are not so conspicuous (min = 0.096, max = 0.263, median = 0.171, mean = 0.171, sd = 0.030), but their graphical presentation is more readable and illustrates the distribution of elderly people across the country (Fig. 1). The regions in the north with a low index have a high birth rate. The highest index is in the central parts of the two largest cities, Bratislava and Košice, where elderly people are in the majority.
Spatial distribution of elderly people (Slovakia, 2015)
To calculate the share of elderly people in emergency dispatches, we use the Falck sample data. This dataset contains information about the time and date of each incident, the patient's age, the initial medical diagnosis, and time stamps of the whole EMS trip. The data suggest that patients aged 65 years and over required 42.34% of the interventions.
Combining Falck data with publicly available statistics reported by the National Dispatch Center [25] and the population statistics published by the Statistical Office of the Slovak Republic we can calculate the rates of emergency interventions for various age groups according to Eq. (1):
$${rate}_{k}=1000\frac{D\cdot {Falck}_{k}}{{Pop}_{k}\cdot {Falck}_{total}}$$
where ratek is the 1-year number of emergency cases per 1,000 persons in age group k, Falckk is the number of patients in age group k in the Falck dataset, Falcktotal is the total number of patients in the dataset, D is the total number of ambulance dispatches reported by the National Dispatch Center for the year 2015, and Popk is the number of inhabitants in age group k.
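Eq. (1) can be sketched in a few lines of code. The figures plugged in below are invented placeholders, not the actual Falck or National Dispatch Center values:

```python
# Illustrative computation of Eq. (1): annual emergency cases per
# 1,000 persons in age group k. All numbers are hypothetical.

def incident_rate(falck_k, falck_total, dispatches, pop_k):
    """rate_k = 1000 * D * Falck_k / (Pop_k * Falck_total)."""
    return 1000.0 * dispatches * falck_k / (pop_k * falck_total)

# A hypothetical 65+ age group:
rate_65plus = incident_rate(falck_k=63_000, falck_total=149_474,
                            dispatches=500_000, pop_k=780_000)
```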
Within each age group we can further distinguish two groups of patients according to their initial medical diagnoses. The most severe diagnoses are denoted as the First Hour Quintet (FHQ): chest pain, severe trauma, stroke, severe respiratory difficulties, and cardiac arrest. Although the international definition of the FHQ does not list unconsciousness, it is also a life-threatening condition; therefore, after a consultation with emergency physicians, we decided to include it in the FHQ. The FHQ conditions require an immediate response. If a call is recognized as an FHQ call, it gets the highest priority, because every minute of delay in the response reduces the patient's chance of survival. FHQ patients account for 26.51% of all patients in the Falck dataset.
The analysis of EMS data reveals that the overall rates as well as FHQ rates increase with age (Fig. 2). The Spearman correlation is ρ = 0.95 for overall rate and ρ = 0.96 for FHQ rate. The dependency curve has an exponential shape, with the acceleration from the age of 65 years.
Emergency incident rates increase with age (Slovakia, 2015)
For modelling purposes we distinguish three age categories: (i) children aged 0–14, who have the lowest emergency incident rates (see Fig. 2), (ii) teens and nonelderly adults aged 15–64, and (iii) elderly people aged 65 years and over, who call EMS most frequently. The emergency incident rates for these categories are shown in Table 1. Based on the age structure and the rates, we can estimate the annual number of EMS patients in municipality j according to Eq. (2):
$${b}_{j}=\sum _{k=1}^{3}{rate}_{k}{pop}_{kj}$$
where ratek is the 1-year number of emergency cases per 1,000 persons in age group k, and popkj is the number of inhabitants in age group k in municipality j. Similarly, the annual number of high-risk patients \({b}_{j}^{FHQ}\) can be calculated using the rates of FHQ incidents.
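Eq. (2) amounts to a rate-weighted sum over the three age categories. In the sketch below, populations are divided by 1,000 because the rates are expressed per 1,000 persons; the rates and population figures are placeholders, not the values of Table 1:

```python
# Sketch of Eq. (2): expected annual EMS patients in a municipality.
# Rates (cases per 1,000 persons per year) are invented placeholders.

RATES = {"0-14": 30.0, "15-64": 70.0, "65+": 250.0}

def annual_patients(pop_by_group):
    return sum(RATES[k] * pop_by_group[k] / 1000.0 for k in RATES)

# A hypothetical municipality:
b_j = annual_patients({"0-14": 1200, "15-64": 5300, "65+": 900})
```

The same function with the FHQ-specific rates yields \({b}_{j}^{FHQ}\).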
Table 1 Emergency incident rates
Modelling candidate locations and travel times
Candidate locations where stations can be placed are all municipalities, plus other villages that do not have a local government but are the seats of stations today. There are 2,934 candidate locations in Slovakia, and 2,928 of them are municipalities. The towns, boroughs and villages are represented by the nodes on the road network that are closest to the centre of the municipality. This way the calculation of travel times can be based on real network distances. The digital road network was downloaded from the OpenStreetMap database [26], which is a freely available source of geographical data. The travel times are based on deterministic vehicle speeds that depend on the quality of the road, its location inside or outside a built-up area, the type of movement (whether the ambulance drives at standard speed or at maximum speed with lights and sirens), and the traffic volume, which is higher in the morning rush hours (6:30 to 9 am) and the evening rush hours (3 to 6 pm). The average ambulance speeds with regard to road category and time of day were derived from GPS records of ambulance trips by the Falck company (Tables 2 and 3) [21].
Table 2 Average speed in urban areas (kilometres per hour)
Table 3 Average speed in rural areas (kilometres per hour)
The bi-criteria mathematical programming model with coverage and response time objectives
The first model we propose is a bi-criteria model to maximize the expected coverage of high-priority FHQ patients and to minimize response time to all potential patients. The optimization procedure consists of the following steps:
Specify the stations that can be relocated.
Estimate the workload of ambulances using computer simulation.
Optimize the locations of the stations.
Specify geographic coordinates and assign ambulance types to the relocated stations.
If the distribution of the stations did not change, then stop. Otherwise, go back to step 2.
In the following text we describe the individual steps in detail.
To get a realistic solution, we do not allow all stations to change their current position. To decide which stations must remain where they are, we apply two assumptions. Firstly, we suppose that ambulances in large towns are fully engaged. Therefore, if the expected number of patients in a town exceeds the capacity of all ambulances currently stationed there, we do not allow them to change their positions. These stations are denoted as fixed and are not subject to the optimization. The demand volume in the corresponding municipalities is reduced by the total number of patients served by fixed stations. Secondly, we respect previous managerial decisions about multiple stations in a town where the estimated number of patients is less than the capacity of a station. There may be reasons that are not apparent to us but have been verified in practical operation. In such a case the model leaves one of the stations in place and seeks better locations for the other stations. The preserved stations are not fully engaged by local residents in this case; therefore, the mathematical model fixes their positions but allows other municipalities to be assigned to their service areas. A side effect of this pre-processing step is that the number of decision variables gets smaller and the complexity of the model is reduced.
A demand point is covered if an ambulance reaches it within a time standard. The desired service standard was set with regard to critical patients, who are in life-threatening conditions and for whom every minute of delay in response time dramatically worsens outcomes. These patients should be reached within 8 min, which is a widely accepted standard for critical patients in most European countries [27]. Thus, assuming a one-minute pre-trip delay, we set the travel time limit \({T}^{max}\) to 7 min. Using this time standard we define the neighbourhood of a municipality, which consists of all candidate locations at most \({T}^{max}\) minutes away.
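The neighbourhood sets follow directly from the travel-time matrix. A minimal sketch, with an invented travel-time matrix:

```python
# Neighbourhood of demand point j: candidate sites i with t_ij <= T_max.
# The travel-time matrix below is invented for illustration.

T_MAX = 7  # minutes

t = {("i1", "j1"): 5, ("i2", "j1"): 9, ("i3", "j1"): 7,
     ("i1", "j2"): 12, ("i2", "j2"): 3, ("i3", "j2"): 8}

def neighbourhood(j, t, t_max=T_MAX):
    return {i for (i, jj) in t if jj == j and t[(i, jj)] <= t_max}

n_j1 = len(neighbourhood("j1", t))   # n_j of the notation below
```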
To formalize the model, we introduce the following notation.
Sets and indices
I—set of candidate locations
I1—set of fixed candidate locations where the ambulances are not fully engaged
J—set of demand points (all municipalities)
i ∈ I—candidate location
j ∈ J—demand point
k—index corresponding to the number of stations
\({N}_{j}=\left\{i\in I:{t}_{ij}\le {T}^{max}\right\}\)—set of candidate locations in the neighbourhood of demand point j
p—number of stations to be sited
qj—probability of an ambulance in the neighbourhood of demand point j being unavailable
\({T}^{max}\)—the desired service standard; \({T}^{max}=7\)
bj—the annual number of EMS patients in municipality j reduced by the capacity of the fixed stations
\({b}_{j}^{FHQ}\)—the annual number of FHQ patients in municipality j
tij —shortest travel time from candidate location i to demand point j
sti—the number of fixed stations at candidate location i
\({n}_{j}=\left|{N}_{j}\right|\)—the number of candidate locations in the neighbourhood of demand point j.
Decision variables
$$x_{i}=\left\{ {\begin{aligned} {1,} &\quad{{\text{if a station is located at site }}i} \\ {0,} & \quad{{\text{otherwise}}} \\ \end{aligned} } \right.$$
$$y_{jk}=\left\{ {\begin{aligned} {1,} & \quad {{\text{if demand point }}j{\text{ is covered by at least }}k{\text{ stations}}} \\ {0,} & \quad {{\text{otherwise}}} \\ \end{aligned} } \right.$$
$$z_{{ij}}=\left\{ {\begin{aligned} {1,} & \quad {{\text{if demand point }}j{\text{ is served by the station located at site }}i} \\ {0,} &\quad {{\text{otherwise}}} \\ \end{aligned} } \right.$$
The following model is a mathematical programming formulation of the bi-criteria MEXCLP-pMP location model.
$$Maximize{\text{ }}f = \sum\limits_{{j \in J}} {\sum\limits_{{k = 1}}^{{n_{j} }} {b_{j}^{{FHQ}} } } \left( {1 - q_{j} } \right)q_{j}^{{k - 1}} y_{{jk}}$$
$$Minimize\,g = \sum\limits_{{i \in I}} {\sum\limits_{{j \in J}} {t_{{ij}} } } b_{j} z_{{ij}}$$
subject to
$$\sum\limits_{{i \in N_{j} }} {\left( {x_{i} + st_{i} } \right)} \ge \sum\limits_{{k = 1}}^{{n_{j} }} {y_{{jk}} } \quad for\,j\, \in \,J$$
$$\sum\limits_{{i \in I}} {z_{{ij}} } = 1 \quad for\, j \, \in \, J$$
$$z_{{ij}} \le x_{i} \quad for\,i\,\in I - I_1,\,j\,\in \,J$$
$$z_{{ij}} \le 1 \quad for\,i \in\,I_1,\,j\,\in\,J$$
$$\sum\limits_{{i \in I}} {\left( {x_{i} + st_{i} } \right)} = p$$
$$x_{i} \in \left\{ {{{0}},{{1}}} \right\}\,\quad for\,i\, \in \,I - I_1$$
$$y_{{jk}} \in \left\{ {{{0}},{{1}}} \right\} \quad for\, j \in J,k = 1, \ldots ,n_{j}$$
$$z_{{ij}} \in \left\{ {{{0}},{{1}}} \right\} \quad for\,i\, \in \,I,\,j\,\in \,J$$
The objective function (3) maximizes the expected coverage of critical patients, taking into account the possible unavailability of ambulances. The term \({b}_{j}^{FHQ}\left(1-{q}_{j}\right){q}_{j}^{k-1}\) represents the increase in expected coverage of municipality j brought about by the kth station. According to Eq. (5), siting multiple stations in the neighbourhood of municipality j enables multiple variables yjk to take the value of one and account for the increase in coverage. The objective function (4) minimizes the total travel time needed by the ambulances to reach all patients. The average travel time is equal to the total travel time divided by the number of all patients. Constraints (6) assign every municipality j to the service area of exactly one station i. Constraints (7) ensure that if a municipality j is assigned to a node i, then a station will be opened at the node i. Constraints (8) allow a municipality j to be served from a fixed station that is not fully engaged. Constraint (9) limits the number of located stations to their current amount. The obligatory constraints (10)–(12) specify the definition domains of the variables.
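The marginal-coverage logic of objective (3) can be verified numerically: with unavailability probability q, the kth station in the neighbourhood adds \((1-q)q^{k-1}\) to the coverage probability, and the first m increments sum to \(1-q^{m}\), the probability that at least one of m ambulances is free. A small sketch:

```python
# Increments of expected coverage contributed by the 1st..m-th station
# in a neighbourhood, as in the term (1 - q) * q**(k-1) of objective (3).

def coverage_increments(q, m):
    return [(1 - q) * q ** (k - 1) for k in range(1, m + 1)]

q = 0.4
inc = coverage_increments(q, 3)
# The increments sum telescopically to 1 - q**m:
total = sum(inc)
```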
To solve the bi-criteria model, we used the lexicographic method. The method assumes a ranking of the objective functions according to their importance but, in contrast to scalarizing and goal programming approaches, it does not require additional parameters. It is an iterative method: in the first step, the problem is optimized with respect to the most important objective. If this problem has a unique optimal solution, then that solution is also the best solution to the original multiple-criteria problem and the method finishes. Otherwise, the problem with the second most important objective function is solved subject to the condition that the first objective function value does not worsen. The process repeats until a single optimal solution is found. In our problem, we consider the expected coverage of high-priority patients to be more important than the average response time. First, the single-criterion model (3), (5), (9)–(11) is solved, maximizing the expected coverage of high-priority patients. Let us denote its optimal objective value by \({f}^{*}\). Then the weighted p-median problem (4), (6)–(12) with the additional constraint (13) is solved.
$$\mathop \sum \limits_{{j \in J}} \mathop \sum \limits_{{k = 1}}^{{n_{j} }} b_{j}^{{FHQ}} \left( {1 - q_{j} } \right)q_{j}^{{k - 1}} y_{{jk}} \ge f^{*}$$
Constraint (13) ensures that the expected coverage of the most critical patients does not worsen when minimising the average response time for all patients.
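The lexicographic principle itself is easy to demonstrate on a finite set of candidate solutions: first keep only the solutions maximizing f (coverage), then among those pick the one minimizing g (travel time). The candidate tuples below are made up for illustration:

```python
# Toy lexicographic optimization over an explicit candidate set.

candidates = [
    {"name": "A", "f": 90, "g": 120},
    {"name": "B", "f": 95, "g": 150},
    {"name": "C", "f": 95, "g": 130},
]

f_star = max(c["f"] for c in candidates)                  # step 1: max coverage
best = min((c for c in candidates if c["f"] >= f_star),   # step 2: min time
           key=lambda c: c["g"])                          # subject to f >= f*
```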
The structure of the MEXCLP model (3), (5), (9)–(11) makes it easy to solve with a general-purpose solver. The weighted p-median problem (4), (6)–(10), (12), (13) is an NP-hard problem with a huge number of variables that cannot be solved exactly. Instead, an approximation algorithm has to be used. We chose the kernel search method and adjusted it to our specific problem. Kernel search is a recently developed matheuristic that has been successfully applied to mixed integer linear problems (MILPs) with binary variables [28, 29]. In principle, it is a decomposition method that solves a sequence of sub-problems of the original MILP. A sub-problem consists of a subset of the decision variables: the promising variables (the kernel) plus a small subset of the remaining ones. The sub-problems are solved using a general-purpose MILP solver as a black box, so the method benefits from the efficiency of state-of-the-art solvers.
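The control flow of kernel search can be sketched independently of the MILP itself. In the skeleton below, the black-box `solve` stands in for a MILP solver run on a restricted variable set; it is mocked here by a toy problem (choose p items of maximum value), so the skeleton is an illustration of the method's structure, not our production implementation:

```python
# Skeleton of the kernel search matheuristic with a mocked sub-problem solver.

def kernel_search(items, value, p, kernel_size, bucket_size):
    order = sorted(items, key=value, reverse=True)    # promising variables first
    kernel = order[:kernel_size]
    rest = order[kernel_size:]
    buckets = [rest[i:i + bucket_size] for i in range(0, len(rest), bucket_size)]

    def solve(variables):                 # mock MILP solver on a sub-problem:
        chosen = sorted(variables, key=value, reverse=True)[:p]
        return sum(value(i) for i in chosen), chosen

    best_val, best_sol = solve(kernel)
    for bucket in buckets:                # solve kernel + one bucket at a time
        val, sol = solve(kernel + bucket)
        if val > best_val:
            best_val, best_sol = val, sol
            # variables entering the incumbent solution join the kernel
            kernel = list(dict.fromkeys(kernel + sol))
    return best_val, best_sol
```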
The solution of the model defines the municipalities where the stations will be deployed (at most one station in a municipality). This output is merged with the pre-processed fixed locations, resulting in multiple stations in more populated towns and boroughs. However, at this moment we do not have specific addresses, but multiple stations are regarded as located in the single (central) node of the municipality. The geographic positions of the stations inside a given municipality are determined afterwards, using a rule-of-thumb. We proceed from the existing locations. The addresses of fixed stations are preserved. A new station, if there is one, is placed at the municipality's central node on the road network. If one or more stations out of multiple existing stations are removed, they are selected randomly.
The model does not distinguish the types of emergency units. However, their distribution, especially the locations of ALS ambulances, affects the efficiency of the system, because an ALS ambulance is always dispatched to a high-priority call. We distribute ambulances among the optimized station locations a posteriori in the following way: first, we retain the type of the fixed stations that are disregarded in the optimization, as well as the type of those stations whose positions were not changed by the optimization model. As regards the relocated stations, we first place ALS ambulances near their current positions, which are mainly at hospitals. The remaining stations are assigned BLS ambulances.
The probability of an ambulance in the neighbourhood of a municipality being unavailable is an input parameter of the bi-criteria model. It is estimated using computer simulation of the EMS system. The probability is calculated as the average workload of potential ambulances in the neighbourhood. However, the workload depends on the distribution of the ambulances and is therefore de facto an output of the model. Since we need it as an input, it must be estimated a priori. Initially, the workload is estimated using the current station locations. If there is at least one ambulance currently operating at a candidate location, then the probability of this candidate is calculated as the average workload of the currently operating ambulances. If the candidate does not have a station today, then its workload is set to the average workload of the stations in the 30-min neighbourhood of the candidate. The optimized distribution of the stations is submitted to simulation to obtain the workload for the second run of the model. The process is repeated until convergence is achieved. Convergence is measured by the ambulance distribution: when the locations in two successive solutions are (almost) identical, the process stops.
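This optimization-simulation loop is a fixed-point iteration. Its control flow can be sketched with placeholder `optimize` and `simulate` callables (both hypothetical stand-ins for the MILP model and the simulation run):

```python
# Sketch of the iterative loop: busy fractions feed the location model,
# simulated workload of the result feeds the next iteration, until the
# station layout stabilises. `optimize` and `simulate` are placeholders.

def iterate_until_stable(q0, optimize, simulate, max_iter=20):
    q, prev_locations = q0, None
    for _ in range(max_iter):
        locations = optimize(q)           # location model with current q
        if locations == prev_locations:   # convergence: identical layout
            return locations
        q = simulate(locations)           # updated workload estimate
        prev_locations = locations
    return prev_locations
```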
The hierarchical model minimizing response time
To cope with the two-tiered EMS system that works in Slovakia and in many other countries, it is desirable to design an optimization model where different types of EMS units are taken into account. The EMS system with two vehicle types can be viewed as a hierarchical facility system. Using the classification by Şahin and Süral [30], we face a multi-flow, nested, and non-coherent system. If the objective is to minimize the total distance (or travel time, respectively) from demand zones to the closest ALS and BLS stations, then the hierarchical pq-median problem is to be solved. We propose a modification of the pq-median model by Serra and ReVelle [31]. Serra and ReVelle focus on coherent systems where all demand areas assigned to a lower level facility must be assigned to one and the same upper level facility. However, this condition does not hold in the EMS system we deal with. Thus we have amended their model for a non-coherent system. Moreover, we allow an upper level facility to be located only at a site where a lower level facility is opened.
In addition to location variables xi that decide on location of stations regardless of their type, we need another set of variables that model the decisions on locating only ALS stations:
$$u_{i}=\left\{ {\begin{aligned} {{{1,}}} &\quad{{\text{if an ALS station is located at site }}i} \\ {0,} & \quad {{\text{otherwise}}} \\ \end{aligned} } \right.$$
Service areas of the ALS stations are modelled using the following allocation variables:
$$v_{{ij}}=\left\{ {\begin{aligned} {1,} &\quad {{\text{if demand point }}j{\text{ is served by the ALS station located at site }}i} \\ {0,} &\quad {{\text{otherwise}}} \\ \end{aligned} } \right.$$
The lower level of the hierarchical pMP model consists of the objective function (4) and constraints (6)–(10) and (12). It decides on location of stations regardless of their type and creates their service areas. The upper level of the model decides which stations opened in the lower level will house ALS ambulances:
$$Minimize\sum _{i\in I}\sum _{j\in J}{t}_{ij}{b}_{j}{v}_{ij}$$
$$\sum\limits_{{i \in I}} {v_{{ij}} } = 1\, \quad for\, j \in J$$
$$v_{{ij}} \le u_{i} \, \quad for \,i \in I,\,j \in J$$
$$u_{i} \le st_{i} + x_{i} \,\quad for\,i\, \in I$$
$$\sum _{i\in I}{u}_{i}=r$$
$$u_{i} \in \left\{ {0,1} \right\}\,\quad for \,i \in I$$
$$v_{{ij}} \in \left\{ {{{0}},{{1}}} \right\}\,\quad for \,i \in I,\,j \in J$$
The objective function (14) minimizes the total travel time needed by the ALS ambulances to reach all patients. Constraints (15) assign every municipality j to the service area of exactly one ALS station i. Constraints (16) say that a municipality j can be assigned only to an open ALS station. Constraints (17) allow an ALS ambulance to be allocated only to a station opened at the lower level of hierarchy. Constraint (18) limits the number of located ALS stations to their current amount r. The remaining constraints (19) and (20) specify binary variables.
Both levels of the hierarchical model can be solved exactly using an efficient method by Janáček and Kvet [32]. The ALS ambulances will be allocated to those fixed or relocated stations for which ui = 1 in the optimal solution. The remaining stations will house a BLS ambulance.
Computer simulation model
A detailed computer simulation model was developed [21]. Its purpose in this study is twofold: (i) to estimate the workload of ambulances as an input for the mathematical programming model, and (ii) to evaluate the performance of the EMS system with the infrastructure proposed by the model. Computer simulation models reality at a less abstract level than a mathematical programming model does, and therefore provides a better idea of the performance of the projected system. It yields quantitative indicators that cannot be derived from the mathematical programming model itself.
We implemented a self-driven, agent-based simulation model using AnyLogic simulation software. The model is developed on the Java simulation core. We implemented a library of classes and functions in Java for the simulation support.
The model was calibrated using the following data sources:
Publicly available statistics published by the National Dispatch Center;
The positions of the stations provided by the Ministry of Health;
A sample of patient data provided by Falck Záchranná a.s.;
LandScan data on population distribution;
OpenStreetMap data on the road network;
Historical data on the average ambulance speed with regard to the road category and day time provided by Falck Záchranná a.s.
The dataset obtained from Falck Záchranná a.s. allows us to extract important knowledge. First of all, the time distribution of calls can be revealed. With regard to seasons and weekdays, we did not observe statistically significant differences in the number of calls. However, the call rates change significantly during the day: we can observe two peaks, one between 9 and 11 am and the other between 5 and 9 pm. The arrival of calls is therefore modelled as a non-homogeneous Poisson process with the arrival rate varying with the time of day.
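A standard way to generate such a non-homogeneous Poisson process is Lewis–Shedler thinning: simulate a homogeneous process at the peak rate and accept each event with probability λ(t)/λmax. The two-peak hourly profile below is illustrative only, not the rate curve fitted to the Slovak data:

```python
# Non-homogeneous Poisson arrivals via thinning. The rate function is
# an invented two-peak profile (~10 am and ~7 pm), calls per hour.

import math
import random

def hourly_rate(t_hours):
    h = t_hours % 24
    return (40 + 20 * math.exp(-(h - 10) ** 2 / 8)
               + 25 * math.exp(-(h - 19) ** 2 / 8))

def thinning(rate, rate_max, horizon, rng=random.random):
    t, events = 0.0, []
    while True:
        t += -math.log(rng()) / rate_max    # homogeneous inter-arrival time
        if t >= horizon:
            return events
        if rng() < rate(t) / rate_max:      # accept with prob. rate(t)/rate_max
            events.append(t)

calls = thinning(hourly_rate, rate_max=85, horizon=24)  # one simulated day
```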
The spatial distribution of patients is modelled using the LandScan database [33]. LandScan data represent an ambient population (average presence of people over 24 h). A grid cell corresponds to an area of 30′′ × 30′′ (arc-seconds) in the WGS84 geographical coordinate system. The territory of the Slovak Republic is covered by 70,324 grid elements. The call that has been generated by the Poisson process is assigned to a grid cell with a probability that is proportional to its population. Inside the grid cell, the call is assigned randomly to a node on the road network.
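The population-proportional assignment of a call to a grid cell is a weighted random draw; `random.choices` implements it directly. The cell populations below are invented:

```python
# Assign a generated call to a LandScan-style grid cell with probability
# proportional to its ambient population. Cell populations are invented.

import random

cells = {"cell_001": 1500, "cell_002": 40, "cell_003": 900}

def draw_cell(cells, rng=random):
    ids, weights = zip(*cells.items())
    return rng.choices(ids, weights=weights, k=1)[0]
```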
The model captures all important processes presented in the management of emergency patients including precise modelling of the distribution of processing times.
The main features are the following:
As to demand modelling, we take into account three important characteristics: the arrival distribution, the geographical distribution and the priority of calls.
The model of the service time comprises all phases of the ambulance trip—the journey to a patient, treatment of the patient at the site of the incident, transportation to a hospital, drop-off time in the hospital, and the journey back to the base station.
The movement of an ambulance respects the underlying road network.
The on-scene time is modelled using a probability distribution that depends on the patient's diagnosis and crew's qualification.
The probability of transporting a patient to a hospital depends on the type of the intervening crew: the real data show that 77% of patients treated by a paramedic team and 51% of patients treated by a physician are transported to a hospital. If a patient has to be transported, the closest hospital appropriate to their condition and age is chosen (for example, there are hospitals specialized in cardiovascular diseases, and children's hospitals).
In the hospital, the rescue team hand over the patient to the hospital staff, then they may spend some time cleaning and resupplying the vehicle. The time needed to perform these tasks is called drop-off time. The probability distribution of the drop-off time is modelled separately for every hospital. In most cases, the Erlang distribution fits well. The average drop-off time ranges from 7.1 to 36.2 min.
After leaving the hospital, the ambulance is available to respond to another call; in particular, it can be dispatched to another rescue while it is returning to its home station. The logic of ambulance dispatching approximates very well the rules adopted in practice. In the simulation model this means that it is possible to change the destination of the ambulance while it is moving along the road.
Secondary transports are modelled as well, since they reduce the availability of the ambulances. A secondary transport is a planned activity where an ambulance does not respond to an emergency call but transports a patient or medical material between two hospitals.
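The service-time composition described in these features can be sketched as follows. The crew-dependent transport probabilities (77% for paramedic teams, 51% for physicians) are from the text; the travel time to hospital and the Erlang parameters of the drop-off time are assumed for illustration (the paper only states that Erlang fits well and that average drop-off times range from 7.1 to 36.2 min per hospital).

```python
import random

def ambulance_trip_time(crew, travel_to_scene, on_scene, rng=random):
    """Sketch of the total service time: travel to the patient, on-scene
    treatment, and -- with a crew-dependent probability -- transport to
    hospital followed by an Erlang-distributed drop-off time."""
    p_transport = {"paramedic": 0.77, "physician": 0.51}[crew]
    total = travel_to_scene + on_scene
    if rng.random() < p_transport:
        to_hospital = 12.0                   # assumed travel time (min)
        # Erlang(k, theta) = Gamma with integer shape; parameters assumed,
        # giving a mean of 12 min (within the reported 7.1-36.2 range)
        drop_off = rng.gammavariate(4, 3.0)
        total += to_hospital + drop_off
    return total
```

In the full model the destination hospital is chosen as the closest one complying with the patient's condition and age, and the drop-off distribution is fitted per hospital.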
These features represent a significant improvement in comparison to other simulation models reported in the literature.
The model was verified using the following techniques recommended also in [12]:
Animation to graphically visualize the movements of vehicles through the road network to check whether the rescue process and the chosen routes are as expected. During the rescue process, the colour of the vehicle changes to reflect its current state (movement to a patient, stay at the scene, transport of the patient to a hospital, return back to the base station).
Face validity by consulting EMS specialists who evaluated the model's conception and output behaviour compared to the real-world system.
Traces to track the movement of vehicles and occurrence of every event in the model (call arrival, vehicle assignment, destination hospital selection etc.) so as to validate the correctness of the model logic.
Sensitivity analysis by performing a comprehensive set of simulation experiments with different values of input parameters (e.g. arrival rate or hospitals with emergency departments) to determine if the model's output is as expected.
The output of the simulation model includes the following EMS performance indicators:
Average response time, since it has been monitored by the National Dispatch Center;
Percentage of calls responded to within 15 min, because a 15-min response time is regarded as standard in Slovakia;
Number of municipalities with the average response time longer than 15 min;
Average response time for the high-priority (FHQ) calls and the percentage of these calls responded to within 8 min;
Average ambulance workload and its variation.
The mathematical models were solved using the solver Gurobi Optimizer 8.1.1. The exact method [32] for the p-median problem is very efficient. The computing time was 580 s for the weighted p-median model and 607 s for the hierarchical pq-median model, respectively. The kernel search method for the MEXCLP-pMP model was implemented in Java language. A single run of the model for one workload setting took on average about 6 min. Altogether 4 iterations of the model were needed until convergence in the station distribution was achieved.
The simulation model, described above, was used to evaluate the current locations of emergency stations, as well as the optimized locations proposed by mathematical models.
The results of the simulation experiments are summarised in Table 4. The simulation experiment for one set of station locations consisted of 10 replications. One replication simulated 91 days of EMS performance. For response times, the mean values from 10 replications and 95% confidence intervals are reported. For coverage indicators, the mean values are given. The best values of the indicators are displayed in bold. Although the ambulance trip data and the population data are related to the year 2015, the model was validated in 2017 using the latest positions of the stations (5 stations had shifted in the meantime). That is why we refer to June 2017 as the current date.
Table 4 Performance indicators for the current and optimized locations
The computer simulation of the current (2017) system revealed that the system is short of the target to reach 95% of patients within 15 min. The real accessibility within this time limit is only 75.26%. 868 municipalities (almost 30%) have the average response time longer than 15 min (Fig. 3). The Slovak system also exhibits poor performance regarding the 8-min response-time standard for the high-priority calls. Only 38.84% of critical patients are reached within 8 min, which is far less than the EU average of 66.9% [27]. The average ambulance workload is 31.98%, which corresponds to other EMS systems worldwide where ambulances are typically busy at least 30% of the time [19].
Municipalities with the average response time longer than 15 min, current station location
From the rest of the table we can observe that the reorganization of the system has a positive effect on the performance. Both MEXCLP-pMP and pMP models relocate approximately 78% of the stations (150 and 151, respectively). Regardless of the ambulance allocation, the mathematical programming models reduce significantly the overall average response time, as well as response time for the most critical patients (their confidence intervals do not overlap with the confidence intervals for status quo). In parallel with reducing the response time, the accessibility within a given time threshold is increasing.
As regards the two policies of allocation of ALS ambulances, the hierarchical model that incorporates the decisions on particular ambulance types achieves better results than the models where the type of the stations is defined in a post-optimization process. The most important improvement is in the accessibility of the critical patients: in comparison to the current state, their average response time is reduced by 56 s. One minute may not seem like much, but one has to realize that for a person in a life-threatening condition, such as a cardiac arrest, the line between life and death is very thin, and every second matters. Cardiac arrest and unconsciousness are the most frequent diagnoses of those patients who die before or during the rescue operation. From the Falck sample data on these patients we can derive the survival probability as a function of response time t. The survival probability function is as follows [21]:
$$s\left( t \right) = \frac{1}{{1 + \exp \left( { - 2.04492 + 0.045427t} \right)}}$$
From the sample data and the total number of patients reported by the National Dispatch Center we can estimate that in 2019 there were 26,003 most-critical patients in Slovakia. The reduction of the average response time by 56 s means that the survival probability increases by 0.61%. As a result, 142 more patients could be saved. We consider this a significant improvement, since every life matters.
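The survival model can be evaluated directly. The baseline response time of 12 min used below is an assumption for illustration (Table 4 with the baseline values is not reproduced here), but it yields a gain of roughly the reported magnitude:

```python
import math

def survival(t):
    """Survival probability as a function of response time t in minutes,
    using the logistic model fitted to the Falck data [21]."""
    return 1.0 / (1.0 + math.exp(-2.04492 + 0.045427 * t))

# Effect of a 56-second reduction at an assumed 12-minute baseline:
gain = survival(12.0 - 56.0 / 60.0) - survival(12.0)
```

Multiplying such a gain by the estimated 26,003 most-critical patients gives the order of magnitude of additional lives saved quoted above.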
Regardless of the model and ALS allocation policy, relocating the stations improves the accessibility mainly in the densely populated western part of the country. The most successful hierarchical pMP model reduces the overall number of municipalities with the average response time greater than 15 min by 267 (31%) (Fig. 4). The model also generates the smallest ambulance workload and thus increases the probability that the closest ambulance will be available when needed. At the same time, ambulance workload is distributed more evenly (coefficient of variation is less than at present).
Municipalities with the average response time longer than 15 min after optimization by the hierarchical pMP model
Our study is the first to compare different objectives for location of EMS stations in a large urban–rural area. This way it fills the gap in ambulance location literature since the research so far has been concentrating on urban rather than rural or mixed areas [34]. Modelling the EMS system for a heterogeneous urban–rural area is more challenging than it is for a homogeneous region. There may be different time standards for densely and rarely populated areas, there are big differences in ambulance workload, different road network quality, traffic volume, and the distance to the nearest hospital. Therefore the results of urban-oriented studies cannot be directly applied to a region with diverse topography and demography.
We concentrate on two objectives that are supposed to mostly influence the outcomes of emergency medical services, particularly the maximum coverage and the minimum average response time. Previous studies [15, 35, 36] suggested that the maximum coverage objective itself does not perform well. The response time related objectives, such as the average response time [35, 36] and maximum survivability [15], result in a better system performance than the maximum coverage objective. Although these outcomes were derived for metropolitan areas, we considered them to be a good starting point for our research. Another viewpoint is that coverage ensures equity among the patients, at least to some extent. That is why we have developed a multi-criteria model that combines the maximum coverage of high-risk patients with the minimum average response time to all patients. This model determines the deployment of ambulance stations; ambulances are assigned to the open stations under a heuristic rule. The other model we have proposed is a hierarchical model that minimizes the average response time to all patients with both types of emergency units. Finally, the simulation study has revealed that the model with the coverage objective improves the existing system but is outperformed by models that rely on the response time only. Our findings are in line with Felder and Brinkmann [37], who give theoretical evidence that an equal access approach to the EMS provision does not maximize the number of lives saved.
A reason for the poorer performance of the coverage objective may be the imprecise estimation of busy fractions of ambulances. Busy fractions of the candidate locations that do not have a station today are set at the average value of the stations in the neighbourhood, which might be too optimistic a value for many candidate locations. Covering models in general do not differentiate between locations within the same response time threshold. Therefore, the model may decide to place some stations in small villages near large towns. From the point of view of the covering objective, all patients in the town are considered perfectly satisfied with the service; however, the real workload of the ambulances would be enormous. For example, let the village, whose demand is 1, be 5 min away from the town with a call volume of 100. If the ambulance was located in the village, travelling to the scene would add 500 min to its workload, compared to 5 min if the station was located in the town. So we do not recommend using coverage criteria for a large-scale region.
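The arithmetic of this village-versus-town example, as a small helper (the function name and signature are illustrative):

```python
def scene_travel_minutes(location, village_demand=1, town_demand=100, dist=5):
    """One-way scene-travel minutes added to the ambulance workload,
    depending on where the single station is placed. The village (demand 1)
    is 5 minutes from the town (call volume 100), as in the example above."""
    if location == "village":
        return town_demand * dist      # 100 town calls, 5 min away each
    elif location == "town":
        return village_demand * dist   # 1 village call, 5 min away
    raise ValueError(location)
```

Placing the station in the village adds 500 min of scene travel versus 5 min in the town, which is exactly the disparity a pure coverage objective cannot see.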
In our previous studies we experimented with several other models using response time objectives. The maximum response time was examined in [21]. The capacitated version of the p-median problem was investigated in [29]. Here the average response time was minimized, provided the number of calls an ambulance could serve was restricted. None of these models outperformed the hierarchical pq-median model.
The important part of the overall emergency call-to-care interval is the patient access time interval (PATI) measured from EMS vehicle arrival at the incident site to the time EMS personnel contact the patient. This time interval accounts for 10 to 44% of the overall emergency call-to-care interval [38]. Although reducing PATI may improve patient outcomes, it is not affected by the locations of ambulance base stations. Rather, community-specific strategies are to be developed to overcome the patient-access barriers. In our study, PATI is modelled in computer simulation as a part of the on-scene time.
The results of our study can be applied in all countries with a tiered EMS system that utilizes different types of emergency units, dispatching ALS units to the most severe events and using BLS units for non-urgent and scheduled transport of stable patients. Tiered systems apply the Franco-German model of EMS delivery where the crew is qualified to treat patients in their homes or at the scene. Such systems are common in many European countries such as Germany, France, Greece, Austria, Czech Republic, Hungary, and Poland [39,40,41].
If the results of our study were to be used to reorganize an existing system, we recommend assessing the current system thoroughly. None of the EMS systems in the countries mentioned above is a greenfield project. There may be many decisions and measures that already work effectively, and the reorganization should respect them. Here we have in mind especially the decisions on which stations should remain in their current positions. Another reason for this pre-processing step is that the p-median model does not allow multiple stations to be opened at one site. This is the main drawback of the proposed approach.
Another limitation of our approach is the absence of investment costs connected with the relocation of the stations. A limited budget is always to be taken into account in real life. Together with the resistance of professionals and public to changes, it may lead to a limited number of relocated stations. Nevertheless, our models are able to cope with such a restriction via p and r parameters.
Furthermore, we would like to emphasize the necessity of demands being modelled carefully. The changes we propose are based on the current demand distribution. Even though the impact of the age structure has been considered, we do encourage a serious analysis of the demography and morbidity trends in the given region to be conducted.
Finally, we recommend the usage of computer simulation as a validation tool. The simulation model itself is not able to suggest the best station locations, however, it is useful in evaluating various scenarios that include not only the number and distribution of the EMS stations but also such factors as the types of ambulances, destination hospitals, or dispatching policies. To get a credible output, the model must capture all processes on the emergency care pathway including reliable distributions of processing times. In the future, we will elaborate demographic prognoses for particular regions of the country and incorporate them into the model. Then the simulation will allow us to predict the future performance of the EMS system, and to identify the resources necessary for ensuring a satisfactory quality of emergency care.
In this paper we present the utilization of different operations research techniques to support the decision making process regarding placing the EMS stations over a large urban–rural area. A bi-criteria mathematical programming model is proposed. The criteria include the coverage of high-priority patients and response time in relation to all patients. The model is compared to the p-median model with a single response time objective and to a hierarchical pq-median model that involves two types of emergency units. The following conclusions can be derived from our empirical study:
All mathematical models improve EMS performance over the current status by relocating some stations.
The minimum average response time objective produces better results than the maximum coverage objective.
ALS: Advanced life support
BLS: Basic life support
EMS: Emergency medical services
FHQ: First hour quintet
MEXCLP: Maximum expected coverage location problem
MILP: Mixed integer linear problem
MLCP: Maximal covering location problem
PATI: Patient access time interval
pMP: p-Median problem
Emergency medical services systems in the European Union. World Health Organization, 2008. https://www.euro.who.int/_data/assets/pdf_file/0016/114406/E92038.pdf. Accessed 28 May 2021.
Kobusingye OC, Hyder AA, Bishai D, Joshipura M, Hicks ER, Mock C, et al. Emergency medical services. In: Jamison DT, Breman JG, Measham AR, Alleyne G, Claeson M, Evans DB, et al., editors. Disease control priorities in developing countries. Washington (DC): The International Bank for Reconstruction and Development / The World Bank; 2006. p. 1261–79.
Lowthian JA, Cameron PA, Stoelwinder JU, Curtis A, Currell A, Cooke MW, et al. Increasing utilisation of emergency ambulances. Australian Health Rev. 2011;35:63–9. https://doi.org/10.1071/AH09866.
Kitić Jaklič T, Kovač J. The impact of demographic changes on the organization of emergency medical services: the case of Slovenia. Organizacija. 2015;48(4):247–59. https://doi.org/10.1515/orga-2015-0021.
Veser A, Sieber F, Groß S, Prückner S. The demographic impact on the demand for emergency medical services in the urban and rural regions of Bavaria, 2012–2032. J Public Health. 2015;23:181–8. https://doi.org/10.1007/s10389-015-0675-6.
Aringhieri R, Bruni ME, Khodaparasti S, van Essen JT. Emergency medical services and beyond: Addressing new challenges through a wide literature review. Comput Oper Res. 2017;78:349–68. https://doi.org/10.1016/j.cor.2016.09.016.
Ahmadi-Javid A, Seyedi P, Syam SS. A survey of healthcare facility location. Comput Oper Res. 2017;79:223–63. https://doi.org/10.1016/j.cor.2016.05.018.
McLay LA. A maximum expected covering location model with two types of servers. IIE Trans. 2009;41:730–41. https://doi.org/10.1080/07408170802702138.
Knight VA, Harper PR, Smith L. Ambulance allocation for maximal survival with heterogeneous outcome measures. Omega. 2012;40(6):918–26. https://doi.org/10.1016/j.omega.2012.02.003.
Leknes H, Aartun ES, Andersson H, Christiansen M, Granberg TA. Strategic ambulance location for heterogeneous regions. Eur J Oper Res. 2017;260(1):122–33. https://doi.org/10.1016/j.ejor.2016.12.020.
Alsalloum OI, Rand GK. Extensions to emergency vehicle location models. Comput Oper Res. 2006;33(9):2725–43. https://doi.org/10.1016/j.cor.2005.02.025.
Aboueljinane L, Sahin E, Jemai Z. A review on simulation models applied to emergency medical service operations. Comput Ind Eng. 2013;66:734–50. https://doi.org/10.1016/j.cie.2013.09.017.
Aringhieri R, Carello G, Morale D. Supporting decision making to improve the performance of an Italian Emergency Medical Service. Ann Oper Res. 2016;236:131–48. https://doi.org/10.1007/s10479-013-1487-0.
Henderson SG, Mason AJ. Ambulance service planning: simulation and data visualisation. In: Brandeau ML, Sainfort F, Pierskalla WP, editors. Operations research and health care. Boston: Kluwer Academic Publishers; 2005. p. 77–102.
Zaffar MA, Rajagopalan HK, Saydam C, Mayorga M, Sharer E. Coverage, survivability or response time: A comparative study of performance statistics used in ambulance location models via simulation-optimization. Oper Res Health Care. 2016;11:1–12. https://doi.org/10.1016/j.orhc.2016.08.001.
Ünlüyurt T, Tunçer Y. Estimating the performance of emergency medical service location models via discrete event simulation. Comp Ind Eng. 2016;102:467–75. https://doi.org/10.1016/j.cie.2016.03.029.
Lanzarone E, Galluccio E, Bélanger V, Nicoletta V, Ruiz A. A recursive optimization-simulation approach for the ambulance location and dispatching problem. Proceedings of the 2018 Winter Simulation Conference; p. 2530–2541. https://doi.org/10.1109/WSC.2018.8632522
Annual report of the National Dispatch Center for EMS for the year 2019 (Výročná správa Operačného strediska záchrannej zdravotnej služby Slovenskej republiky za rok 2019) [in Slovak]. https://www.155.sk/subory/dokumenty/vyrocne_spravy/Vyrocna_sprava_OSZZSSR_2019.pdf. Accessed 20 July 2020.
Erkut E, Ingolfsson A, Erdogan G. Ambulance location for maximum survival. Nav Res Logist. 2008;55(1):42–58. https://doi.org/10.1002/nav.20267.
McLay LA, Mayorga ME. Evaluating emergency medical service performance measures. Health Care Manag Sci. 2010;13:124–36. https://doi.org/10.1007/s10729-009-9115-x.
Jánošíková Ľ, Kvet M, Jankovič P, Gábrišová L. An optimization and simulation approach to emergency stations relocation. Central Eur J Oper Res. 2019;27(3):737–58. https://doi.org/10.1007/s10100-019-00612-5.
Schmid V, Doerner KF. Ambulance location and relocation problems with time-dependent travel times. Eur J Oper Res. 2010;207:1293–303. https://doi.org/10.1016/j.ejor.2010.06.033.
Sasaki S, Comber AJ, Suzuki H, Brunsdon C. Using genetic algorithms to optimise current and future health planning – the example of ambulance location. Int J Health Geogr. 2010;9:4. https://doi.org/10.1186/1476-072X-9-4.
Hodgson MJ. Data surrogation error in p-median models. Ann Oper Res. 2002;110:153–65. https://doi.org/10.1023/A:1020771702141.
Annual report of the National Dispatch Center for EMS for the year 2015 (Výročná správa Operačného strediska záchrannej zdravotnej služby Slovenskej republiky za rok 2015) [in Slovak]. https://www.155.sk/subory/dokumenty/vyrocne_spravy/Vyrocna_sprava_OSZZSSR_2015.pdf. Accessed 30 Jan 2017.
OpenStreetMap database. https://www.openstreetmap.org. Accessed 16 Apr 2019.
Krafft T, García-Castrillo Riesgo L, Fischer M, Lippert F, Overton J, Robertson-Steel I. Health Monitoring & Benchmarking of European EMS Systems: Components, Indicators, Recommendations. Project Report to Grant Agreement NO. SPC.2002299 under the European Community Health Monitoring Programme 1997–2002. Köln: European Emergency Data (EED) Project; 2006.
Guastaroba G, Savelsbergh M, Speranza MG. Adaptive Kernel Search: a heuristic for solving Mixed Integer linear Programs. Eur J Oper Res. 2017;263(3):789–804. https://doi.org/10.1016/j.ejor.2017.06.005.
Jánošíková Ľ, Jankovič P. Emergency medical system design using kernel search. In Proceedings of the 2018 IEEE Workshop on Complexity in Engineering (COMPENG). Firenze, Italy, October 10–12, 2018. https://doi.org/10.1109/CompEng.2018.8536240.
Şahin G, Süral H. A review of hierarchical facility location models. Comput Oper Res. 2007;34:2310–31. https://doi.org/10.1016/j.cor.2005.09.005.
Serra D, ReVelle C. The pq-median problem: Location and districting of hierarchical facilities. Location Sci. 1993;1(4):299–312.
Janáček J, Kvet M. Sequential approximate approach to the p-median problem. Comp Ind Eng. 2016;94:83–92. https://doi.org/10.1016/j.cie.2016.02.004.
LandScan database. https://landscan.ornl.gov. Accessed 15 Oct 2017.
Olave-Rojas D, Nickel S. Modeling a pre-hospital emergency medical service using hybrid simulation and a machine learning approach. Simul Model Pract Theory. 2021;109: 102302. https://doi.org/10.1016/j.simpat.2021.102302.
Toro-Díaz H, Mayorga ME, McLay LA, Rajagopalan HK, Saydam C. Reducing disparities in large-scale emergency medical service systems. J Oper Res Soc. 2015;66:1169–81. https://doi.org/10.1057/jors.2014.83.
Díaz-Ramírez J, Granda E, Villarreal B, Frutos G. A Comparison of ambulance location models in two Mexican cases. In Proceedings of the International Conference on Industrial Engineering and Operations Management. Paris, France, July 26–27, 2018.
Felder S, Brinkmann H. Spatial allocation of emergency medical services: minimising the death rate or providing equal access? Reg Sci Urban Econ. 2002;32:27–45. https://doi.org/10.1016/S0166-0462(01)00074-6.
Sinden S, Heidet M, Scheuermeyer F, Kawano T, Helmer JS, Christenson J, et al. The association of scene-access delay and survival with favourable neurological status in patients with out-of-hospital cardiac arrest. Resuscitation. 2020;155:211–8. https://doi.org/10.1016/j.resuscitation.2020.05.047.
Roudsari BS, Nathens AB, Arreola-Risa C, Cameron P, Civil I, Grigoriou G, et al. Emergency Medical Service (EMS) systems in developed and developing countries. Injury Int J Care Injured. 2007;38:1001–13. https://doi.org/10.1016/j.injury.2007.04.008.
Gondocs Z, Olah A, Marton-Simora J, Nagy G, Schaefer J, Betlehem J. Prehospital emergency care in Hungary: What can we learn from the past? J Emerg Med. 2010;39(4):512–8. https://doi.org/10.1016/j.jemermed.2009.09.029.
Sagan A, Kowalska-Bobko I, Mokrzycka A. The 2015 emergency care reform in Poland: some improvements, some unmet demands and some looming conflicts. Health Policy. 2016;120:1220–5. https://doi.org/10.1016/j.healthpol.2016.09.009.
This research was supported by the Slovak Research and Development Agency under the project APVV-19-0441 "Allocation of limited resources to public service systems with conflicting quality criteria" and by the Scientific Grant Agency of the Ministry of Education of the Slovak Republic and the Slovak Academy of Sciences under the project VEGA 1/0216/21 "Designing of emergency systems with conflicting criteria using tools of artificial intelligence".
Faculty of Management Science and Informatics, University of Žilina, Univerzitná 1, 010 26, Žilina, Slovak Republic
Ľudmila Jánošíková, Peter Jankovič, Marek Kvet & Frederika Zajacová
LJ: Conception, design of the study, methodology, data analysis, drafting the article. PJ: Data analysis, simulation model, drafting the article. MK: Data analysis, mathematical model, revising the article. FZ: Data analysis, mathematical model, revising the article. All authors read and approved the final manuscript.
Correspondence to Ľudmila Jánošíková.
Jánošíková, Ľ., Jankovič, P., Kvet, M. et al. Coverage versus response time objectives in ambulance location. Int J Health Geogr 20, 32 (2021). https://doi.org/10.1186/s12942-021-00285-x
\begin{document}
\title{Planar order on vertex poset}
\author[a,b]{Xuexing Lu}
\affil[a]{\small School of Mathematical Sciences, University of Science and Technology of China, Hefei, China} \affil[b]{Wu Wen-Tsun Key Laboratory of Mathematics, Chinese Academy of Sciences, Hefei, China}
\renewcommand\Authands{ and } \maketitle
\begin{abstract} A planar order is a special linear extension of the edge poset (partially ordered set) of a processive plane graph. The definition of a planar order makes sense for any finite poset and is equivalent to that of a conjugate order. Here we prove that the vertex poset of a processive plane graph carries a planar order naturally induced from the planar order of its edge poset. \end{abstract}
\text{\textit{Keywords}: edge poset, vertex poset, planar order}\\
\section{Introduction} The notion of a processive plane graph, a special case of Joyal and Street's progressive plane graph \cite{[JS91]}, was introduced in \cite{[HLY16]} as a graphical tool for tensor calculus in semi-groupal categories. In \cite{[HLY16]}, we gave a purely combinatorial characterization of equivalence classes of processive plane graphs in terms of the notion of a \textbf{POP-graph}, which is a \textbf{processive graph} (a special kind of acyclic directed graph) equipped with a \textbf{planar order} (a special linear order on the edges).
However, it turns out that the notion of a planar order can be defined for any finite poset (partially ordered set) and is essentially equivalent to that of a conjugate order \cite{[FM96]}, which is an important notion in the study of planar posets. This raises an interesting question: for a processive graph, are there relations between planar orders on its edges and planar orders on its vertices? In this paper, we give a positive answer to this question by showing that any planar order on the edges of a processive graph naturally induces a planar order on its vertices. \section{Processive plane graph} \begin{defn} A \textbf{processive plane graph} is an acyclic directed graph drawn in a plane box with the properties that: $(1)$ all edges monotonically decrease in the vertical direction; $(2)$ all sources and sinks are of degree one; and $(3)$ all sources and sinks are placed on the horizontal boundaries of the plane box. \end{defn} Figure $1$ shows an example.
\begin{center} \begin{tikzpicture}[scale=0.35] \node (v2) at (-4,3) {}; \draw[fill] (-1.5,5.5) circle [radius=0.15]; \node (v1) at (-1.5,5.5) {}; \node (v7) at (-1.5,1) {}; \node (v9) at (1.5,5.5) {}; \node (v14) at (2,1.5) {}; \node (v3) at (-3,7.5) {}; \node (v4) at (-2,7.5) {}; \node (v5) at (-0.5,7.5) {}; \node (v6) at (-4.8,7.5) {}; \node (v11) at (-4.5,-1) {}; \node (v12) at (-2,-1) {}; \node (v13) at (0,-1) {}; \node (v15) at (2,-1) {}; \node (v8) at (1,7.5) {}; \node (v10) at (2.5,7.5) {}; \node at (-2.5,3.5) {}; \node at (-3,5.2) {}; \node at (-1.2,3.3) {}; \node at (0.5,3.25) {}; \node at (2.2,3.7) {}; \node at (-3,1.7) {}; \draw[fill] (-4,3) circle [radius=0.15]; \draw[fill] (v1) circle [radius=0.15]; \draw[fill] (v7) circle [radius=0.15]; \draw[fill] (v9) circle [radius=0.15]; \draw[fill] (v14) circle [radius=0.15]; \draw[fill] (v1) circle [radius=0.15]; \draw[fill] (v2) circle [radius=0.15]; \draw[fill] (v3) circle [radius=0.15]; \draw[fill] (v4) circle [radius=0.15]; \draw[fill] (v5) circle [radius=0.15]; \draw[fill] (v6) circle [radius=0.15]; \draw[fill] (v8) circle [radius=0.15]; \draw[fill] (v10) circle [radius=0.15]; \draw[fill] (v11) circle [radius=0.15]; \draw[fill] (v12) circle [radius=0.15]; \draw[fill] (v13) circle [radius=0.15]; \draw[fill] (v15) circle [radius=0.15];
\draw plot[smooth, tension=1] coordinates {(v1) (-2.5,5) (-3.5,4) (v2)}[postaction={decorate, decoration={markings,mark=at position .5 with {\arrow[black]{stealth}}}}]; \draw plot[smooth, tension=1] coordinates {(v1) (-2,4.5) (-3,3.5) (v2)}[postaction={decorate, decoration={markings,mark=at position .5 with {\arrow[black]{stealth}}}}];
\draw (-3,7.5) -- (-1.5,5.5)[postaction={decorate, decoration={markings,mark=at position .5 with {\arrow[black]{stealth}}}}]; \draw (-2,7.5) -- (-1.5,5.5)[postaction={decorate, decoration={markings,mark=at position .5 with {\arrow[black]{stealth}}}}]; \draw (-0.5,7.5) -- (-1.5,5.5)[postaction={decorate, decoration={markings,mark=at position .5 with {\arrow[black]{stealth}}}}];
\draw (-4.8,7.5)-- (-4,3)[postaction={decorate, decoration={markings,mark=at position .5 with {\arrow[black]{stealth}}}}]; \draw (-1.5,5.5) -- (-1.5,1)[postaction={decorate, decoration={markings,mark=at position .5 with {\arrow[black]{stealth}}}}]; \draw (-4,3) -- (-1.5,1)[postaction={decorate, decoration={markings,mark=at position .5 with {\arrow[black]{stealth}}}}];
\draw (1,7.5)--(1.5,5.5)[postaction={decorate, decoration={markings,mark=at position .5 with {\arrow[black]{stealth}}}}]; \draw (2.5,7.5) -- (1.5,5.5)[postaction={decorate, decoration={markings,mark=at position .5 with {\arrow[black]{stealth}}}}]; \draw (1.5,5.5) -- (-1.5,1)[postaction={decorate, decoration={markings,mark=at position .5 with {\arrow[black]{stealth}}}}]; \draw (-4,3) -- (-4.5,-1)[postaction={decorate, decoration={markings,mark=at position .5 with {\arrow[black]{stealth}}}}]; \draw (-1.5,1) -- (-2,-1)[postaction={decorate, decoration={markings,mark=at position .65 with {\arrow[black]{stealth}}}}]; \draw (0,-1) -- (-1.5,1)[postaction={decorate, decoration={markings,mark=at position .5 with {\arrowreversed[black]{stealth}}}}]; \draw (1.5,5.5) -- (2,1.5)[postaction={decorate, decoration={markings,mark=at position .5 with {\arrow[black]{stealth}}}}]; \draw (2,1.5) -- (2,-1)[postaction={decorate, decoration={markings,mark=at position .5 with {\arrow[black]{stealth}}}}];
\node (v17) at (6.5,7.5) {}; \node (v16) at (6.5,-1) {}; \draw[fill] (v16) circle [radius=0.15]; \draw[fill] (v17) circle [radius=0.15]; \draw (6.5,-1) -- (6.5,7.5)[postaction={decorate, decoration={markings,mark=at position .5 with {\arrowreversed[black]{stealth}}}}]; \node (v18) at (4.5,7.5) {}; \node (v19) at (4.5,-1) {}; \node (v20) at (4.5,3.25) {}; \draw[fill] (v18) circle [radius=0.15]; \draw[fill] (v19) circle [radius=0.15];
\draw[fill] (v20) circle [radius=0.15]; \draw (4.5,-1) -- (4.5,3.25)[postaction={decorate, decoration={markings,mark=at position .5 with {\arrowreversed[black]{stealth}}}}]; \draw (4.5,3.24) -- (4.5,7.5)[postaction={decorate, decoration={markings,mark=at position .5 with {\arrowreversed[black]{stealth}}}}];
\draw [loosely dashed] (-6.5,7.5)--(-6.5,-1); \draw [loosely dashed] (8,7.5)--(8,-1); \draw [loosely dashed] (-6.5,7.5)--(8,7.5); \draw [loosely dashed] (-6.5,-1)--(8,-1);
\end{tikzpicture}
Figure $1$. A processive plane graph \end{center}
Processive plane graphs can also be defined in terms of \textbf{processive graphs} \cite{[HLY16]} and their \textbf{boxed planar drawings}. \begin{defn} A processive graph is an acyclic directed graph all of whose sinks and sources have degree one. \end{defn} A planar drawing of a processive graph $G$ is called \textbf{boxed} \cite{[JS91]} if $G$ is drawn in a plane box with all sinks of $G$ on one horizontal boundary of the box and all sources of $G$ on the other. A planar drawing of an acyclic directed graph is called \textbf{upward} if every edge increases monotonically in the vertical direction (or some other fixed direction). Thus a processive plane graph is exactly a boxed and upward planar drawing of a processive graph. \begin{defn} Two processive plane graphs are \textbf{equivalent} if they are connected by a planar isotopy such that each intermediate planar drawing is boxed (not necessarily upward). \end{defn} In \cite{[HLY16]}, equivalence classes of processive plane graphs are mainly used to construct a free strict tensor category on a semi-tensor scheme. \section{Planar order and POP-graph}
In \cite{[HLY16]}, we gave a combinatorial characterization of the equivalence class of a processive plane graph in terms of a planar order on its underlying processive graph. In this paper, we define planar orders for arbitrary posets.
\begin{defn}
A \textbf{planar order} on a poset $(X,\rightarrow)$ is a linear order $\prec$ on $X$, such that
$(P_1)$ for any $x_1,x_2\in X$, $x_1\rightarrow x_2$ implies $x_1\prec x_2$;
$(P_2)$ for any $x_1,x_2,x_3\in X$, $x_1\prec x_2\prec x_3$ and $x_1\rightarrow x_3$ imply that either $x_1\rightarrow x_2$ or $x_2\rightarrow x_3$.
\end{defn} $(P_1)$ says that $\prec$ is a linear extension of $\rightarrow$.
Recall that two partial orders on a set are \textbf{conjugate} if every pair of elements is comparable by exactly one of them. It is easy to see that $(P_2)$ is equivalent to the condition that if $e_1\prec e_2 \prec e_3$, then $e_1\not\rightarrow e_2$ and $e_2\not\rightarrow e_3$ imply that $e_1\not\rightarrow e_3$. Thus $(P_2)$ enables us to define a transitive binary relation: $e_1<e_2$ if and only if $e_1\prec e_2$ and $e_1\not\rightarrow e_2$; moreover, if $(P_1)$ is satisfied, then the linearity of $\prec$ implies that $<$ is a conjugate order of $\rightarrow$. So a planar order $\prec$ is a reformulation of a conjugate order of $\rightarrow$.
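To make the two axioms concrete, the following is a minimal Python sketch (the names \texttt{is\_planar\_order}, \texttt{order}, \texttt{arrow} are ours, not from the paper) that checks $(P_1)$ and $(P_2)$ for a linear order on a finite poset given by its strict relation:

```python
# Check whether a linear order is a planar order on a poset.
# The poset is given by its strict relation "arrow", a set of pairs
# (x1, x2) meaning x1 -> x2; "order" lists the elements of X in
# increasing order under the candidate linear order.

def is_planar_order(order, arrow):
    pos = {x: i for i, x in enumerate(order)}
    # (P1): x1 -> x2 implies x1 precedes x2 in the linear order.
    if any(pos[x1] >= pos[x2] for (x1, x2) in arrow):
        return False
    # (P2): x1 < x2 < x3 and x1 -> x3 imply x1 -> x2 or x2 -> x3.
    n = len(order)
    for i in range(n):
        for j in range(i + 1, n):
            for k in range(j + 1, n):
                x1, x2, x3 = order[i], order[j], order[k]
                if ((x1, x3) in arrow and (x1, x2) not in arrow
                        and (x2, x3) not in arrow):
                    return False
    return True

# A 3-element chain a -> b -> c: the order (a, b, c) is planar.
chain = {("a", "b"), ("b", "c"), ("a", "c")}
print(is_planar_order(("a", "b", "c"), chain))  # True
```

The second axiom is what fails when an element unrelated to both endpoints of a relation is placed between them.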
In a directed graph, we write $e_1\rightarrow e_2$ if there is a directed path whose first edge is $e_1$ and whose last edge is $e_2$. Similarly, $v_1\rightarrow v_2$ denotes that there is a directed path starting from vertex $v_1$ and ending with vertex $v_2$. For any acyclic directed graph, its edge set and its vertex set are posets under the relations $e_1\rightarrow e_2$ and $v_1\rightarrow v_2$, respectively. We call them the \textbf{edge poset} and the \textbf{vertex poset} of the acyclic directed graph.
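For small graphs the edge relation can be computed by brute force; the following Python sketch (the function name \texttt{edge\_poset} is ours) represents edges as source--target pairs and returns the set of pairs $(e_1,e_2)$ with $e_1\rightarrow e_2$:

```python
# Build the edge poset of an acyclic directed graph: e1 -> e2 iff some
# directed path starts with edge e1 and ends with edge e2.
from collections import defaultdict

def edge_poset(edges):
    succ = defaultdict(set)
    for (u, v) in edges:
        succ[u].add(v)

    def reach(u, seen=None):
        # vertices reachable from u by a nonempty directed path
        seen = set() if seen is None else seen
        for w in succ[u]:
            if w not in seen:
                seen.add(w)
                reach(w, seen)
        return seen

    rel = set()
    for e1 in edges:
        for e2 in edges:
            if e1 != e2 and (e1[1] == e2[0] or e2[0] in reach(e1[1])):
                rel.add((e1, e2))
    return rel

edges = [("a", "b"), ("b", "c"), ("a", "c")]
rel = edge_poset(edges)
print((("a", "b"), ("b", "c")) in rel)  # True: the path a -> b -> c
```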
The following is a key notion in \cite{[HLY16]}.
\begin{defn} A \textbf{planarly ordered processive graph}, or \textbf{POP-graph}, is a processive graph $G$ together with a planar order $\prec$ on its edge poset $(E(G),\rightarrow)$. \end{defn} We denote a POP-graph simply as $(G,\prec)$; see Fig $2$ for an example. \begin{center}
\begin{tikzpicture}[scale=0.4] \node (v2) at (-4,3) {}; \draw[fill] (-1.5,5.5) circle [radius=0.11]; \node (v1) at (-1.5,5.5) {}; \node (v7) at (-1.5,1) {}; \node (v9) at (1.5,5.5) {}; \node (v14) at (2,1.5) {}; \node (v3) at (-3,7.5) {}; \node (v4) at (-2,7.5) {}; \node (v5) at (-0.5,7.5) {}; \node (v6) at (-4.8,7.5) {}; \node (v11) at (-4.5,-1) {}; \node (v12) at (-2,-1) {}; \node (v13) at (0,-1) {}; \node (v15) at (2,-1) {}; \node (v8) at (1,7.5) {}; \node (v10) at (2.5,7.5) {};
\node [scale=0.8] at (-2.5,3.5) {$6$}; \node[scale=0.8] at (-3,5.2) {$5$}; \node[scale=0.8] at (-1.2,3.3) {$9$}; \node[scale=0.8] at (0.5,3.25) {$12$}; \node[scale=0.8] at (2.2,3.7) {$15$}; \node[scale=0.8] at (-3,1.7) {$8$};
\node[scale=0.8] [above] at (-3,7.5) {}; \node[scale=0.8] [above] at (-2,7.5) {}; \node[scale=0.8] [above] at (-0.5,7.5) {}; \node [scale=0.8][above] at (-4.8,7.5) {}; \node [scale=0.8][below] at (-4.5,-1) {}; \node [scale=0.8][below]at (-2,-1) {}; \node [scale=0.8][below] at (0,-1) {}; \node [scale=0.8][below] at (2,-1) {}; \node [scale=0.8][above] at (1,7.5) {}; \node [scale=0.8][above] at (2.5,7.5) {};
\node at (-2.5,3.5) {}; \node at (-3,5.2) {}; \node at (-1.2,3.3) {}; \node at (0.5,3.25) {}; \node at (2.2,3.7) {}; \node at (-3,1.7) {}; \draw[fill] (-4,3) circle [radius=0.11]; \draw[fill] (v1) circle [radius=0.11]; \draw[fill] (v7) circle [radius=0.11]; \draw[fill] (v9) circle [radius=0.11]; \draw[fill] (v14) circle [radius=0.11]; \draw[fill] (v1) circle [radius=0.11]; \draw[fill] (v2) circle [radius=0.11]; \draw[fill] (v3) circle [radius=0.11]; \draw[fill] (v4) circle [radius=0.11]; \draw[fill] (v5) circle [radius=0.11]; \draw[fill] (v6) circle [radius=0.11]; \draw[fill] (v8) circle [radius=0.11]; \draw[fill] (v10) circle [radius=0.11]; \draw[fill] (v11) circle [radius=0.11]; \draw[fill] (v12) circle [radius=0.11]; \draw[fill] (v13) circle [radius=0.11]; \draw[fill] (v15) circle [radius=0.11];
\draw plot[smooth, tension=1] coordinates {(v1) (-2.5,5) (-3.5,4) (v2)}[postaction={decorate, decoration={markings,mark=at position .5 with {\arrow[black]{stealth}}}}]; \draw plot[smooth, tension=1] coordinates {(v1) (-2,4.5) (-3,3.5) (v2)}[postaction={decorate, decoration={markings,mark=at position .5 with {\arrow[black]{stealth}}}}];
\draw (-3,7.5) -- (-1.5,5.5)[postaction={decorate, decoration={markings,mark=at position .5 with {\arrow[black]{stealth}}}}]; \draw (-2,7.5) -- (-1.5,5.5)[postaction={decorate, decoration={markings,mark=at position .5 with {\arrow[black]{stealth}}}}]; \draw (-0.5,7.5) -- (-1.5,5.5)[postaction={decorate, decoration={markings,mark=at position .5 with {\arrow[black]{stealth}}}}];
\draw (-4.8,7.5)-- (-4,3)[postaction={decorate, decoration={markings,mark=at position .5 with {\arrow[black]{stealth}}}}]; \draw (-1.5,5.5) -- (-1.5,1)[postaction={decorate, decoration={markings,mark=at position .5 with {\arrow[black]{stealth}}}}]; \draw (-4,3) -- (-1.5,1)[postaction={decorate, decoration={markings,mark=at position .5 with {\arrow[black]{stealth}}}}];
\draw (1,7.5)--(1.5,5.5)[postaction={decorate, decoration={markings,mark=at position .5 with {\arrow[black]{stealth}}}}]; \draw (2.5,7.5) -- (1.5,5.5)[postaction={decorate, decoration={markings,mark=at position .5 with {\arrow[black]{stealth}}}}]; \draw (1.5,5.5) -- (-1.5,1)[postaction={decorate, decoration={markings,mark=at position .5 with {\arrow[black]{stealth}}}}]; \draw (-4,3) -- (-4.5,-1)[postaction={decorate, decoration={markings,mark=at position .5 with {\arrow[black]{stealth}}}}]; \draw (-1.5,1) -- (-2,-1)[postaction={decorate, decoration={markings,mark=at position .65 with {\arrow[black]{stealth}}}}]; \draw (0,-1) -- (-1.5,1)[postaction={decorate, decoration={markings,mark=at position .5 with {\arrowreversed[black]{stealth}}}}]; \draw (1.5,5.5) -- (2,1.5)[postaction={decorate, decoration={markings,mark=at position .5 with {\arrow[black]{stealth}}}}]; \draw (2,1.5) -- (2,-1)[postaction={decorate, decoration={markings,mark=at position .5 with {\arrow[black]{stealth}}}}];
\node (v17) at (6.5,7.5) {}; \node [right][scale=0.8] at (6.5,3.5) {$19$}; \node (v16) at (6.5,-1) {}; \draw[fill] (v16) circle [radius=0.11]; \draw[fill] (v17) circle [radius=0.11]; \draw (6.5,-1) -- (6.5,7.5)[postaction={decorate, decoration={markings,mark=at position .5 with {\arrowreversed[black]{stealth}}}}]; \node (v18) at (4.5,7.5) {}; \node [above][scale=0.8] at (4.5,7.5) {}; \node (v19) at (4.5,-1) {}; \node [below][scale=0.8] at (4.5,-1) {}; \node (v20) at (4.5,3.25) {}; \draw[fill] (v18) circle [radius=0.11]; \draw[fill] (v19) circle [radius=0.11];
\draw[fill] (v20) circle [radius=0.11]; \draw (4.5,-1) -- (4.5,3.25)[postaction={decorate, decoration={markings,mark=at position .5 with {\arrowreversed[black]{stealth}}}}]; \draw (4.5,3.24) -- (4.5,7.5)[postaction={decorate, decoration={markings,mark=at position .5 with {\arrowreversed[black]{stealth}}}}];
\node [scale=0.8]at (-5,6.5) {$1$}; \node [scale=0.8]at (-3,6.5) {$2$}; \node [scale=0.8]at (-1.5,7) {$3$}; \node [scale=0.8]at (-0.5,6.5) {$4$}; \node [scale=0.8]at (0.6,6.5) {$10$}; \node [scale=0.8]at (2.6,6.5) {$11$}; \node [scale=0.8]at (5,6) {$17$}; \node [scale=0.8]at (-4.7,0.5) {$7$}; \node [scale=0.8]at (-2.5,0) {$13$}; \node [scale=0.8]at (0,0) {$14$}; \node [scale=0.8]at (2.5,0) {$16$}; \node [scale=0.8]at (5,0.5) {$18$}; \end{tikzpicture}
Figure $2$. A POP-graph \end{center}
A basic result is the following. \begin{thm}[\cite{[HLY16]}]\label{M} There is a bijection between POP-graphs and equivalence classes of processive plane graphs. \end{thm} The POP-graph in Fig $2$ corresponds to the processive plane graph in Fig $1$.
\section{Planar order on vertices} In this section, we will prove our main result. Before that we need some preliminaries.
From now on, we fix a POP-graph $(G,\prec)$. For a vertex $v$ of $(G,\prec)$, the set $I(v)$ of incoming edges and the set $O(v)$ of outgoing edges are linearly ordered by $\prec$. We introduce some notation for the cases where $I(v)$ or $O(v)$ is nonempty: $$i^-(v)=\min I(v),$$ $$i^+(v)=\max I(v),$$ $$o^-(v)=\min O(v),$$ $$o^+(v)=\max O(v).$$
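For a concrete edge order these four extremal edges can be read off directly; here is a small Python sketch (the helper name \texttt{extremal\_edges} is ours) with edges given as source--target pairs listed in increasing $\prec$-order:

```python
# Extremal incident edges of a vertex under the linear order on edges.
# Returns (i^-(v), i^+(v), o^-(v), o^+(v)), with None for empty sets.

def extremal_edges(v, order):
    I = [e for e in order if e[1] == v]   # incoming edges, in order
    O = [e for e in order if e[0] == v]   # outgoing edges, in order
    return (I[0] if I else None, I[-1] if I else None,
            O[0] if O else None, O[-1] if O else None)
```

On the example below, $o^-(v)$ is the immediate successor of $i^+(v)$ in the edge order, as the next lemma asserts in general.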
The following lemma was first proved in \cite{[LY16]}. \begin{lem}\label{P_2 to U_2} Let $v$ be a vertex of $(G,\prec)$. If the degree of $v$ is not one, then $o^-(v)=i^+(v)+1$ under the linear order $\prec$; that is, $o^-(v)$ is the immediate successor of $i^+(v)$. \end{lem} \begin{proof} Since $G$ is a processive graph, $\deg(v)\neq 1$ implies that $I(v)\neq\emptyset$ and $O(v)\neq\emptyset$, so both $i^+(v)$ and $o^-(v)$ exist. We prove $o^-(v)=i^+(v)+1$ by contradiction. Suppose there exists an edge $e$ such that $i^+(v)\prec e\prec o^-(v)$. Since $i^+(v)\rightarrow o^-(v)$, by $(P_2)$ we have $i^+(v)\rightarrow e$ or $e\rightarrow o^-(v)$. If $i^+(v)\rightarrow e$, then there must exist an edge $e'\in O(v)-\{o^-(v)\}$ such that $e'\rightarrow e$ or $e'=e$. Thus $o^-(v)\prec e'\preceq e$, which contradicts $e\prec o^-(v)$. Otherwise, $e\rightarrow o^-(v)$, and then there must exist an edge $e''\in I(v)-\{i^+(v)\}$ such that $e\rightarrow e''$ or $e''=e$. Then $e\preceq e''\prec i^+(v)$, which contradicts $i^+(v)\prec e$. \end{proof} Lemma \ref{P_2 to U_2} shows that for any vertex $v$, $\overline{E(v)}=\overline{I(v)}\sqcup \overline{O(v)}$, where $E(v)$ is the set of edges incident with $v$ and $\overline{X}$ denotes the interval spanned by a subset $X$ of a poset. Due to Lemma \ref{P_2 to U_2}, we can define a linear order $\prec_V$ on the vertex set $V(G)$. For any two distinct vertices $v_1, v_2$ of $G$, we have $v_1\prec_V v_2$ if and only if one of the following conditions is satisfied:
$(1)$ $i^+(v_1)\prec i^+(v_2)$, $(2)$ $i^+(v_1)\prec o^-(v_2)$, $(3)$ $o^-(v_1) \preceq i^+(v_2)$, $(4)$ $o^-(v_1)\prec o^-(v_2)$.\\
\noindent We write $v_1\preceq_V v_2$ if $v_1=v_2$ or $v_1\prec_V v_2$. The following theorem is our main result. \begin{thm}\label{main} For any POP-graph $(G,\prec)$, $\preceq_V$ defines a planar order on the vertex poset $(V(G),\rightarrow)$. \end{thm} \begin{proof}$(1)$ $\prec_V$ satisfies $(P_1)$. If $v_1\rightarrow v_2$, then there exist edges $e_i\in E(G)$ $(1\leq i\leq n)$ such that $v_1=s(e_1)$, $v_2=t(e_n)$ and $t(e_i)=s(e_{i+1})$ for $1\leq i\leq n-1$, which implies that $o^-(v_1)\preceq e_1\preceq e_n\preceq i^+(v_2)$. Thus $o^-(v_1)\preceq i^+(v_2)$, and by the definition of $\prec_V$ we have $v_1\prec_V v_2$.
$(2)$ $\prec_V$ satisfies $(P_2)$. Suppose $v_1\prec_V v_2\prec_V v_3$ and $v_1\rightarrow v_3$; then $o^-(v_1)$ and $i^+(v_3)$ exist and $o^-(v_1)\preceq i^+(v_3)$. We have four cases:
\textbf{Case 1:} $v_1$ is a source and $v_3$ is a sink. In this case, since $G$ is processive, Definition $2$ implies that $\{o^-(v_1)\}=O(v_1)$ and $\{i^+(v_3)\}=I(v_3)$. So $v_1\rightarrow v_3$ implies that $o^-(v_1)\rightarrow i^+(v_3)$. Let $e=i^+(v_2)$ or $o^-(v_2)$; then $v_1\prec_V v_2\prec_V v_3$ implies that $o^-(v_1)\prec e\prec i^+(v_3)$, or $o^-(v_1)= e$, or $e= i^+(v_3)$. In the first case, by $(P_2)$, we have $o^-(v_1)\rightarrow e$ or $ e\rightarrow i^+(v_3)$, which implies that $v_1\rightarrow v_2$ or $v_2\rightarrow v_3$. In the second case, we have $v_1\rightarrow v_2$, and in the third case, we have $v_2\rightarrow v_3$.
\textbf{Case 2:} $v_1$ is not a source and $v_3$ is a sink. In this case, $i^+(v_1)$ exists and, by Definition $2$, $\{i^+(v_3)\}=I(v_3)$. So $v_1\rightarrow v_3$ implies that $i^+(v_1)\rightarrow i^+(v_3)$. Let $e=i^+(v_2)$ or $o^-(v_2)$; then $v_1\prec_V v_2\prec_V v_3$ implies that $i^+(v_1)\prec e\prec i^+(v_3)$ or $e= i^+(v_3)$. In the first case, by $(P_2)$, we have $i^+(v_1)\rightarrow e$ or $ e\rightarrow i^+(v_3)$, which implies that $v_1\rightarrow v_2$ or $v_2\rightarrow v_3$. In the second case, we have $v_2\rightarrow v_3$.
\textbf{Case 3:} $v_1$ is a source and $v_3$ is not a sink. This case is similar to Case 2.
\textbf{Case 4:} $v_1$ is not a source and $v_3$ is not a sink. In this case, both $i^+(v_1)$ and $o^-(v_3)$ exist, and $v_1\rightarrow v_3$ implies that $i^+(v_1)\rightarrow o^-(v_3)$. Let $e=i^+(v_2)$ or $o^-(v_2)$; then $v_1\prec_V v_2\prec_V v_3$ implies that $i^+(v_1)\prec e\prec o^-(v_3)$. By $(P_2)$, we have $i^+(v_1)\rightarrow e$ or $ e\rightarrow o^-(v_3)$, which implies that $v_1\rightarrow v_2$ or $v_2\rightarrow v_3$. \end{proof} Figure $3$ shows the planar order on the vertex poset of the POP-graph in Fig $2$. \begin{center}
\begin{tikzpicture}[scale=0.4] \node (v2) at (-4,3) {}; \draw[fill] (-1.5,5.5) circle [radius=0.11]; \node (v1) at (-1.5,5.5) {}; \node (v7) at (-1.5,1) {}; \node (v9) at (1.5,5.5) {}; \node (v14) at (2,1.5) {}; \node (v3) at (-3,7.5) {}; \node (v4) at (-2,7.5) {}; \node (v5) at (-0.5,7.5) {}; \node (v6) at (-4.8,7.5) {}; \node (v11) at (-4.5,-1) {}; \node (v12) at (-2,-1) {}; \node (v13) at (0,-1) {}; \node (v15) at (2,-1) {}; \node (v8) at (1,7.5) {}; \node (v10) at (2.5,7.5) {};
\node [scale=0.8] at (-2.5,3.5) {}; \node[scale=0.8] at (-3,5.2) {}; \node[scale=0.8] at (-1.2,3.3) {}; \node[scale=0.8] at (0.5,3.25) {}; \node[scale=0.8] at (2.2,3.7) {}; \node[scale=0.8] at (-3,1.7) {};
\node[scale=0.8] [above] at (-3,7.5) {$2$}; \node[scale=0.8] [above] at (-2,7.5) {$3$}; \node[scale=0.8] [above] at (-0.5,7.5) {$4$}; \node [scale=0.8][above] at (-4.8,7.5) {$1$}; \node [scale=0.8][below] at (-4.5,-1) {$7$}; \node [scale=0.8][below]at (-2,-1) {$12$}; \node [scale=0.8][below] at (0,-1) {$13$}; \node [scale=0.8][below] at (2,-1) {$15$}; \node [scale=0.8][above] at (1,7.5) {$8$}; \node [scale=0.8][above] at (2.5,7.5) {$9$};
\node at (-2.5,3.5) {}; \node at (-3,5.2) {}; \node at (-1.2,3.3) {}; \node at (0.5,3.25) {}; \node at (2.2,3.7) {}; \node at (-3,1.7) {}; \draw[fill] (-4,3) circle [radius=0.11]; \draw[fill] (v1) circle [radius=0.11]; \draw[fill] (v7) circle [radius=0.11]; \draw[fill] (v9) circle [radius=0.11]; \draw[fill] (v14) circle [radius=0.11]; \draw[fill] (v1) circle [radius=0.11]; \draw[fill] (v2) circle [radius=0.11]; \draw[fill] (v3) circle [radius=0.11]; \draw[fill] (v4) circle [radius=0.11]; \draw[fill] (v5) circle [radius=0.11]; \draw[fill] (v6) circle [radius=0.11]; \draw[fill] (v8) circle [radius=0.11]; \draw[fill] (v10) circle [radius=0.11]; \draw[fill] (v11) circle [radius=0.11]; \draw[fill] (v12) circle [radius=0.11]; \draw[fill] (v13) circle [radius=0.11]; \draw[fill] (v15) circle [radius=0.11];
\draw plot[smooth, tension=1] coordinates {(v1) (-2.5,5) (-3.5,4) (v2)}[postaction={decorate, decoration={markings,mark=at position .5 with {\arrow[black]{stealth}}}}]; \draw plot[smooth, tension=1] coordinates {(v1) (-2,4.5) (-3,3.5) (v2)}[postaction={decorate, decoration={markings,mark=at position .5 with {\arrow[black]{stealth}}}}];
\draw (-3,7.5) -- (-1.5,5.5)[postaction={decorate, decoration={markings,mark=at position .5 with {\arrow[black]{stealth}}}}]; \draw (-2,7.5) -- (-1.5,5.5)[postaction={decorate, decoration={markings,mark=at position .5 with {\arrow[black]{stealth}}}}]; \draw (-0.5,7.5) -- (-1.5,5.5)[postaction={decorate, decoration={markings,mark=at position .5 with {\arrow[black]{stealth}}}}];
\draw (-4.8,7.5)-- (-4,3)[postaction={decorate, decoration={markings,mark=at position .5 with {\arrow[black]{stealth}}}}]; \draw (-1.5,5.5) -- (-1.5,1)[postaction={decorate, decoration={markings,mark=at position .5 with {\arrow[black]{stealth}}}}]; \draw (-4,3) -- (-1.5,1)[postaction={decorate, decoration={markings,mark=at position .5 with {\arrow[black]{stealth}}}}];
\draw (1,7.5)--(1.5,5.5)[postaction={decorate, decoration={markings,mark=at position .5 with {\arrow[black]{stealth}}}}]; \draw (2.5,7.5) -- (1.5,5.5)[postaction={decorate, decoration={markings,mark=at position .5 with {\arrow[black]{stealth}}}}]; \draw (1.5,5.5) -- (-1.5,1)[postaction={decorate, decoration={markings,mark=at position .5 with {\arrow[black]{stealth}}}}]; \draw (-4,3) -- (-4.5,-1)[postaction={decorate, decoration={markings,mark=at position .5 with {\arrow[black]{stealth}}}}]; \draw (-1.5,1) -- (-2,-1)[postaction={decorate, decoration={markings,mark=at position .65 with {\arrow[black]{stealth}}}}]; \draw (0,-1) -- (-1.5,1)[postaction={decorate, decoration={markings,mark=at position .5 with {\arrowreversed[black]{stealth}}}}]; \draw (1.5,5.5) -- (2,1.5)[postaction={decorate, decoration={markings,mark=at position .5 with {\arrow[black]{stealth}}}}]; \draw (2,1.5) -- (2,-1)[postaction={decorate, decoration={markings,mark=at position .5 with {\arrow[black]{stealth}}}}];
\node (v17) at (6.5,7.5) {}; \node [right][scale=0.8] at (6.5,3.5) {}; \node (v16) at (6.5,-1) {}; \draw[fill] (v16) circle [radius=0.11]; \draw[fill] (v17) circle [radius=0.11]; \draw (6.5,-1) -- (6.5,7.5)[postaction={decorate, decoration={markings,mark=at position .5 with {\arrowreversed[black]{stealth}}}}]; \node (v18) at (4.5,7.5) {}; \node [above][scale=0.8] at (4.5,7.5) {$16$}; \node (v19) at (4.5,-1) {}; \node [below][scale=0.8] at (4.5,-1) {$18$}; \node (v20) at (4.5,3.25) {}; \draw[fill] (v18) circle [radius=0.11]; \draw[fill] (v19) circle [radius=0.11];
\draw[fill] (v20) circle [radius=0.11]; \draw (4.5,-1) -- (4.5,3.25)[postaction={decorate, decoration={markings,mark=at position .5 with {\arrowreversed[black]{stealth}}}}]; \draw (4.5,3.24) -- (4.5,7.5)[postaction={decorate, decoration={markings,mark=at position .5 with {\arrowreversed[black]{stealth}}}}];
\node [scale=0.8]at (-2.2,5.6) {$5$}; \node [scale=0.8]at (-4.7,3) {$6$}; \node [scale=0.8]at (-2.2,1) {$11$}; \node [scale=0.8]at (0.8,5.5) {$10$}; \node [scale=0.8]at (1.3,1.5) {$14$}; \node [scale=0.8]at (3.8,3.25) {$17$}; \node [scale=0.8, above]at (6.5,7.5) {$19$}; \node [scale=0.8, below]at (6.5,-1) {$20$}; \end{tikzpicture}
Figure $3$. Induced planar order on vertices. \end{center}
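The four conditions defining $\prec_V$ can be turned into a comparator and used to sort the vertices. The following Python sketch (names are ours) assumes edges are given as source--target pairs listed in increasing $\prec$-order, and works with positions of edges in that list, so multi-edges are handled as well:

```python
# Sort vertices by the induced order: v1 precedes v2 iff one of the
# four conditions on i^+ and o^- holds (None means the set is empty).
from functools import cmp_to_key

def vertex_order(vertices, edge_order):
    def i_plus(v):
        I = [i for i, e in enumerate(edge_order) if e[1] == v]
        return I[-1] if I else None
    def o_minus(v):
        O = [i for i, e in enumerate(edge_order) if e[0] == v]
        return O[0] if O else None
    def cmp(v1, v2):
        ip1, om1 = i_plus(v1), o_minus(v1)
        ip2, om2 = i_plus(v2), o_minus(v2)
        if ((ip1 is not None and ip2 is not None and ip1 < ip2) or
                (ip1 is not None and om2 is not None and ip1 < om2) or
                (om1 is not None and ip2 is not None and om1 <= ip2) or
                (om1 is not None and om2 is not None and om1 < om2)):
            return -1
        return 1
    return sorted(vertices, key=cmp_to_key(cmp))
```

For the chain $a\rightarrow b\rightarrow c$ with its unique planar edge order, the induced vertex order is $a,b,c$.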
Theorem \ref{main} shows that for any processive graph, each conjugate order of its edge poset induces a conjugate order of its vertex poset. In general, the converse is not true. Therefore, together with Theorem \ref{M}, Theorem \ref{main} demonstrates that the edge poset is a more effective tool than the vertex poset in the study of upward planarity. It is worth mentioning that de Fraysseix and Ossona de Mendez, in a different but essentially equivalent context, made a similar observation (in the final remark of \cite{[FM96]}). In subsequent work, we will show that for a transitively reduced processive graph (the directed covering graph of a poset), a planar order on its vertex set naturally induces a planar order on its edge set, which is essentially related to the work in \cite{[FM96]}.
\section*{Acknowledgement} This work is supported by ``the Fundamental Research Funds for the Central Universities''.
\par
\textbf{Xuexing Lu}
\\ Email: [email protected]
\end{document}
\begin{document}
\title{$e$-basis Coefficients of Chromatic Symmetric Functions}
\begin{abstract}
A well-known result of Stanley's shows that given a graph $G$ with chromatic symmetric function expanded into the basis of elementary symmetric functions as $X_G = \sum c_{\lambda}e_{\lambda}$, the sum of the coefficients $c_{\lambda}$ for $\lambda$ with $\lambda_1' = k$ (equivalently those $\lambda$ with exactly $k$ parts) is equal to the number of acyclic orientations of $G$ with exactly $k$ sinks.
However, more is known. The \emph{sink sequence} of an acyclic orientation of $G$ is a tuple $(s_1,\dots,s_k)$ such that $s_1$ is the number of sinks of the orientation, and recursively each $s_i$ with $i > 1$ is the number of sinks remaining after deleting the sinks contributing to $s_1,\dots,s_{i-1}$. Equivalently, the sink sequence gives the number of vertices at each level of the poset induced by the acyclic orientation.
A lesser-known follow-up result of Stanley's determines certain cases in which we can find a sum of $e$-basis coefficients that gives the number of acyclic orientations of $G$ with a given partial sink sequence. Of interest in its own right, this result also admits as a corollary a simple proof of the $e$-positivity of $X_G$ when the stability number of $G$ is $2$.
In this paper, we prove a vertex-weighted generalization of this follow-up result, and conjecture a stronger version that admits a similar combinatorial interpretation for a much larger set of $e$-coefficient sums of chromatic symmetric functions. In particular, the conjectured formula would give a combinatorial interpretation for the sum of the coefficients $c_{\lambda}$ with prescribed values of $\lambda_1'$ and $\lambda_2'$ for any unweighted claw-free graph.
\end{abstract}
\section{Introduction}
Given a graph $G$, its chromatic symmetric function $X_G$ is defined as $$ X_G = \sum_{\kappa} \prod_{v \in V(G)} x_{\kappa(v)} $$ where the sum ranges over all $\kappa: V(G) \rightarrow \mathbb{N}$ that are proper colorings of $G$. Since its introduction in the 1990s by Stanley \cite{stanley}, research has connected the chromatic symmetric function and its generalizations to objects in algebraic and geometric combinatorics, including the geometry of Hessenberg varieties \cite{unit, cho2022, wachs} and LLT polynomials \cite{abreu, per2022, tom}.
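For small graphs, $X_G$ can be computed by brute force in finitely many variables; the following Python sketch (names are ours) records, for each proper coloring, the exponent vector of the monomial it contributes:

```python
# Brute-force the chromatic symmetric function in num_vars variables:
# each proper coloring kappa contributes the monomial prod_v x_{kappa(v)}.
from itertools import product
from collections import Counter

def X_G(vertices, edges, num_vars):
    mono = Counter()
    for kappa in product(range(num_vars), repeat=len(vertices)):
        color = dict(zip(vertices, kappa))
        if all(color[u] != color[v] for (u, v) in edges):
            exps = [0] * num_vars
            for v in vertices:
                exps[color[v]] += 1
            mono[tuple(exps)] += 1
    return mono

# Triangle K3 in 3 variables: every proper coloring is a bijection,
# so the only exponent vector is (1,1,1), with coefficient 3! = 6.
print(X_G(["a", "b", "c"], [("a", "b"), ("b", "c"), ("a", "c")], 3))
```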
Of particular interest is the $e$-basis expansion of the chromatic symmetric functions, in large part due to the Stanley-Stembridge conjecture \cite{stanley}, which after a reduction by Guay-Paquet \cite{guay} claims that the chromatic symmetric function of any unit interval graph is $e$-positive. In general, explicit formulas for specific $e$-basis coefficients of the chromatic symmetric function are either only known for certain special coefficients \cite{kali} or in a form that does not directly give their sign \cite{chownote}. The $e$-positivity of some special subclasses of unit interval graphs is known, such as lollipops and their generalizations \cite{centered, dahlberg2019, dahl}, graphs corresponding to Hessenberg functions with certain properties \cite{abreu, cho2022, colm}, and those with certain forbidden induced subgraphs \cite{foley, hamel2019}.
However, a seminal result of Stanley's original paper shows that certain \emph{sums} of $e$-basis coefficients are positive for any graph. In particular, he demonstrates \begin{theorem}[Theorem 3.3, \cite{stanley}]\label{thm:stanley3.3}
If $X_G = \sum c_{\lambda}e_{\lambda}$, then the number of acyclic orientations of $G$ with exactly $k$ sinks (vertices with no incident outgoing edges) is equal to $$ \sum_{l(\lambda) = k} c_{\lambda}. $$
\end{theorem} Indeed, since Stanley's original proof relied on passing to quasisymmetric generating functions of posets, recent, more combinatorial proofs of the same result have been noteworthy \cite{delcon, hwang}.
However, Stanley provided another theorem even more general than this, though rarely cited in the literature. The \emph{sink sequence} of an acyclic orientation $o$ is the tuple $ss(o) = (s_1, s_2, \dots)$, where $s_1$ is the number of sinks of $o$, $s_2$ is the number of sinks of $o$ after deleting the $s_1$ original sinks, and so on. Furthermore, call a partition $\nu$ of $s \leq |V(G)|$ \emph{allowable} if there exist disjoint stable sets $S_1, \dots, S_{l(\nu)}$ of $V(G)$ such that $|S_i| = \nu_i$. Then Stanley showed \begin{theorem}[Theorem 3.4, \cite{stanley}]\label{thm:stanley3.4}
Let $\mu$ be a partition of $r \leq |V(G)|$ with $k$ parts. Suppose that for every allowable $\nu$, either \begin{itemize}
\item $\nu$ does not dominate $\mu$, or
\item $\nu_i = \mu_i$ for $i \in \{1,2,\dots,k\}$. \end{itemize}
Let $X_G = \sum c_{\lambda}e_{\lambda}$. Then for any $j \in \{0, 1, \dots, |V(G)|-r\}$ (with $j = 0$ possible only if $r = |V(G)|$) the number of acyclic orientations of $G$ with sink sequence of the form $(\mu_1, \dots, \mu_k, j, \dots)$ is equal to $$ \sum_{\lambda} c_{\lambda} $$ where the sum ranges over all $\lambda$ such that $\lambda_i' = \mu_i$ for $i \in \{1,\dots,k\}$ and $\lambda_{k+1}' = j$ (here $\lambda'$ is the transpose of $\lambda$).\footnote{The originally published version of this theorem is not quite correct as stated because it did not mention the second bullet point above. The version given here was described in an erratum \cite{erratum}.}
\end{theorem}
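The sink sequence defined above is straightforward to compute for a concrete acyclic orientation. The following Python sketch (the name \texttt{sink\_sequence} is ours) repeatedly records and deletes the current sinks; its first entry is the number of sinks appearing in Theorem \ref{thm:stanley3.3}:

```python
# Sink sequence of an acyclic orientation given as a list of arcs
# (u, w) meaning u -> w: record the current sinks, delete them, repeat.
def sink_sequence(vertices, arcs):
    vertices, arcs = set(vertices), set(arcs)
    seq = []
    while vertices:
        # a sink has no outgoing arc
        sinks = {v for v in vertices if not any(u == v for (u, w) in arcs)}
        seq.append(len(sinks))
        vertices -= sinks
        arcs = {(u, w) for (u, w) in arcs if w not in sinks}
    return tuple(seq)
```

For the oriented path $a\rightarrow b\rightarrow c$ the sink sequence is $(1,1,1)$; orienting both edges of a path toward its center gives $(1,2)$.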
This theorem effectively generalizes the previous one to more restricted sums of $e$-basis coefficients (Theorem \ref{thm:stanley3.3} could be viewed as the ``$\mu = \emptyset$ case'' of Theorem \ref{thm:stanley3.4}), but only for sums dictated by those $\mu$ which satisfy certain properties depending on $G$.
In previous work by the first author and Spirkl \cite{delcon}, we generalized the chromatic symmetric function to \emph{vertex-weighted graphs}, and showed that in this setting the chromatic symmetric function admits a natural deletion-contraction relation. We derived a generalization of Theorem \ref{thm:stanley3.3} to vertex-weighted graphs, and provided a novel proof using deletion-contraction that is analogous to Stanley's famous proof that $(-1)^{|V(G)|}\chi_G(-1)$ enumerates acyclic orientations of $G$ \cite{stanleyacyclic}.
This paper has two main parts. First, we prove Theorem \ref{thm:main}, a generalization of Theorem \ref{thm:stanley3.4} to vertex-weighted graphs. We begin by generalizing the notion of a sink sequence to the vertex-weighted setting, allowing us to properly extend the ideas used in \cite{delcon}. This eases the proof of Theorem \ref{thm:main}, which is a combinatorial argument via an inductive edge deletion-contraction procedure.
Second, we introduce Conjecture \ref{conjecture}, a conjectured generalization of Theorem \ref{thm:main} when $l(\mu) = 1$. This generalization allows for a much wider range of acceptable $\mu$ than those that satisfy the two bullet points of Theorem \ref{thm:stanley3.4}. In particular, Conjecture \ref{conjecture} implies that every $\mu$ with one part is acceptable in unweighted claw-free graphs, implying a combinatorial interpretation for all associated $e$-coefficient sums. Thus, if Conjecture \ref{conjecture} can be generalized to $\mu$ of arbitrary length, this could provide a new combinatorial interpretation of any individual $e$-basis coefficient of the chromatic symmetric functions of unweighted claw-free graphs, or at least a substantial subclass of them including unit interval graphs.
The paper is organized as follows: in Section 2, we introduce necessary background in symmetric function theory and graph theory. In Section 3, we prove Theorem \ref{thm:main}, the vertex-weighted generalization of Stanley's Theorem \ref{thm:stanley3.4}. In Section 4, we introduce Conjecture \ref{conjecture} with an illustrative example, and provide supporting evidence in the form of proofs of two special cases. We also discuss the application of Conjecture \ref{conjecture} to unweighted claw-free graphs. We end with concluding remarks in Section 5.
\section{Background}
Throughout this paper, $\mathbb{N}$ will be used to mean positive integers (not including zero), and $\mathcal{P}(\mathbb{N})$ means the set of all sets of positive integers.
\subsection{Partitions and Symmetric Functions}
A \emph{partition} $\pi = \{S_1, \dots, S_k\}$ of a set $S$ is a set of nonempty disjoint subsets of $S$ whose union is all of $S$ (that is, $S_1 \sqcup \dots \sqcup S_k = S$), and we write $\pi \vdash S$ and $|\pi| = |S|$. The elements of $\pi$ are called \emph{blocks} of the partition.
An \emph{integer partition} is a tuple $\lambda = (\lambda_1, \dots, \lambda_k)$ of positive integers satisfying $\lambda_1 \geq \dots \geq \lambda_k$. Where $\sum \lambda_i = n$, we say that $\lambda$ is a \emph{partition of $n$}, and we write $\lambda \vdash n$ or $|\lambda| = n$. Each integer in the tuple $\lambda$ is called a \emph{part} of $\lambda$, and the number of parts of $\lambda$ is $l(\lambda)$. We let $n_i(\lambda)$ be the number of occurrences of $i$ as a part of $\lambda$. For example, if $\mu = (3,2,2,1)$, then $|\mu| = 8$, $l(\mu) = 4$, $n_1(\mu) = 1$, and $n_2(\mu) = 2$.
An integer partition $\lambda \vdash d$ may also be written as $\lambda = 1^{n_1(\lambda)} \dots d^{n_d(\lambda)}$, giving the multiplicity of each part. In particular, $1^k = \underbrace{(1,\dots,1)}_{k \text{ ones}}$.
Given an integer partition $\lambda = (\lambda_1, \dots, \lambda_k)$, if $m \geq \lambda_1$ we write $(m,\lambda)$ to indicate the partition $(m,\lambda_1,\dots,\lambda_k)$, and if $r \leq \lambda_k$ we write $(\lambda,r)$ to indicate the partition $(\lambda_1,\dots,\lambda_k,r)$. We write $\lambda+1^n$ to mean the partition formed by adding $1$ to each of the first $n$ parts of $\lambda$, extending $\lambda$ by $0$s if $l(\lambda) < n$. For example, $(3,2,1) + 1^2 = (4,3,1)$, and $(3,2,1)+1^5 = (4,3,2,1,1)$. When $l(\lambda) \geq n$, we likewise write $\lambda-1^n$ to indicate the partition formed by subtracting $1$ from the first $n$ parts of $\lambda$, removing any arising $0$s and rearranging the parts into weakly decreasing order if necessary. For example, $(3,2,2)-1^2=(2,2,1)$, and $(3,1,1) - 1^2 = (2,1)$.
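These operations are easy to implement; here is a Python sketch with the hypothetical names \texttt{plus\_ones} and \texttt{minus\_ones}:

```python
# lambda + 1^n adds 1 to the first n parts (padding with 0s if needed);
# lambda - 1^n subtracts 1 from the first n parts, dropping any 0s and
# re-sorting into weakly decreasing order.

def plus_ones(lam, n):
    lam = list(lam) + [0] * max(0, n - len(lam))
    return tuple(sorted((p + 1 if i < n else p for i, p in enumerate(lam)),
                        reverse=True))

def minus_ones(lam, n):
    assert len(lam) >= n
    parts = [p - 1 if i < n else p for i, p in enumerate(lam)]
    return tuple(sorted((p for p in parts if p > 0), reverse=True))
```

These reproduce the examples above, e.g. $(3,2,1)+1^5 = (4,3,2,1,1)$ and $(3,1,1)-1^2 = (2,1)$.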
Given integer partitions $\lambda$ and $\mu$ with $|\lambda| = |\mu|$, we say that $\lambda$ \emph{dominates} $\mu$ if for each $i \in \{1, \dots, l(\mu)\}$, we have $\sum_{j=1}^i \lambda_j \geq \sum_{j=1}^i \mu_j$.
Given an integer partition $\lambda$, its \emph{transpose} $\lambda'$ is the partition with parts $\lambda'_i = \sum_{j=i}^{\infty} n_j(\lambda)$. In particular, $\lambda'_1 = l(\lambda)$.
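Both the transpose and dominance are short to check by machine; a Python sketch (names are ours):

```python
# lambda'_i counts the parts of lambda that are >= i, which agrees with
# the sum-of-multiplicities definition; dominance compares partial sums.

def transpose(lam):
    top = lam[0] if lam else 0
    return tuple(sum(1 for p in lam if p >= i) for i in range(1, top + 1))

def dominates(lam, mu):
    assert sum(lam) == sum(mu)
    lam = list(lam) + [0] * max(0, len(mu) - len(lam))
    return all(sum(lam[:i + 1]) >= sum(mu[:i + 1]) for i in range(len(mu)))
```

For instance, $(3,2,2,1)' = (4,3,1)$, whose first part is $l((3,2,2,1)) = 4$ as noted above.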
Given $\pi \vdash S$, its corresponding integer partition $\lambda(\pi) \vdash |S|$ has parts equal to the cardinalities of the blocks of $\pi$.
The following information about symmetric function theory can be found in many textbooks, such as \cite{mac, stanleybook}. A \emph{symmetric function} is a power series $f(x_1,x_2,\dots) \in \mathbb{C}[[x_1,x_2,\dots]]$ of finite degree such that for every $\sigma \in S_{\mathbb{N}}$ we have $f(x_1,x_2,\dots) = f(x_{\sigma(1)}, x_{\sigma(2)}, \dots)$. The space of symmetric functions, denoted $\Lambda$, may be recognized as a graded vector space $\Lambda = \bigoplus_{i=0}^{\infty} \Lambda^i$, where $\Lambda^i$ consists of those symmetric functions which are homogeneous of degree $i$. Each $\Lambda^i$ is finite-dimensional, with dimension equal to the number of integer partitions of $i$, and bases of $\Lambda^i$ (and thus of $\Lambda$) are typically indexed by these integer partitions. Some of the most commonly used bases are \begin{itemize}
\item The \emph{monomial basis}, defined by
$$
m_{\lambda} = \sum x_{i_1}^{\lambda_1} \dots x_{i_{l(\lambda)}}^{\lambda_{l(\lambda)}}
$$
where the sum contains one copy of each monomial formed as $(i_1, \dots, i_{l(\lambda)})$ ranges across all tuples of distinct positive integers.
\item The \emph{augmented} monomial basis, defined by
$$
\widetilde{m}_{\lambda} = \left(\prod_{i=1}^{\infty} n_i(\lambda)!\right)m_{\lambda}.
$$
\item The \emph{elementary symmetric function} basis, defined by
$$
e_n = \sum_{i_1 < \dots < i_n} x_{i_1} \dots x_{i_n}, \, e_{\lambda} = e_{\lambda_1} \dots e_{\lambda_{l(\lambda)}}.
$$ \end{itemize}
If $\{b_{\lambda}\}$ is a basis of symmetric functions indexed by integer partitions, $\mu$ is a fixed integer partition, and $f$ is any symmetric function, $[b_{\mu}]f$ denotes the coefficient of $b_{\mu}$ when $f$ is expanded into the $b$-basis. The function $f$ is said to be \emph{$b$-positive} if $[b_{\mu}]f \geq 0$ for every integer partition $\mu$.
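As a small worked example of this notation, expanding $e_{(2,1)} = e_2e_1$ into the monomial basis gives $$ e_{(2,1)} = m_{(1,1)}\,m_{(1)} = m_{(2,1)} + 3m_{(1,1,1)}, $$ so $[m_{(2,1)}]e_{(2,1)} = 1$ and $[m_{(1,1,1)}]e_{(2,1)} = 3$; in the augmented basis the latter term is $\frac{1}{2}\widetilde{m}_{(1,1,1)}$, since $\widetilde{m}_{(1,1,1)} = 3!\,m_{(1,1,1)}$.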
\subsection{Graphs}
We use basic graph theory terminology as given in \cite{diestel}. A graph $G = (V(G),E(G))$ consists of a set $V$ of \emph{vertices} and a set $E$ of (unordered) vertex pairs called \emph{edges}. Given an edge $e = v_1v_2 \in E$ for $v_1,v_2 \in V$, we say that $v_1$ and $v_2$ are the \emph{endpoints} of $e$, and that $e$ is \emph{incident} with $v_1$ and $v_2$. In this paper graphs need not be simple, meaning that we may have multiple edges with the same two endpoints (\emph{multi-edges}) or an edge whose two endpoints coincide (a \emph{loop}).
Given $S \subseteq V(G)$, we define $G \backslash S$ to be the graph $(V(G) \backslash S, E(G) \backslash E(S))$, where $E(S)$ is the set of edges with at least one endpoint in $S$. We define $G[S]$ to be the graph $(S,E'(S))$, where $E'(S) \subseteq E(G)$ is the set of edges with both endpoints in $S$. We call $G[S]$ the \emph{subgraph of $G$ induced by $S$}, and say that $G[S]$ is an \emph{induced subgraph} of $G$.
A set $S \subseteq V(G)$ is called a \emph{stable set} if there are no edges of $G$ with both endpoints in $S$. A partition $\pi \vdash V(G)$ is called a \emph{stable partition} if each block of $\pi$ is a stable set.
An \emph{orientation} $\gamma$ of $G$ is an assignment of a direction to each edge $e$ of $G$ (that is, an ordering of the two vertices comprising $e$), and we will use $(G, \gamma)$ to denote a graph with orientation $\gamma$ applied. We denote an oriented edge as $v_1 \rightarrow v_2$ and say that the edge points from $v_1$ to $v_2$. An orientation of $G$ is \emph{acyclic} if it contains no directed cycle (that is, the graph has no loops, and for each vertex $v \in V(G)$, there do not exist vertices $v_1, \dots, v_k$ for some $k \geq 1$ such that all of the oriented edges $v \rightarrow v_1, v_1 \rightarrow v_2, \dots, v_k \rightarrow v$ are present in the orientation).
A \emph{sink} of an orientation of a graph $G$ is any vertex $v \in V(G)$ such that no edges point away from $v$ (in particular, an isolated vertex is a sink of every orientation).
Given a graph $G$ and an edge $e = v_1v_2$, the graph $G \backslash e = (V,E \backslash \{e\})$ is the graph of $G$ with the edge $e$ \emph{deleted}. We define the \emph{contraction} of $G$ by $e$ as $G/e = G \backslash e$ if $e$ is a loop, and otherwise $G/e = ((V(G)\backslash\{v_1,v_2\})\cup \{v^*\}, E(G)/e)$ where $E(G)/e$ consists of all edges of $E(G)$, except that wherever $v_1$ or $v_2$ occurs as an endpoint of an edge, it is replaced by $v^*$. Intuitively, we identify the endpoints of $e$ to a single vertex, and adjust all other edges accordingly.
\subsection{Vertex-Weighted Graphs}
A \emph{vertex-weighted graph} $(G,w)$ consists of a graph $G$, and a vertex weight function $w: V(G) \rightarrow \mathbb{N}$. All previous definitions hold identically for vertex-weighted graphs, with the exception that when a vertex-weighted graph $(G,w)$ is contracted by a non-loop edge $e = v_1v_2$, we give $G/e$ a new weight function $w/e$ satisfying $(w/e)(v^*) = w(v_1)+w(v_2)$, and for all other $v \in V(G/e) \backslash \{v^*\}$, $(w/e)(v) = w(v)$.
Given $S \subseteq V(G)$, we write $w(S) = \sum_{v \in S} w(v)$, and we say that the \emph{total weight} of $(G,w)$ is $w(V(G))$.
Since the usual definition of a graph may be captured by the special case where $w(v) = 1$ for all $v \in V(G)$, we will assume in this paper that all graphs are vertex-weighted.
\subsection{Graph Coloring}
Let $(G,w)$ be a vertex-weighted graph. A \emph{coloring} of $G$ is a map $\kappa: V(G) \rightarrow \mathbb{N}$ such that whenever $v_1v_2 \in E(G)$, $\kappa(v_1) \neq \kappa(v_2)$. \begin{definition}[\cite{delcon, stanley}]\label{def:xgw} The \textbf{chromatic symmetric function} of a vertex-weighted graph $(G,w)$ is $$ X_{(G,w)} = \sum_{\kappa} \prod_{v \in V(G)} x_{\kappa(v)}^{w(v)} $$ where the sum ranges over all colorings $\kappa$ of $G$. \end{definition}
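For small graphs, this definition can be checked by brute-force enumeration of colorings. The Python sketch below is our own illustration (the identifiers are hypothetical): it computes $[m_\lambda]X_{(G,w)}$, which equals the number of proper colorings whose monomial is exactly $x_1^{\lambda_1}\cdots x_k^{\lambda_k}$, where $k = l(\lambda)$.

```python
from itertools import product

def m_coefficient(vertices, edges, w, lam):
    """[m_lam] X_{(G,w)}: the number of proper colorings kappa that use only
    colors 0..k-1 (k = len(lam)) and give color i total weight lam_i."""
    k = len(lam)
    count = 0
    for kappa in product(range(k), repeat=len(vertices)):
        colour = dict(zip(vertices, kappa))
        if any(colour[u] == colour[v] for (u, v) in edges):
            continue  # not a proper coloring
        weight_of = [0] * k
        for v in vertices:
            weight_of[colour[v]] += w[v]
        if weight_of == list(lam):
            count += 1
    return count
```

For example, for a single edge with unit weights this gives $[m_{(1,1)}]X = 2$, reflecting $X_{K_2} = 2m_{(1,1)}$, and a graph with a loop has no proper colorings at all.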
Although the chromatic symmetric function does not admit a direct edge deletion-contraction relation for unweighted graphs, for vertex-weighted graphs the following holds. \begin{lemma}[\cite{delcon}, Lemma 2]\label{lem:delcon}
If $(G,w)$ is a vertex-weighted graph, and $e$ is any edge of $G$, then $$ X_{(G,w)} = X_{(G \backslash e, w)}-X_{(G/e,w/e)} $$
\end{lemma}
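For example, let $(G,w)$ be the single edge $e = v_1v_2$ with $w(v_1) = w(v_2) = 1$. Then $X_{(G \backslash e, w)} = \left(\sum_i x_i\right)^2 = m_{(2)} + 2m_{(1,1)}$, while $(G/e, w/e)$ is a single vertex of weight $2$, so $X_{(G/e,w/e)} = \sum_i x_i^2 = m_{(2)}$. The difference is $2m_{(1,1)} = \sum_{i \neq j} x_ix_j = X_{(G,w)}$, as the lemma predicts.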
\section{Generalizing Theorem \ref{thm:stanley3.4} to Vertex-Weighted Graphs}
In \cite{delcon}, Spirkl and the first author generalized Theorem \ref{thm:stanley3.3} to vertex-weighted graphs. One of the main challenges was correctly generalizing the notion of counting sinks of acyclic orientations to vertex-weighted graphs $(G,w)$: should a sink vertex $v$ be counted once, or with weight $w(v)$? The answer turns out to be something in between these two: we need to pick not just acyclic orientations, but also sink maps that assign each sink $v$ a nonempty subset of $\{1,2,\dots,w(v)\}$ (intuitively, we view the vertex as consisting of $w(v)$ ``mini-vertices", and we choose a nonempty subset of these to be the ``true" sinks). Likewise, we will see that care needs to be taken in generalizing Theorem \ref{thm:stanley3.4}.
Theorem \ref{thm:stanley3.4}, instead of simply counting sinks of an acyclic orientation, now enumerates the sequence of sinks obtained by recursively deleting the sinks of an acyclic orientation and considering the sinks of the remainder of the orientation. Already there is a minor difficulty in combining this with the notion of sink maps above: intuitively if a sink $v$ is assigned a nontrivial subset of $\{1,2,\dots,w(v)\}$, we would like to then consider the graph where the weight of $v$ is decreased, and consists only of the remainder of $\{1,2,\dots,w(v)\}$. The difficulty here is that it will be beneficial to keep track of when different subsets of $\{1,2,\dots,w(v)\}$ are used, but that is not possible under the definition of vertex-weighted graphs. Therefore, it is necessary to extend the notion of weighted graphs.
\begin{definition}\label{def:xgwset}
A \textbf{set-weighted graph} $(G,\omega)$ consists of a graph $G$ and a map $\omega: V(G) \rightarrow \mathcal{P}(\mathbb{N})$ such that \begin{itemize}
\item For each $v \in V(G)$, the set $\omega(v)$ is nonempty and finite.
\item For each $n \in \mathbb{N}$, $n$ occurs as an element of at most one $\omega(v)$. \end{itemize}
We say that the \emph{integer weight} of $v \in V(G)$ is then $w(v) = |\omega(v)|$.
Given a set-weighted graph $(G,\omega)$ and an edge $e \in E(G)$, the \textbf{contraction} $(G,\omega)/e = (G/e, \omega/e)$ is defined analogously to contraction in vertex-weighted graphs, except that where $e = v_1v_2$ is a nonloop edge and $v^*$ is the vertex formed by contraction, we define $(\omega/e)(v^*) = \omega(v_1) \cup \omega(v_2)$, and for a vertex $v \neq v^*$ we have $(\omega/e)(v) = \omega(v)$.
\end{definition}
\begin{definition}\label{def:xgom}
The chromatic symmetric function of a set-weighted graph $(G,\omega)$ is given by $$ X_{(G,\omega)} = \sum_{\kappa} \prod_{v \in V(G)} x_{\kappa(v)}^{w(v)} $$ where the sum ranges over all colorings $\kappa$ of $G$.
\end{definition}
It is straightforward to verify that Lemma \ref{lem:delcon} extends to a deletion-contraction relation on set-weighted graphs.
\begin{lemma}\label{lem:delconset}
If $(G,\omega)$ is a set-weighted graph, and $e$ is any edge of $G$, then $$ X_{(G,\omega)} = X_{(G \backslash e, \omega)}-X_{(G/e,\omega/e)} $$
\end{lemma}
Thus, we now label the ``mini-vertices" explicitly, without changing the fundamental notion of the integer-weighted chromatic symmetric function.
Now, the definitions that follow illustrate how we combine the sink sequences of Theorem \ref{thm:stanley3.4} with the notion above of sink maps as used in the generalization of Theorem \ref{thm:stanley3.3} in \cite{delcon}.
\begin{definition} Let $\ell\in\mathbb{N}$ and let $(G,\omega)$ be a set-weighted graph. An \textbf{$\ell$-step weight map} of $(G,\omega)$ is a function $S:V(G)\to (\mathcal{P}(\mathbb{N}))^{\ell}$ such that for all $v\in V(G)$, we have \[\bigsqcup_{i=1}^{\ell}S(v)_i\subseteq \omega(v)\] where $S(v)_i$ is the $i^{th}$ coordinate of $S(v)$ (note that this is a disjoint union, so each element of $\omega(v)$ occurs in at most one of the $S(v)_i$).
We define the \textbf{$\ell$-step weight sequence} of an $\ell$-step weight map $S$ to be $\wts(G,S)=(s_1,\ldots,s_\ell)$, where for all $i \in \{1,\dots,\ell\}$, we have
\[s_i=\sum_{v\in V(G)}|S(v)_i|.\]
\end{definition}
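As a sanity check on this definition, the following Python sketch (our own, with hypothetical identifiers; `wts` mirrors the notation of the text) represents an $\ell$-step weight map as a dictionary sending each vertex to an $\ell$-tuple of sets, verifies the disjointness condition, and computes the weight sequence.

```python
def is_weight_map(S, omega, ell):
    """Check that S is an ell-step weight map: each S[v] is an ell-tuple of
    pairwise disjoint sets whose union is contained in omega[v]."""
    for v, parts in S.items():
        if len(parts) != ell:
            return False
        union = set()
        for part in parts:
            if part & union or not part <= omega[v]:
                return False  # parts overlap, or escape omega[v]
            union |= part
    return True

def wts(S, ell):
    """The weight sequence (s_1, ..., s_ell), where s_i sums |S(v)_i| over v."""
    return tuple(sum(len(parts[i]) for parts in S.values()) for i in range(ell))
```

For instance, with $\omega(a) = \{1,2\}$, $\omega(b) = \{3\}$ and $S(a) = (\{1\},\{2\})$, $S(b) = (\{3\},\varnothing)$, the weight sequence is $(2,1)$.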
\begin{definition} Let $S$ be an $\ell$-step weight map on $(G, \omega)$. For $i \in \{0, \dots, \ell\}$, define set-weighted graphs $(G_i(S), \omega_i(S))$ (where we may suppress $S$ when it is clear) recursively as follows: \begin{itemize}
\item $(G_0, \omega_0) = (G, \omega)$.
\item Given $(G_{i-1}, \omega_{i-1})$:
\begin{itemize}
\item Set $V(G_i) = V(G_{i-1}) \backslash \{v \in V(G_{i-1}) : S(v)_{i} = \omega_{i-1}(v)\}$
\item Set $E(G_i)$ to be the set of all edges of $E(G_{i-1})$ with both endpoints in $V(G_i)$.
\item For each $v \in V(G_i)$, set $\omega_i(v) = \omega_{i-1}(v) \backslash S(v)_{i}$.
\end{itemize} \end{itemize}
\end{definition}
Intuitively, suppose we are given $G_{i-1}$. For all $v\in V(G_{i-1})$, we remove the mini-vertices given by $S(v)_i$, and we remove the ``whole" vertex if there is no mini-vertex left, and then define the resulting graph to be $G_i$.
Essentially, we care about when an $\ell$-step weight map $S$ yields a graph sequence $(G_i,\omega_i)$ that corresponds to the graphs and sinks that are recursively formed in computing the sink sequence of an acyclic orientation $\gamma$ of $(G,\omega)$. However, the definitions above do not depend inherently on such a choice of $\gamma$, because it will be easier for proofs to consider all choices of $\gamma$ and $S$ and discard those that do not work together.
\begin{definition} Given a set-weighted graph $(G,\omega)$ and an acyclic orientation $\gamma$ of $G$, we say that an $\ell$-step weight map $S$ of $(G,\omega)$ is \textbf{$\gamma$-admissible} if for all $i \in \{1,\dots,\ell\}$ and for all $v \in V(G)$, it holds that $S(v)_i\neq \varnothing$ if and only if $v \in V(G_{i-1})$ and $v$ is a sink of the restriction of $\gamma$ to $G_{i-1}$. When $S$ is $\gamma$-admissible, we will denote its corresponding weight sequence as $\wts(G,\gamma,S)$. \end{definition}
We also introduce notation for the specific sums of $e$-basis coefficients we will be looking at.
\begin{definition}\label{def:sigmamuj} Let $f\in \Lambda^d$ and write $f=\sum_{\lambda\vdash d}c_{\lambda}e_{\lambda}$. Let $\mu=(\mu_1,\ldots,\mu_\ell)\vdash r\leq d$. Then we define \[\sigma_{\mu}(f):=\sum_{\substack{\lambda\vdash d \\ \lambda'=(\mu_1,\ldots,\mu_\ell,\ldots)}}c_\lambda\] where the sum ranges over all $\lambda \vdash d$ such that the first $\ell$ parts of $\lambda'$ are nonzero and satisfy $\lambda'_i = \mu_i$ for all $i \in \{1,\dots,\ell\}$. \end{definition}
We also briefly state without proof a well-known property of symmetric function expansions (\cite[Theorem 7.4.4]{stanleybook}) that will be used repeatedly. \begin{lemma}[\cite{stanleybook}]\label{lem:edom}
Let $\lambda$ and $\mu$ be partitions of the same integer such that $[e_{\mu}]m_{\lambda} \neq 0$. Then $\mu$ dominates $\lambda'$.
\end{lemma}
Before proceeding to the main theorem of this section, we first prove auxiliary lemmas that will be necessary.
\begin{lemma}\label{lem:mu+1^k}
Let $\mu$ be a partition, and let $\lambda\vdash|\mu|$. Let $k\geq \mu_1$ be given. Then \[[e_\lambda]m_{\mu}=[e_{\lambda+1^k}]m_{(k,\mu)}\] where $\lambda+1^k$ is the partition formed by adding $1$ to the first $k$ parts of $\lambda$ (extending $\lambda$ with $0$s if necessary to make $l(\lambda) \geq k$), and if $\mu = (\mu_1, \dots, \mu_{l(\mu)})$ then $(k,\mu) = (k, \mu_1, \dots, \mu_{l(\mu)})$. \end{lemma}
\begin{proof}
It is well-known that for any integer partitions $\lambda$ and $\mu$ it holds that $[e_{\lambda}]m_{\mu} = [e_{\mu}]m_{\lambda}$ (that is, the transition matrix between these bases is symmetric) \cite[Corollary 7.4.2]{stanleybook}. Therefore, it is equivalent to show that \[[e_\mu]m_\lambda=[e_{(k,\mu)}]m_{\lambda+1^k}.\] Furthermore, since $[e_\mu]m_\lambda=[e_{(k,\mu)}]m_\lambda e_k$ and $e_k=m_{1^k}$, it suffices to show that \[[e_{(k,\mu)}]m_{\lambda+1^k}=[e_{(k,\mu)}]m_{\lambda}m_{1^k}.\] By expanding $m_\lambda m_{1^k}$, it is straightforward to verify that $[m_{\lambda+1^k}]m_\lambda m_{1^k}=1$. Let $\nu$ be such that $[m_\nu]m_\lambda m_{1^k}\neq 0$ and $\nu\neq \lambda+1^k$. Then again by expanding $m_\lambda m_{1^k}$, we may verify that $l(\nu)>k$. It follows that $(k,\mu)$ does not dominate $\nu'$, in which case it follows from Lemma \ref{lem:edom} that $[e_{(k,\mu)}]m_\nu=0$, and this finishes the proof. \end{proof}
\begin{lemma} \label{base-case-induction} Let $\mu$ and $\nu$ be partitions. Let $k\in\mathbb{N}$ be such that $k\geq \mu_1$ and $k\geq\nu_1$. Then \[\sigma_{(k,\nu)}(m_{(k,\mu)})=\sigma_\nu(m_\mu).\] \end{lemma}
\begin{proof}
Let $\lambda\vdash k+|\mu|$ be such that $\lambda'$ has the form $(k,\nu,\ldots)$ (so in particular $l(\lambda) = k$).
Then $\lambda_* = \lambda-1^k$ is a partition of $|\mu|$ such that $\lambda_*'$ has the form $(\nu,\ldots)$, and by the previous lemma, we have \[[e_\lambda]m_{(k,\mu)}=[e_{\lambda_*}]m_\mu.\]
Conversely, suppose $\lambda$ is a partition of $|\mu|$ such that $\lambda'$ has the form $(\nu,\ldots)$. Let $\lambda_*=\lambda+1^k$ be a partition of $k+|\mu|$. Note that since $k\geq\nu_1$, $\lambda_*'$ has the form $(k,\nu,\ldots)$. Again, using the previous lemma, we get \[[e_\lambda]m_\mu=[e_{\lambda_*}]m_{(k,\mu)}.\]
Therefore, we have $\sigma_{(k,\nu)}(m_{(k,\mu)})=\sigma_\nu(m_\mu)$, as desired. \end{proof}
Before the final lemmas, we introduce some more terminology that will be used in the lemmas and in the main theorem.
\begin{definition}\label{def:partialdom} Let $\mu=(\mu_1,\ldots,\mu_\ell)$ and $\nu=(\nu_1,\ldots,\nu_m)$ be finite sequences of positive integers (note that $\mu$ and $\nu$ need not be partitions). We say $\mu$ \textbf{partially dominates} $\nu$ if either $\mu_i=\nu_i$ for all $i \in \{1,\dots,\ell\}$, or there exists some $i \in \{1,\dots,\ell\}$ such that $\mu_1+\cdots+\mu_i>\nu_1+\cdots+\nu_i$ (where we take $\nu_j = 0$ if $j > m$). \end{definition}
In other words, $\mu$ partially dominates $\nu$ if $\nu$ does not dominate $\mu$ ``nontrivially".
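Partial dominance is easy to test directly. The following Python sketch (our own illustration) follows Definition \ref{def:partialdom} literally, padding $\nu$ with $0$s where needed.

```python
def partially_dominates(mu, nu):
    """True if mu partially dominates nu: either mu_i == nu_i for all
    i <= len(mu), or some prefix sum of mu strictly exceeds that of nu."""
    nu_pad = list(nu) + [0] * max(0, len(mu) - len(nu))
    if all(mu[i] == nu_pad[i] for i in range(len(mu))):
        return True
    return any(sum(mu[:i + 1]) > sum(nu_pad[:i + 1]) for i in range(len(mu)))
```

For example, $(3,2)$ partially dominates $(3,2,1)$ (the prefixes agree) and $(4,1)$ partially dominates $(3,2)$ (since $4 > 3$), but $(3,2)$ does not partially dominate $(4,1)$, since $(4,1)$ dominates $(3,2)$ nontrivially.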
\begin{definition}\label{def:maxallow} A partition $\mu=(\mu_1,\ldots,\mu_\ell)$ is \textbf{allowable} in a set-weighted graph $(G,\omega)$ if there exists $W \subseteq V(G)$ and a stable partition $(S_1,\ldots,S_\ell)$ of $W$ such that $\sum_{v\in S_i}w(v)=\mu_i$ for all $i \in \{1,\dots,\ell\}$.
A partition $\mu\vdash r\leq d$, where $d$ is the total weight of $(G,\omega)$, is \textbf{maximal} in $(G,\omega)$ if $\mu$ partially dominates all allowable partitions in $(G,\omega)$ (not just allowable partitions of $r$). Note that $\mu$ need not be allowable; for example, in the graph consisting of a single edge with unit vertex weights, $(2)$ is maximal but not allowable. \end{definition}
\begin{definition}\label{def:sinkseq}
Let $\gamma$ be an acyclic orientation of $G$. For each $i\in\mathbb{N}$, recursively define graphs by $G^0 = G$ and $G^i$ the graph formed by deleting from $G^{i-1}$ all sinks of the restriction of $\gamma$ to $G^{i-1}$. Define $\Sink_i(\gamma)$ to be the set of all sinks of the restriction of $\gamma$ to $G^{i-1}$. For each $i$, let $\sink_i(\gamma)=|\Sink_i(\gamma)|$.
The \textbf{type} of $\gamma$ is the sequence $\lambda$ of positive integers such that \[\lambda_i=\sum_{v\in \Sink_i(\gamma)}w(v)\] for all $i$ such that $\Sink_i(\gamma) \neq \varnothing$. In particular, if we rearrange the terms of $\lambda$ in non-increasing order, then we get an allowable partition of $(G,\omega)$. \end{definition}
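Computing the sink sequence and type of a given acyclic orientation is straightforward. The Python sketch below (identifiers are ours; it assumes the input orientation is acyclic, so that every nonempty stage has a sink) peels off the sinks level by level.

```python
def sink_sequence(vertices, arcs):
    """Sink_i(gamma) for i = 1, 2, ...: repeatedly record and delete the sinks
    of an acyclic orientation, given as arcs (u, v) meaning u -> v."""
    vertices, arcs = set(vertices), set(arcs)
    levels = []
    while vertices:
        # a sink has no outgoing arc; acyclicity guarantees one exists
        sinks = {v for v in vertices if all(u != v for (u, _) in arcs)}
        levels.append(sinks)
        vertices -= sinks
        arcs = {(u, v) for (u, v) in arcs if u in vertices and v in vertices}
    return levels

def orientation_type(vertices, arcs, w):
    """The type of gamma: lambda_i = total weight of Sink_i(gamma)."""
    return [sum(w[v] for v in level) for level in sink_sequence(vertices, arcs)]
```

For the orientation $a \rightarrow b$, $b \rightarrow c$, $d \rightarrow c$ with weights $w(a)=1$, $w(b)=2$, $w(c)=1$, $w(d)=3$, the sink sequence is $\{c\}, \{b,d\}, \{a\}$ and the type is $(1,5,1)$.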
\begin{definition}\label{def:standard} Let $S$ be an $(\ell+1)$-step weight map of $(G,\omega)$, and let $\gamma$ be an acyclic orientation of $G$. We say that $S$ is in \textbf{$\gamma$-standard form} if for all $1\leq i\leq\ell$, \[S(v)_i=\begin{cases} \omega(v) & \text{ if } v\in \Sink_i(\gamma) \\ \varnothing & \text{ if } v\notin \Sink_i(\gamma), \end{cases}\] and $S(v)_{\ell+1}\neq \varnothing$ if and only if $v\in\Sink_{\ell+1}(\gamma)$. \end{definition}
Note that if $S$ is in $\gamma$-standard form, then $S$ is $\gamma$-admissible. Now we proceed with our final lemmas.
\begin{lemma} \label{maximal} Let $(G,\omega)$ be a set-weighted graph with $n$ vertices and total weight $d$. Let $\mu=(\mu_1,\ldots,\mu_\ell)$ be a partition. Let $\gamma$ be an acyclic orientation of $G$, and let $\lambda$ be the type of $\gamma$. Then \begin{itemize}
\item[(a)] If $\mu_i=\lambda_i$ for all $1\leq i\leq\ell$, then there exists exactly one $\gamma$-admissible $\ell$-step weight map on $(G,\omega)$ with $\wts(G,\gamma,S)=(\mu_1,\ldots,\mu_\ell)$, namely the map $S$ such that for all $1\leq i\leq\ell$,
\[S(v)_i=\begin{cases}
\omega(v) & \text{ if } v\in \Sink_i(\gamma) \\
\varnothing & \text{ if } v\notin \Sink_i(\gamma).
\end{cases}\]
\item[(b)] If there exists some $1\leq i\leq\ell$ such that $\mu_1+\cdots+\mu_i>\lambda_1+\cdots+\lambda_i$, then there does not exist a $\gamma$-admissible $\ell$-step weight map $S$ on $(G,\omega)$ such that $\wts(G,\gamma,S)=(\mu_1,\ldots,\mu_\ell)$.
\item[(c)] Let $\mu$ be a maximal partition in $(G,\omega)$, and let $S$ be an $(\ell+1)$-step weight map on $(G,\omega)$ with $\wts(G,\gamma,S)=(\mu_1,\ldots,\mu_\ell,j)$ for some $j$. Then $S$ is $\gamma$-admissible if and only if $S$ is in $\gamma$-standard form. \end{itemize} \end{lemma}
\begin{proof}
It is straightforward to verify (a). For (b), assume to the contrary that there exists a $\gamma$-admissible $\ell$-step weight map $S$ such that $\wts(G,\gamma,S)=(\mu_1,\ldots,\mu_\ell)$. Then since $S$ is $\gamma$-admissible, we have $\mu_1+\cdots+\mu_i\leq \lambda_1+\cdots+\lambda_i$ for all $1\leq i\leq\ell$, as otherwise there is not enough weight among the corresponding vertices of $G$ to build $S$; this contradicts our assumption.
For (c), it follows from the definition that if $S$ is in $\gamma$-standard form, then $S$ is $\gamma$-admissible. Conversely, assume that $S$ is $\gamma$-admissible. Since $\mu$ is maximal, $\mu$ partially dominates the partition obtained by sorting the parts of $\lambda$ in non-increasing order. It is easy to verify that then $\mu$ also partially dominates $\lambda$ as an unordered integer sequence. Then by part (b), it is the case that $\mu_i=\lambda_i$ for all $1\leq i\leq\ell$, and then by part (a) it follows that $S$ is in $\gamma$-standard form.
\end{proof}
\begin{lemma}\label{lem:sinkpath}
Let $G$ be a graph and $\gamma$ an acyclic orientation of $G$. Then for every $v \in V(G)$, $v \in \Sink_i(\gamma)$ if and only if the longest directed path in $\gamma$ starting at $v$ contains exactly $i$ vertices.
\end{lemma}
\begin{proof}
The proof is by induction on $i$. It is straightforward to verify the claim for $i=1$.
For the inductive step, assume the claim holds for all positive integers less than or equal to a fixed $k$. Using the notation of Definition \ref{def:sinkseq}, suppose that $v \in \Sink_{k+1}(\gamma)$, so $v$ is a sink of the restriction of $\gamma$ to $G^{k}$. Note that each step in the construction moving from $G^i$ to $G^{i+1}$ for $i \in \{0, \dots, k-1\}$ removes the last vertex of each directed path starting at $v$ that is present in $G^i$, and no other vertices along these paths. Since $v$ has not been deleted, it follows that at least one such directed path contains at least $k$ vertices in addition to $v$, so has length at least $k+1$.
On the other hand, if there were a directed path starting at $v$ in $\gamma$ containing at least $k+2$ vertices, then $v$ would not be a sink of $G^k$, since it would have an outgoing edge remaining. Therefore, the longest directed path in $\gamma$ starting at $v$ contains exactly $k+1$ vertices.
Conversely, it is easy to check that if the longest directed path in $\gamma$ starting at $v$ contains $i$ vertices, then $v \in \Sink_i(\gamma)$, since as noted above the process defining the sink sets removes exactly one vertex from each such directed path at each step.
\end{proof}
We will need one more tool, which is a set-weighted version of Theorem \ref{thm:stanley3.3}. The original formulation of this theorem was proved by the first author and Spirkl for vertex-weighted graphs, but it is straightforward to extend to set-weighted graphs, and it is presented here using the terminology developed so far.
\begin{theorem}[Theorem 8, \cite{delcon}]\label{one-level-sink}
Let $(G,\omega)$ be a set-weighted graph with $n$ vertices and total weight $d$. Then \[\sigma_j\bp{X_{(G,\omega)}}=(-1)^{d-n}\sum_{\wts(G,\gamma,S)=(j)}(-1)^{j-\sink_1(\gamma)},\] where the sum ranges over all ordered pairs consisting of an acyclic orientation $\gamma$ of $G$ and a $\gamma$-admissible one-step weight map $S$ of $G$ such that $\wts(G,\gamma,S)=(j)$.
\end{theorem}
We are now ready to prove the main theorem.
\begin{theorem}\label{thm:main} Let $(G,\omega)$ be a set-weighted graph with $n$ vertices and total weight $d$. Suppose that $X_{(G,\omega)}=\sum_{\lambda\vdash d}c_{\lambda}e_{\lambda}$. Let $\mu=(\mu_1,\ldots,\mu_\ell)\vdash r\leq d$ be a maximal partition in $(G,\omega)$. Fix $0\leq j\leq d-r$, where we can choose $j=0$ only when $r=d$. Then \begin{equation} \label{main-thm}
\sigma_{\mu,j}\bp{X_{(G,\omega)}}=(-1)^{d-n}\sum_{\substack{\wts(\gamma,S)=(\mu_1,\ldots,\mu_\ell,j) \\ S\text{ admissible}}}(-1)^{|(\mu,j)|-\sum_{i=1}^{\ell+1}\sink_i(\gamma)}, \end{equation} summed over all ordered pairs consisting of an acyclic orientation $\gamma$ of $G$, and a $\gamma$-admissible $(\ell+1)$-step weight map $S$ of $G$ such that $\wts(G, \gamma,S)=(\mu_1,\ldots,\mu_\ell,j)$. \end{theorem}
\textbf{Note.} We assume that all orientations $\gamma$ occurring in sums within the proof are acyclic unless otherwise stated.
\begin{proof}
The proof will be by induction on the number of non-edges of $(G, \omega)$.
{\large\textbf{Base Case}}
It suffices to consider the case when $G$ is a complete graph, since any graph with multi-edges has the same chromatic symmetric function as the same graph with each multi-edge replaced by a single edge.
Let $(G,\omega)$ be a complete set-weighted graph, with vertices labelled $v_1, \dots, v_n$ such that their integer weights satisfy $w(v_1)\geq w(v_2)\geq\cdots\geq w(v_n)$. Note that $X_{(G,\omega)}=\widetilde{m}_\lambda$, and that $\lambda=(w(v_1),\ldots,w(v_n))\vdash d$ is an allowable partition of $(G,\omega)$. Since $\mu$ is maximal, $\mu$ partially dominates $\lambda$. There are two ways this can happen.
First suppose there exists some $i \in \{1,\dots,\ell\}$ such that $\mu_1+\ldots+\mu_i>w(v_1)+\cdots+w(v_i)$. Let $\nu\vdash d$ be such that $\nu'$ has the form $(\mu_1,\ldots,\mu_\ell,j,\ldots)$. Then we know that $\lambda$ does not dominate $\nu'$, or equivalently that $\nu$ does not dominate $\lambda'$. Hence by Lemma \ref{lem:edom} we have $[e_\nu]\widetilde{m}_\lambda = [e_\nu]m_\lambda=0$, so the left-hand side of \eqref{main-thm} in this case is \[\sigma_{\mu,j}\bp{X_{(G,\omega)}}=0.\]
Likewise, we claim that there does not exist an acyclic orientation $\gamma$ and a $\gamma$-admissible $(\ell+1)$-step weight map $S$ such that $\wts(G, \gamma,S)=(\mu_1,\ldots,\mu_\ell,j)$. Indeed, let $\gamma$ be any acyclic orientation, and let $\nu$ be the type of $\gamma$. Since $G$ is a complete graph, $\nu$ is a permutation of $\lambda$. Since $w(v_1)\geq\cdots\geq w(v_n)$, we must have \[\nu_1+\cdots+\nu_i\leq w(v_1)+\cdots+w(v_i)<\mu_1+\cdots+\mu_i.\] Then by Lemma \ref{maximal}, there does not exist a $\gamma$-admissible $\ell$-step weight map with weight sequence $(\mu_1,\ldots,\mu_\ell)$, so it follows that there does not exist a $\gamma$-admissible $(\ell+1)$-step weight map with weight sequence $(\mu_1,\ldots,\mu_\ell,j)$. Therefore, the right-hand side of \eqref{main-thm} is $0$, and \eqref{main-thm} holds in this case.
Therefore, since $\mu$ partially dominates $\lambda$ we may assume that $\mu_i = w(v_i)$ for each $i \in \{1,\dots,\ell\}$. First, we simplify the left-hand side of \eqref{main-thm}. Recall that for each $a\in\mathbb{N}$, $n_a$ denotes the number of times that $a$ occurs as a part of $\lambda$. Then \[\sigma_{\mu,j}\bp{X_{(G,\omega)}}=\sigma_{\mu,j}(\widetilde{m}_\lambda)=\sigma_{\mu,j}\bp{\lrp{\ts{\prod_{a=1}^{\infty}n_a!}}m_\lambda}=\lrp{\ts{\prod_{a=1}^{\infty}n_a!}}\cdot \sigma_{\mu,j}(m_\lambda).\] Then since $\mu_i=w(v_i)$ for all $i \in \{1,\dots,\ell\}$, we can apply Lemma \ref{base-case-induction} repeatedly and obtain
\begin{align*}
\sigma_{\mu,j}\bp{X_{(G,\omega)}} &= \lrp{\ts{\prod_{a=1}^{\infty}n_a!}}\cdot \sigma_{\mu,j}(m_\lambda) \\
&= \lrp{\ts{\prod_{a=1}^{\infty}n_a!}}\cdot \sigma_j\bp{m_{(w(v_{\ell+1}),\ldots,w(v_n))}} \\
&= \frac{\prod_{a=1}^{\infty}n_a!}{\prod_{a=1}^{\infty}n'_a!}\cdot \sigma_j\bp{\widetilde{m}_{(w(v_{\ell+1}),\ldots,w(v_n))}}, \end{align*} where for each $a\in\mathbb{N}$, $n'_a$ is the number of times that $a$ occurs as a part of the partition $\bp{w(v_{\ell+1}),\ldots,w(v_n)}$.
Now we may use Theorem \ref{one-level-sink} to evaluate $\sigma_j\bp{\widetilde{m}_{(w(v_{\ell+1}),\ldots,w(v_n))}}$. Let $(G',\omega')$ be the complete graph formed by deleting the vertices $v_1, \dots, v_{\ell}$ from $(G,\omega)$, so $(G',\omega')$ is the complete graph with vertex set $\{v_{\ell+1},\ldots,v_{n}\}$ and set weights $\omega'(v_i) = \omega(v_i)$ for $i \in \{\ell+1,\dots,n\}$. Note that then $w'(v_i)=w(v_i)$ for all $i \in \{\ell+1,\dots,n\}$.
Let $\mathcal{S}'$ be the set of all ordered pairs $(\gamma,S)$ such that $\gamma$ is an acyclic orientation of $G'$, and $S$ is a $\gamma$-admissible one-step weight map with weight sequence $(j)$. Note that $\sink_1(G',\gamma)=1$ for any such $\gamma$ since $G'$ is a complete graph.
Then by applying Theorem \ref{one-level-sink} we have \begin{align*} \sigma_{\mu,j}\bp{X_{(G,\omega)}} &= \frac{\prod_{a=1}^{\infty}n_a!}{\prod_{a=1}^{\infty}n'_a!}\cdot (-1)^{w(v_{\ell+1})+\cdots+w(v_n)-n+\ell}\sum_{\wts(G',\gamma,S)=(j)}(-1)^{j-1} \\ &= \frac{\prod_{a=1}^{\infty}n_a!}{\prod_{a=1}^{\infty}n'_a!}\cdot
(-1)^{w(v_{\ell+1})+\cdots+w(v_n)-n+\ell+j-1}\cdot |\mathcal{S}'|. \end{align*}
Next we will simplify the right-hand side of \eqref{main-thm} and show that it is equal to the above. By Lemma \ref{maximal}, it suffices to consider only the ordered pairs $(\gamma, S)$ in which $\gamma$ is an acyclic orientation of $G$ whose type has the form $(\mu_1,\ldots,\mu_\ell,\ldots)$. Therefore, let $\mathcal{S}$ be the set of all ordered pairs $(\gamma,S)$ such that $\gamma$ is such an acyclic orientation, and $S$ is a $\gamma$-admissible $(\ell+1)$-step weight map with weight sequence $(\mu_1,\ldots,\mu_\ell,j)$.
Then the right-hand side of \eqref{main-thm} becomes
\[(-1)^{d-n}\sum_{(\gamma,S)\in \mathcal{S}}(-1)^{j+\mu_1+\cdots+\mu_\ell-\ell-1}=(-1)^{d-n+j+\mu_1+\cdots+\mu_\ell-\ell-1}\cdot |\mathcal{S}| ,\] where we have used the fact that $\sum_{i=1}^{\ell+1}\sink_i(\gamma)=\ell+1$ since $G$ is a complete graph.
To construct an element $(\gamma,S)\in \mathcal{S}$, we first pick vertices $v_{k_1},\ldots,v_{k_\ell}$ such that $\Sink_i(\gamma)=\{v_{k_i}\}$ and $w(v_{k_i})=\mu_i$ for all $i \in \{1,\dots,\ell\}$. This can be done in $\bp{\prod_{a=1}^{\infty}n_a!}/\bp{\prod_{a=1}^{\infty}n'_a!}$ ways, where $n_a$ and $n'_a$ are defined as above when considering the left-hand side of \eqref{main-thm}. We then must construct $S$ such that for $i \in \{1,\dots,\ell\}$, we have $S(v_{k_i})_i = \omega(v_{k_i})$ and $S(v)_i = \varnothing$ if $v \neq v_{k_i}$.
Then, since we need $\wts(\gamma,S)=(\mu_1,\ldots,\mu_\ell,j)$, to finish constructing $(\gamma, S)$ we require an acyclic orientation $\gamma'$ of $G' = G \backslash \{v_{k_1},\ldots,v_{k_\ell}\}$ and a $\gamma'$-admissible one-step weight map on $G'$ with sink weight sequence $(j)$. By definition this may be chosen in $|\mathcal{S}'|$ ways, where $\mathcal{S}'$ is defined above. This shows that
\[|\mathcal{S}|=\frac{\prod_{a=1}^{\infty}n_a!}{\prod_{a=1}^{\infty}n'_a!}\cdot|\mathcal{S}'|.\]
Therefore, the right-hand side of \eqref{main-thm} becomes \begin{align*}
(-1)^{d-n+j+\mu_1+\cdots+\mu_\ell-\ell-1}\cdot\frac{\prod_{a=1}^{\infty}n_a!}{\prod_{a=1}^{\infty}n'_a!}\cdot|\mathcal{S}'| &= (-1)^{d-n+j-\mu_1-\cdots-\mu_\ell+\ell-1}\cdot\frac{\prod_{a=1}^{\infty}n_a!}{\prod_{a=1}^{\infty}n'_a!}\cdot|\mathcal{S}'| \\
&= (-1)^{w(v_{\ell+1})+\cdots+w(v_n)-n+\ell+j-1}\cdot\frac{\prod_{a=1}^{\infty}n_a!}{\prod_{a=1}^{\infty}n'_a!}\cdot|\mathcal{S}'| \\
&= \sigma_{\mu,j}\bp{X_{(G,\omega)}}, \end{align*} which establishes the base case of the theorem.
{\large\textbf{Inductive Step}}
Let $(G,\omega)$ be a set-weighted graph with $g\geq 1$ non-edges, and assume by induction that \eqref{main-thm} holds for all set-weighted graphs with fewer than $g$ non-edges.
Let $e=v_1v_2\notin E(G)$. We may assume $e$ is not a loop, as otherwise both sides of \eqref{main-thm} are equal to $0$, and the result holds. For clarity, we will sometimes suppress explicit mention of $\omega$ in the remainder of this proof, but whenever a graph is mentioned, it is assumed to be a set-weighted graph, and the set-weighting function will be apparent in terms of $\omega$.
Let $G+e$ be the (set-weighted) graph such that $V(G+e)=V(G)$ and $E(G+e)=E(G)\cup\{e\}$, and let $G/e = (G+e)/e$. We will use the above terminology, so when $e$ is a nonloop edge, $v^* \in V(G/e)$ is the resulting vertex after contracting $v_1$ and $v_2$.
It is easy to see that both $G+e$ and $G/e$ have fewer than $g$ non-edges. In order to apply the inductive hypothesis, we would like to show that if $\mu$ is a maximal partition of $G$, then $\mu$ is also a maximal partition of both $G+e$ and $G/e$.
We first check that $\mu$ is a maximal partition for $G+e$. Let $\nu$ be an allowable partition in $G+e$. Then there exists some $W\subseteq V(G+e)=V(G)$ and a stable partition $(S_1,\ldots,S_k)$ of $W$ such that $w(S_i)=\nu_i$ for all $i \in \{1, \dots, \ell(\nu)\}$. For each such $i$, since $S_i$ is stable in $G+e$, $S_i$ is also stable in $G$, so $\nu$ is also an allowable partition in $G$. Since $\mu$ is a maximal partition in $G$, $\mu$ partially dominates $\nu$. Since the choice of $\nu$ was arbitrary, we conclude that $\mu$ is maximal in $G+e$.
Next, we check that $\mu$ is also a maximal partition for $G/e$. Let $\nu$ be an allowable partition in $G/e$. Then there exists some $W\subseteq V(G/e)$ and a stable partition $(S_1,\ldots,S_k)$ of $W$ such that $w(S_i)=\nu_i$ for all $i \in \{1,\dots,\ell(\nu)\}$. If $v^*\notin S_i$ for every such $i$, then each $S_i$ is a stable set in the graph $G$, so that $(S_1,\ldots,S_k)$ is a stable partition of a subset of $V(G)$, and in this case, $\nu$ is an allowable partition in $G$. Then since $\mu$ is maximal in $G$, we have that $\mu$ partially dominates $\nu$.
Suppose that instead $v^*\in S_{i_0}$ for some $i_0 \in \{1,\dots,\ell(\nu)\}$. Let $S'_{i_0}=(S_{i_0}\setminus\{v^*\})\cup\{v_1,v_2\}$ be a set of vertices in $G$. Since $v_1v_2\notin E(G)$, $S'_{i_0}$ is stable in $G$, and for $i\neq i_0$, it is clear that $S_i$ is stable in $G$. It follows that $(S_1,\dots,S_{i_0-1},S'_{i_0},S_{i_0+1},\dots,S_k)$ is a stable partition of some subset of vertices of $G$. Since $w(S_{i_0})=w(S'_{i_0})$, it follows that $\nu$ is an allowable partition in $G$, and that $\mu$ partially dominates $\nu$.
Since $\mu$ partially dominates $\nu$ in both cases, it follows that $\mu$ is a maximal partition in $G/e$.
Therefore, we can apply the inductive hypothesis on $G+e$ and $G/e$. We have \[ \sigma_{\mu,j}\bp{X_{G+e}}=(-1)^{d-n}\sum_{\substack{\wts(G+e,\gamma,S)=(\mu_1,\ldots,\mu_\ell,j) \\ S\text{ admissible}}}(-1)^{j+\sum_{i=1}^{\ell}\mu_i-\sum_{i=1}^{\ell+1}\sink_i(\gamma)} \] and \[ \sigma_{\mu,j}\bp{X_{G/e}}=(-1)^{d-n+1}\sum_{\substack{\wts(G/e,\gamma,S)=(\mu_1,\ldots,\mu_\ell,j) \\ S\text{ admissible}}}(-1)^{j+\sum_{i=1}^{\ell}\mu_i-\sum_{i=1}^{\ell+1}\sink_i(\gamma)}. \]
By the deletion-contraction relation of Lemma \ref{lem:delconset}, we have \[(-1)^{d-n}\sigma_{\mu,j}\bp{X_{G+e}}=(-1)^{d-n}\sigma_{\mu,j}\bp{X_{G}}-(-1)^{d-n}\sigma_{\mu,j}\bp{X_{G/e}}.\] To prove \eqref{main-thm}, it thus suffices to prove that \begin{align*}
&\quad \sum_{\substack{\wts(G,\gamma,S)=(\mu,j) \\ S\text{ admissible}}}(-1)^{j+\sum_{i=1}^{\ell}\mu_i-\sum_{i=1}^{\ell+1}\sink_i(\gamma)} \\
&= \sum_{\substack{\wts(G+e,\gamma,S)=(\mu,j) \\ S\text{ admissible}}}(-1)^{j+\sum_{i=1}^{\ell}\mu_i-\sum_{i=1}^{\ell+1}\sink_i(\gamma)}-\sum_{\substack{\wts(G/e,\gamma,S)=(\mu,j) \\ S\text{ admissible}}}(-1)^{j+\sum_{i=1}^{\ell}\mu_i-\sum_{i=1}^{\ell+1}\sink_i(\gamma)} \end{align*}
or equivalently, it suffices to show that \begin{align} &\quad \sum_{\substack{\wts(G,\gamma,S)=(\mu,j) \\ S\text{ admissible}}}(-1)^{\sum_{i=1}^{\ell+1}\sink_i(\gamma)} \nonumber \\ &=\sum_{\substack{\wts(G+e,\gamma,S)=(\mu,j) \\ S\text{ admissible}}}(-1)^{\sum_{i=1}^{\ell+1}\sink_i(\gamma)}-\sum_{\substack{\wts(G/e,\gamma,S)=(\mu,j) \\ S\text{ admissible}}}(-1)^{\sum_{i=1}^{\ell+1}\sink_i(\gamma)}. \label{induction} \end{align}
Given a set-weighted graph $H$ (with weight function suppressed), an orientation $\gamma$ on $H$ (not necessarily acyclic), and an $(\ell+1)$-step weight map $S$ of $H$, we define \[ T(H,\gamma,S)=\begin{cases} (-1)^{\sum_{i=1}^{\ell+1}\sink_i(\gamma)} & \text{ if }\gamma\text{ is acyclic and } S \text{ is }\gamma\text{-admissible} \\ 0 & \text{ otherwise.} \end{cases} \]
Hence, in order to prove \eqref{induction}, it suffices to show that \begin{equation} \label{induction-goal} \sum_{\substack{(G,\gamma,S) \\ \wts(G,S)=(\mu,j)}} T(G,\gamma,S) =\sum_{\substack{(G+e,\gamma,S) \\ \wts(G+e,S)=(\mu,j)}} T(G+e,\gamma,S) -\sum_{\substack{(G/e,\gamma,S) \\ \wts(G/e,S)=(\mu,j)}} T(G/e,\gamma,S), \end{equation} where the sums each range over all (not necessarily acyclic) orientations $\gamma$ of the corresponding graph, and all $(\ell+1)$-step weight maps $S$ of the corresponding graph, and not just the ones which are $\gamma$-admissible. This will allow us to more easily demonstrate the necessary bijections to show the equality holds.
Given an orientation $\gamma$ of $G$ and an $(\ell+1)$-step weight map $S$ of $G$, we define the following: \begin{itemize}
\item In $G+e$, let $\varphi_{v_1}(\gamma)$ be the orientation whose restriction to $G$ is $\gamma$, and the direction of $e$ is $v_1\to v_2$. Let $\varphi_{v_2}(\gamma)$ be the orientation whose restriction to $G$ is $\gamma$, and the direction of $e$ is $v_2\to v_1$. Let $\psi_{+}(S)$ be the $(\ell+1)$-step weight map on $G+e$ such that $\psi_{+}(S)=S$. Note that $\wts(G+e,\psi_+(S))=(\mu_1,\ldots,\mu_\ell,j)$.
\item In $G/e$, we let $\varphi_{*}(\gamma)$ be the orientation obtained from $\gamma$ by contracting the endpoints of $e$ to the single vertex $v^*$.
Let $\psi_{*}(S)$ be the $(\ell+1)$-step weight map on $G/e$ such that for all $v\in V(G/e)$ and for all $i \in \{1,\dots,\ell\}$,
\[\psi_{*}(S)(v)_i=\begin{cases}
S(v)_i & \text{ if } v\neq v^* \\
S(v_1)_i\cup S(v_2)_i & \text{ if } v=v^*.
\end{cases}\]
It is easy to verify that then $\wts(G/e,\psi_*(S))=(\mu_1,\ldots,\mu_\ell,j)$. \end{itemize}
We claim that \begin{equation} \label{ind-bijection} T(G,\gamma,S) = T\bp{G+e,\varphi_{v_1}(\gamma),\psi_+(S)} + T\bp{G+e,\varphi_{v_2}(\gamma),\psi_+(S)} - T\bp{G/e,\varphi_*(\gamma),\psi_*(S)} \end{equation} for all orientations $\gamma$ of $G$ and all $(\ell+1)$-step weight maps $S$ on $G$ with $\wts(G,S)=(\mu_1,\ldots,\mu_\ell,j)$. This is sufficient to prove the result, since \begin{itemize}
\item Every orientation of either $G+e$ or $G/e$ corresponds under the inverse of an appropriate $\varphi$ to a unique orientation of $G$.
\item Every $(\ell+1)$-step weight map of either $G+e$ or $G/e$ corresponds under the inverse of an appropriate $\psi$ to a unique $(\ell+1)$-step weight map of $G$. \end{itemize} so \eqref{ind-bijection} includes every term among the sums in \eqref{induction-goal} exactly once.
We will divide into cases.
\subsection*{Case 1. $\gamma$ is not an acyclic orientation.}
In this case, each of $\varphi_{v_1}(\gamma)$, $\varphi_{v_2}(\gamma)$, and $\varphi_{*}(\gamma)$ is also not acyclic, so every term of \eqref{ind-bijection} is equal to $0$, and equality holds.
For the remainder of the proof, we may assume that $\gamma$ is acyclic.
\subsection*{Case 2. $\gamma$ has a directed path from $v_1$ to $v_2$ or from $v_2$ to $v_1$.}
Note that since $\gamma$ is acyclic, there cannot be a directed path from $v_1$ to $v_2$ and a directed path from $v_2$ to $v_1$ at the same time. Assume without loss of generality that there exists a directed path from $v_1$ to $v_2$ in $\gamma$.
In this case, neither $\varphi_{v_2}(\gamma)$ nor $\varphi_{*}(\gamma)$ is acyclic, so \[T\bp{G+e,\varphi_{v_2}(\gamma),\psi_+(S)}=0\quad\text{and}\quad T\bp{G/e,\varphi_*(\gamma),\psi_*(S)}=0.\] We then consider the orientation $\varphi_{v_1}(\gamma)$ on $G+e$. Since there is no directed path from $v_2$ to $v_1$ in $\gamma$, $\varphi_{v_1}(\gamma)$ is acyclic. We claim that $S$ is $\gamma$-admissible if and only if $\psi_+(S)$ is $\varphi_{v_1}(\gamma)$-admissible. Indeed, by Lemma \ref{maximal}(c), we have that $S$ is $\gamma$-admissible if and only if $S$ is in $\gamma$-standard form. Let $v_1\in\Sink_i(\gamma)$ and let $v_2\in\Sink_j(\gamma)$ for some $i$ and $j$. Since there is a directed path from $v_1$ to $v_2$, we must have $i>j$. It follows that $\Sink_m(\gamma)=\Sink_m(\varphi_{v_1}(\gamma))$ for all $m$. Then since $\psi_+(S)=S$, we have that $S$ is in $\gamma$-standard form if and only if $\psi_+(S)$ is in $\varphi_{v_1}(\gamma)$-standard form, which happens if and only if $\psi_+(S)$ is $\varphi_{v_1}(\gamma)$-admissible in $G+e$ by Lemma \ref{maximal}(c). This means that for every $S$, \[T(G,\gamma,S)=T\bp{G+e,\varphi_{v_1}(\gamma),\psi_+(S)}.\] Therefore, \eqref{ind-bijection} holds in this case.
For the remainder of the proof, we may then assume that there does not exist a directed path from $v_1$ to $v_2$ or from $v_2$ to $v_1$. Therefore, we may assume that $\varphi_{v_1}(\gamma)$, $\varphi_{v_2}(\gamma)$, and $\varphi_{*}(\gamma)$ are all acyclic.
\subsection*{A Brief Aside.}
We now make some observations that will be important for future cases. Note that there exist $i$ and $j$ (possibly equal) so that $v_1 \in \Sink_i(\gamma)$ and $v_2 \in \Sink_j(\gamma)$. Equivalently, by Lemma \ref{lem:sinkpath}, the number of vertices in the longest directed path starting at $v_1$ is $i$, and likewise for $v_2$ and $j$.
At this point in our proof, we can assume that $v_1$ does not lie on any directed path from $v_2$, and $v_2$ does not lie on any directed path from $v_1$, so if $v^*$ is the vertex formed from contraction in $G/e$, and $v$ denotes any vertex other than $v_1,v_2,v^*$ such that $v \in \Sink_k(\gamma)$, it is simple to check that
\begin{obs}\label{obs:sinks} \phantom{a line.}
\begin{itemize}
\item[(a)] $v^* \in \Sink_{\max(i,j)}(\varphi_{*}(\gamma))$ and $v \in \Sink_{k'}(\varphi_{*}(\gamma))$ with $k \leq k' \leq k+|i-j|$.
\item[(b)] $v_1 \in \Sink_{\max(i,j+1)}(\varphi_{v_1}(\gamma))$, $v_2 \in \Sink_{j}(\varphi_{v_1}(\gamma))$, and $v \in \Sink_{k'}(\varphi_{v_1}(\gamma))$ with $k \leq k' \leq \max(k,k+j+1-i)$.
\item[(c)] $v_2 \in \Sink_{\max(i+1,j)}(\varphi_{v_2}(\gamma))$, $v_1 \in \Sink_{i}(\varphi_{v_2}(\gamma))$, and $v \in \Sink_{k'}(\varphi_{v_2}(\gamma))$ with $k \leq k' \leq \max(k,k+i+1-j)$.
\end{itemize}
\end{obs}
In particular, the sink level of any vertex does not decrease in passing from $\gamma$ to any of $\varphi_{v_1}(\gamma), \varphi_{v_2}(\gamma),$ or $\varphi_{*}(\gamma)$, but it may increase.
Let the type of $\gamma$ be $\lambda$, the type of $\varphi_{v_1}(\gamma)$ be $\lambda^{v_1}$, the type of $\varphi_{v_2}(\gamma)$ be $\lambda^{v_2}$, and the type of $\varphi_{*}(\gamma)$ be $\lambda^{*}$. From the above observations it follows that \begin{itemize}
\item For all $k \in \mathbb{N}$ and all $a \in \{v_1,v_2,*\}$ \begin{equation}\label{eq:sinkcontract} \lambda_1^a + \dots + \lambda_k^a \leq \lambda_1 + \dots + \lambda_k \end{equation} \item If $i < j$, then \begin{equation}\label{eq:sinkvstar} \lambda_1^* + \dots + \lambda_i^* < \lambda_1 + \dots + \lambda_i. \end{equation} and analogously if $j < i$. \item If $i < j+1$, then \begin{equation}\label{eq:sinkvstar1} \lambda_1^{v_1} + \dots + \lambda_i^{v_1} < \lambda_1 + \dots + \lambda_i. \end{equation} \item If $j < i+1$, then \begin{equation}\label{eq:sinkvstar2} \lambda_1^{v_2} + \dots + \lambda_j^{v_2} < \lambda_1 + \dots + \lambda_j. \end{equation} \end{itemize}
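As a quick sanity check of these sink-level observations, recall from Lemma \ref{lem:sinkpath} that the sink level of $v$ is the number of vertices on a longest directed path starting at $v$. The following sketch is illustrative only; the four-vertex digraph and all variable names are invented for the example, and are not taken from the text. It computes sink levels before and after adding an edge directed $v_1\to v_2$, confirming that no level decreases.

```python
from functools import lru_cache

def sink_levels(vertices, arcs):
    """Sink level of v = number of vertices on a longest directed path
    starting at v, so sinks themselves have level 1."""
    succ = {v: [b for (a, b) in arcs if a == v] for v in vertices}

    @lru_cache(maxsize=None)
    def level(v):
        return 1 + max((level(u) for u in succ[v]), default=0)

    return {v: level(v) for v in vertices}

# gamma: two disjoint directed paths 1 -> 2 and 3 -> 4.  Take v_1 = 2 and
# v_2 = 3: they are non-adjacent with no directed path between them, and
# have sink levels i = 1 and j = 2 respectively.
gamma = ((1, 2), (3, 4))
before = sink_levels((1, 2, 3, 4), gamma)

# varphi_{v_1}(gamma): add the new edge e directed v_1 -> v_2, i.e. 2 -> 3.
after = sink_levels((1, 2, 3, 4), gamma + ((2, 3),))

# Sink levels never decrease, and v_1 moves to level max(i, j+1) = 3.
assert all(after[v] >= before[v] for v in before)
print(before, after)
```

Here vertex $1$ rises from level $2$ to level $4$, within the stated bound $k \leq k' \leq \max(k,k+j+1-i)$.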
Returning to the main proof, the remaining cases will consider different possibilities for $i$ and $j$.
\subsection*{Case 3. At least one of $i$ and $j$ is an element of $\{1,\dots,\ell\}$, and $i \neq j$.}
Assume without loss of generality that $i \in \{1,\dots,\ell\}$.
We first consider the subcase where $j>i$. In this subcase, we claim that $\psi_{*}(S)$ is not $\varphi_{*}(\gamma)$-admissible in $G/e$.
Note that $v^*\in\Sink_j(\varphi_*(\gamma))$. Let $\lambda$ be the type of $\gamma$ and let $\lambda^*$ be the type of $\varphi_*(\gamma)$. Recall that $\mu$ partially dominates $\lambda$.
If $\mu_m=\lambda_m$ for all $m \in \{1,\dots,\ell\}$, then we have $\mu_1+\cdots+\mu_i=\lambda_1+\cdots+\lambda_i>\lambda^*_1+\cdots+\lambda^*_i$ by \eqref{eq:sinkvstar}. If instead there exists some $m \in \{1,\dots,\ell\}$ such that $\mu_1+\cdots+\mu_m>\lambda_1+\cdots+\lambda_m$, then we also have $\mu_1+\cdots+\mu_m>\lambda^*_1+\cdots+\lambda^*_m$ by \eqref{eq:sinkcontract}. Then by Lemma \ref{maximal}(b), we conclude that $\psi_*(S)$ cannot be $\varphi_*(\gamma)$-admissible in $G/e$.
We also claim that $\psi_+(S)$ is not $\varphi_{v_1}(\gamma)$-admissible. Note that $v_1\in\Sink_{j+1}(\varphi_{v_1}(\gamma))$. Therefore, since $\mu$ partially dominates the type of $\varphi_{v_1}(\gamma)$, following the same steps as above using the type of $\varphi_{v_1}(\gamma)$, we conclude that $\psi_{+}(S)$ cannot be $\varphi_{v_1}(\gamma)$-admissible in $G+e$.
Thus we have demonstrated that \[T\bp{G+e,\varphi_{v_1}(\gamma),\psi_+(S)}=0\quad\text{and}\quad T\bp{G/e,\varphi_*(\gamma),\psi_*(S)}=0.\]
It remains to consider the orientation $\varphi_{v_2}(\gamma)$ on $G+e$. Proceeding exactly as in Case 2, we can show that $S$ is $\gamma$-admissible if and only if $\psi_+(S)$ is $\varphi_{v_2}(\gamma)$-admissible. Hence, \[T(G,\gamma,S)=T\bp{G+e,\varphi_{v_2}(\gamma),\psi_+(S)},\] and \eqref{ind-bijection} holds in this case.
For the case where $j<i$, we can use an identical argument to show that $\psi_{*}(S)$ is not $\varphi_{*}(\gamma)$-admissible in $G/e$, and $\psi_+(S)$ is not $\varphi_{v_2}(\gamma)$-admissible in $G+e$. It is also easy to show that $S$ is $\gamma$-admissible in $G$ if and only if $\psi_{+}(S)$ is $\varphi_{v_1}(\gamma)$-admissible in $G+e$, so \eqref{ind-bijection} holds.
\subsection*{Case 4. $i$ and $j$ are elements of $\{1,\dots,\ell\}$ with $i=j$.}
We claim that $\psi_+(S)$ is not $\varphi_{v_1}(\gamma)$-admissible in $G+e$. Let $\lambda$ be the type of $\gamma$ and let $\lambda^{v_1}$ be the type of $\varphi_{v_1}(\gamma)$.
Then by \eqref{eq:sinkcontract} and \eqref{eq:sinkvstar1}, $\lambda^{v_1}_1 + \dots + \lambda^{v_1}_i<\lambda_1 + \dots + \lambda_i$ and $\lambda^{v_1}_1+\cdots+\lambda^{v_1}_m\leq \lambda_1+\cdots+\lambda_m$ for all $m$, so by the same argument as in Case 3 we see that $\psi_+(S)$ is not $\varphi_{v_1}(\gamma)$-admissible. By a symmetrical argument, also $\psi_+(S)$ is not $\varphi_{v_2}(\gamma)$-admissible. Therefore, we have
\[T\bp{G+e,\varphi_{v_1}(\gamma),\psi_+(S)}=0\quad\text{and}\quad T\bp{G+e,\varphi_{v_2}(\gamma),\psi_+(S)}=0.\] We then claim that $S$ is $\gamma$-admissible in $G$ if and only if $\psi_*(S)$ is $\varphi_*(\gamma)$-admissible in $G/e$. By Lemma \ref{maximal}(c), $S$ is $\gamma$-admissible if and only if $S$ is in $\gamma$-standard form. By Observation \ref{obs:sinks}(a), $\Sink_i(\varphi_*(\gamma))=(\Sink_i(\gamma)\setminus\{v_1,v_2\})\cup\{v^*\}$, and $\Sink_m(\varphi_*(\gamma))=\Sink_m(\gamma)$ for all $m\neq i$, so we have that $S$ is in $\gamma$-standard form if and only if $\psi_*(S)$ is in $\varphi_{*}(\gamma)$-standard form. Using Lemma \ref{maximal}(c) again, we know that $\psi_*(S)$ is in $\varphi_{*}(\gamma)$-standard form if and only if $\psi_*(S)$ is $\varphi_{*}(\gamma)$-admissible in $G/e$. Furthermore, when both are admissible we have $\bigl(\sum_{k=1}^{\ell+1} \sink_k(\gamma)\bigr)-1 = \sum_{k=1}^{\ell+1} \sink_k(\varphi_*(\gamma))$, so in any case \[T(G,\gamma,S)=-T\bp{G/e,\varphi_*(\gamma),\psi_*(S)},\] and thus \eqref{ind-bijection} holds.
From now on, we can assume that both $i$ and $j$ are larger than $\ell$.
\subsection*{Case 5. $i,j>\ell$, and it is \textit{not} the case that for all $v \in V(G)$ and $k \in \{1,\dots,\ell\}$, we have $S(v)_k=\{1,\ldots,w(v)\}$ if $v\in\Sink_k(\gamma)$ and $S(v)_k=\varnothing$ if $v\notin\Sink_k(\gamma)$.}
That is, $S$ does not satisfy the first part of the definition for being in $\gamma$-standard form.
By Lemma \ref{maximal}(c), we know that $S$ is not $\gamma$-admissible, so $T(G,\gamma,S)=0$.
Since $i,j>\ell$, neither $v_1$ nor $v_2$ lies on any directed path starting at a vertex $v \in \Sink_k(\gamma)$ with $k \leq \ell$. Therefore, for every $m \in \{1,\dots,\ell\}$, $\Sink_m(\gamma)=\Sink_m(\varphi_{v_1}(\gamma))=\Sink_m(\varphi_{v_2}(\gamma))=\Sink_m(\varphi_*(\gamma))$. It follows that $\psi_+(S)$ is in neither $\varphi_{v_1}(\gamma)$-standard form nor $\varphi_{v_2}(\gamma)$-standard form, and that $\psi_*(S)$ is not in $\varphi_*(\gamma)$-standard form. Hence by Lemma \ref{maximal}(c), $\psi_+(S)$ is neither $\varphi_{v_1}(\gamma)$-admissible nor $\varphi_{v_2}(\gamma)$-admissible, and $\psi_*(S)$ is not $\varphi_*(\gamma)$-admissible, so $T\bp{G+e,\varphi_{v_1}(\gamma),\psi_+(S)}=T\bp{G+e,\varphi_{v_2}(\gamma),\psi_+(S)}=T\bp{G/e,\varphi_{*}(\gamma),\psi_*(S)}=0$, and \eqref{ind-bijection} holds.
For all remaining cases, we may thus assume that for all $k \in \{1,\dots,\ell\}$, $S(v)_k=\{1,\ldots,w(v)\}$ if $v\in\Sink_k(\gamma)$ and $S(v)_k=\varnothing$ if $v\notin\Sink_k(\gamma)$, so $S$ satisfies the first portion of the definition for being in $\gamma$-standard form.
\subsection*{Case 6. $i=j=\ell+1$.}
That is, $v_1$ and $v_2$ are both elements of $\Sink_{\ell+1}(\gamma)$. If $S(v_1)_{\ell+1}=S(v_2)_{\ell+1}=\varnothing$, then using Observation \ref{obs:sinks} it is straightforward to check that all terms in \eqref{ind-bijection} will be $0$. Thus, we assume that at least one of $S(v_1)_{\ell+1}$ and $S(v_2)_{\ell+1}$ is non-empty. We proceed with subcases.
\subsubsection*{Case 6.1. Exactly one of $S(v_1)_{\ell+1}$ and $S(v_2)_{\ell+1}$ is non-empty.}
Suppose first that $S(v_1)_{\ell+1}\neq \varnothing$ and $S(v_2)_{\ell+1}=\varnothing$. Then $S$ is not in $\gamma$-standard form, so $T(G,\gamma,S)=0$. Also, $\psi_{+}(S)$ is not in $\varphi_{v_1}(\gamma)$-standard form, so $T\bp{G+e,\varphi_{v_1}(\gamma),\psi_+(S)}=0$.
Now, if for all $v \in V(G) \setminus \{v_1,v_2\}$, we have $v\in\Sink_{\ell+1}(\gamma)$ if and only if $S(v)_{\ell+1}\neq \varnothing$, then $\psi_+(S)$ is in $\varphi_{v_2}(\gamma)$-standard form and $\psi_*(S)$ is in $\varphi_*(\gamma)$-standard form, so $\psi_+(S)$ is $\varphi_{v_2}(\gamma)$-admissible and $\psi_*(S)$ is $\varphi_{*}(\gamma)$-admissible. Otherwise, $\psi_+(S)$ is not in $\varphi_{v_2}(\gamma)$-standard form and $\psi_*(S)$ is not in $\varphi_*(\gamma)$-standard form, and it follows that $\psi_+(S)$ is not $\varphi_{v_2}(\gamma)$-admissible and $\psi_*(S)$ is not $\varphi_{*}(\gamma)$-admissible. Either way, we have $T\bp{G+e,\varphi_{v_2}(\gamma),\psi_+(S)}=T\bp{G/e,\varphi_{*}(\gamma),\psi_*(S)}$, and \eqref{ind-bijection} holds.
A symmetrical argument shows that if $S(v_1)_{\ell+1}=\varnothing$ and $S(v_2)_{\ell+1}\neq\varnothing$, then \eqref{ind-bijection} also holds.
\subsubsection*{Case 6.2. Both $S(v_1)_{\ell+1}$ and $S(v_2)_{\ell+1}$ are non-empty.}
In this case $\psi_+(S)$ is in neither $\varphi_{v_1}(\gamma)$-standard form nor $\varphi_{v_2}(\gamma)$-standard form, so $T\bp{G+e,\varphi_{v_1}(\gamma),\psi_+(S)}=T\bp{G+e,\varphi_{v_2}(\gamma),\psi_+(S)}=0$.
If for all $v \in V(G)$, we have $S(v)_{\ell+1}\neq \varnothing$ if and only if $v\in\Sink_{\ell+1}(\gamma)$, then $S$ is in $\gamma$-standard form and thus is $\gamma$-admissible, and also $\psi_*(S)$ is $\varphi_*(\gamma)$-admissible. Since, when both are admissible, $\bigl(\sum_{k=1}^{\ell+1} \sink_k(\gamma)\bigr)-1 = \sum_{k=1}^{\ell+1} \sink_k(\varphi_*(\gamma))$, we have $T(G,\gamma,S)=-T\bp{G/e,\varphi_*(\gamma),\psi_*(S)}$.
Otherwise, $S$ is not in $\gamma$-standard form and $\psi_*(S)$ is not in $\varphi_*(\gamma)$-standard form, so $S$ is not $\gamma$-admissible and $\psi_*(S)$ is not $\varphi_*(\gamma)$-admissible. Thus, $T(G,\gamma,S)=T\bp{G/e,\varphi_*(\gamma),\psi_*(S)}=0$.
In either case, \eqref{ind-bijection} holds.
\subsection*{Case 7. $\{i,j\} = \{\ell+1, k\}$ with $k > \ell+1$.}
We assume without loss of generality that $i = \ell+1$ and $j > \ell+1$. First note that if $S(v_2)_{\ell+1}\neq \varnothing$, using Observation \ref{obs:sinks} it is straightforward to verify that $S$ is not in $\gamma$-standard form, $\psi_+(S)$ is not in $\varphi_{v_1}(\gamma)$-standard form or in $\varphi_{v_2}(\gamma)$-standard form, and $\psi_*(S)$ is not in $\varphi_*(\gamma)$-standard form, so by Lemma \ref{maximal}(c) all terms in \eqref{ind-bijection} are $0$.
We may thus assume that $S(v_2)_{\ell+1}=\varnothing$, and divide into subcases.
\subsubsection*{Case 7.1. $S(v_1)_{\ell+1}=\varnothing$.}
Then $S$ is not in $\gamma$-standard form, so $T(G,\gamma,S)=0$. We also have that $\psi_+(S)$ is not in $\varphi_{v_2}(\gamma)$-standard form, so $T\bp{G+e,\varphi_{v_2}(\gamma),\psi_+(S)}=0$.
Suppose first that for all $v \in V(G) \backslash \{v_1\}$, we have $S(v)_{\ell+1}\neq \varnothing$ if and only if $v\in\Sink_{\ell+1}(\gamma)$. For $w \in \Sink_m(\gamma)$ with $m \leq \ell+1$, no directed path in $\gamma$ starting at $w$ contains $v_1$ or $v_2$, so $w \in \Sink_{m}(\varphi_*(\gamma))$ and $w \in \Sink_m(\varphi_{v_1}(\gamma))$. If $w \in \Sink_{m}(\gamma)$ with $m > \ell+1$, then by Observation \ref{obs:sinks}, $w \in \Sink_{m^{*}}(\varphi_*(\gamma))$ and $w \in \Sink_{m^{v_1}}(\varphi_{v_1}(\gamma))$ for some $m^{*}, m^{v_1} \geq m$. It follows that $\psi_*(S)$ is in $\varphi_*(\gamma)$-standard form, so $\psi_*(S)$ is $\varphi_*(\gamma)$-admissible. Likewise, $\psi_+(S)$ is $\varphi_{v_1}(\gamma)$-admissible. Hence, $T\bp{G/e,\varphi_*(\gamma),\psi_*(S)}=T\bp{G+e,\varphi_{v_1}(\gamma),\psi_+(S)}$, and \eqref{ind-bijection} holds.
Otherwise, using a similar argument we see that $\psi_*(S)$ is not $\varphi_*(\gamma)$-admissible and $\psi_+(S)$ is not $\varphi_{v_1}(\gamma)$-admissible, so all terms in \eqref{ind-bijection} are $0$, and \eqref{ind-bijection} holds.
\subsubsection*{Case 7.2. $S(v_1)_{\ell+1}\neq \varnothing$.}
We see that $\psi_+(S)$ is not in $\varphi_{v_1}(\gamma)$-standard form, so $\psi_+(S)$ is not $\varphi_{v_1}(\gamma)$-admissible, and $T\bp{G+e,\varphi_{v_1}(\gamma),\psi_+(S)}=0$. Additionally, $\psi_*(S)$ is not in $\varphi_*(\gamma)$-standard form, so $T\bp{G/e,\varphi_*(\gamma),\psi_*(S)}=0$.
Using an argument analogous to that in Case 7.1, if for all $v \in V(G)$ we have $S(v)_{\ell+1}\neq \varnothing$ if and only if $v\in\Sink_{\ell+1}(\gamma)$, then $S$ is $\gamma$-admissible and $\psi_+(S)$ is $\varphi_{v_2}(\gamma)$-admissible. Otherwise, $S$ is not $\gamma$-admissible and $\psi_+(S)$ is not $\varphi_{v_2}(\gamma)$-admissible. Either way $T(G,\gamma,S)=T\bp{G+e,\varphi_{v_2}(\gamma),\psi_+(S)}$, and \eqref{ind-bijection} holds.
\subsection*{Case 8. $i > \ell+1$ and $j > \ell+1$.}
It is straightforward to check using Observation \ref{obs:sinks} that if one of $S(v_1)_{\ell+1}$ and $S(v_2)_{\ell+1}$ is non-empty, then all terms in \eqref{ind-bijection} will be $0$. We thus assume $S(v_1)_{\ell+1}=S(v_2)_{\ell+1}=\varnothing$.
If for all $v \in V(G) \setminus \{v_1,v_2\}$ we have $S(v)_{\ell+1}\neq \varnothing$ if and only if $v\in\Sink_{\ell+1}(\gamma)$, then $S$ is $\gamma$-admissible, and using an argument analogous to that in Case 7.1, it follows that $\psi_+(S)$ is both $\varphi_{v_1}(\gamma)$-admissible and $\varphi_{v_2}(\gamma)$-admissible, and $\psi_*(S)$ is $\varphi_*(\gamma)$-admissible. Since $v_1$, $v_2$, and $v^*$ all have sink level greater than $\ell+1$, admissibility forces $\Sink_k(\gamma)=\Sink_k(\varphi_{v_1}(\gamma))=\Sink_k(\varphi_{v_2}(\gamma))=\Sink_k(\varphi_*(\gamma))$ for all $k\in\{1,\dots,\ell+1\}$, so $$T(G,\gamma,S)=T\bp{G+e,\varphi_{v_1}(\gamma),\psi_+(S)}=T\bp{G+e,\varphi_{v_2}(\gamma),\psi_+(S)}=T\bp{G/e,\varphi_{*}(\gamma),\psi_*(S)},$$ and since the right-hand side of \eqref{ind-bijection} then has the form $T+T-T=T(G,\gamma,S)$, \eqref{ind-bijection} holds.
Otherwise, we may check that all terms in \eqref{ind-bijection} are $0$, so \eqref{ind-bijection} holds.
It is easy to check that Cases 1 through 8 include all possibilities. Thus \eqref{ind-bijection} holds for all orientations $\gamma$ on $G$ and all $(\ell+1)$-step weight maps $S$ of $G$, and we are done.
\end{proof}
\section{A Conjectured Strengthening of Theorem \ref{thm:stanley3.4}}
In this section, we conjecture a stronger version of Theorem \ref{thm:stanley3.4} for the case when $\mu$ has one part, with the expectation that a proof of this strengthening would likely extend to prove a corresponding theorem for any $\mu$.
Due to the strength of Lemma \ref{maximal}(c), whenever we deleted mini-vertices from a vertex in the previous section, we could assume that all of its mini-vertices were deleted, except possibly in the last step.
In general, we would like to be able to consider partial deletions from a set-weighted vertex throughout the process, potentially allowing for a more general construction and theorem. We will outline one way to do so.
In this section we will first introduce the conjecture. We will then provide supporting numerical evidence that also serves to illustrate the idea. In later sections, we will demonstrate the particular relevance of the conjecture to unweighted claw-free graphs, and provide supporting theoretical evidence in the form of proofs of some nontrivial special cases.
\subsection{Introducing the Conjecture}
We begin with some definitions that extend previously given ones.
\begin{definition} Let $\ell\in\mathbb{N}$ and let $(G,\omega)$ be a set-weighted graph. A \textbf{generalized $\ell$-step weight map} of $(G,\omega)$ is a function $S:\mathcal{P}(V(G))\to (\mathcal{P}(\mathbb{N}))^{\ell}$ such that for all nonempty $A \subseteq V(G)$, we have \[\bigsqcup_{i=1}^{\ell}S(A)_i\subseteq \omega(A)\] where $\omega(A) = \bigsqcup_{v \in A} \omega(v)$.
We define the \textbf{generalized $\ell$-step weight sequence} of a generalized $\ell$-step weight map
$S$ to be $\wts(G,S)=(|S_1|,\ldots,|S_\ell|)$, where for all $i \in \{1,\dots,\ell\}$, we have
\[|S_i|=\sum_{A\subseteq V(G)}|S(A)_i|.\]
\end{definition}
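To make the definition concrete, here is a small illustrative encoding (the two-vertex graph, the weights, and all names are invented for this sketch and do not come from the text): a generalized $2$-step weight map stored as a dictionary from vertex subsets to pairs of mini-vertex sets, together with its generalized weight sequence.

```python
# Illustrative only: a hypothetical set-weighted graph with vertices "u", "v"
# and disjoint set-weights omega(u) = {1,2,3}, omega(v) = {4,5}.
omega = {"u": {1, 2, 3}, "v": {4, 5}}

# S assigns to each nonempty subset A of vertices a pair (S(A)_1, S(A)_2)
# of disjoint subsets of omega(A).
S = {
    frozenset({"u"}): ({1, 2}, {3}),
    frozenset({"v"}): (set(), {4}),
    frozenset({"u", "v"}): (set(), set()),
}

def wts(S, ell=2):
    """Generalized ell-step weight sequence (|S_1|, ..., |S_ell|)."""
    return tuple(sum(len(S[A][i]) for A in S) for i in range(ell))

# Sanity checks matching the definition's constraints.
for A, parts in S.items():
    union = set().union(*(omega[v] for v in A))
    assert parts[0] <= union and parts[1] <= union and not parts[0] & parts[1]

print(wts(S))  # (2, 2)
```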
Thus, instead of assigning weight sets only to individual vertices, we now allow every subset of vertices to carry its own weight sets.
Our conjecture is for $\mu$ with one part, so we are looking at the interplay between acyclic orientations and generalized $2$-step weight maps.
\begin{definition}\label{def:drop} Let $(G,\omega)$ be a set-weighted graph, and let $\gamma$ be an acyclic orientation of $G$.
For each vertex $v \in V(G)$, let $\omega(v)_j$ be the $j^{th}$-smallest element of $\omega(v)$ for $j \in \{1,\dots,w(v)\}$. Define $C_v$ to be the oriented (cyclic) graph with vertex set $\omega(v)$ and directed edge set $ \bigsqcup_{j=1}^{w(v)-1} \{\omega(v)_j\omega(v)_{j+1}\} \sqcup \{\omega(v)_{w(v)}\omega(v)_1\}$.
A generalized $2$-step weight map $S$ of $(G,\omega)$ is \textbf{$\gamma$-admissible} if \begin{itemize}
\item[(1)] Across all nonempty $A \subseteq V(G)$, $S(A)_1\neq \varnothing$ if and only if $A=\{v\}$ for some $v\in\Sink_1(\gamma)$.
In this case, let $D=\big\{v\in \Sink_1(\gamma):S(\{v\})_1=\omega(v)\big\}$ (the set of sinks of $\gamma$ which are annihilated by $S$).
Let $\Sink_2'(\gamma,S)=\Sink_1(\gamma|_{V(G)-D})\setminus\Sink_1(\gamma)$. Note that $\Sink_2'(\gamma,S)\subseteq\Sink_2(\gamma)$ is the set of second-level sinks of $\gamma$ that are ``uncovered'' by the removal of the vertices of $D$.
Let $M_1, \dots, M_k$ be the connected components of $G[D \cup \Sink_2'(\gamma,S)]$.
\item[(2)] For all $v\in\Sink_1(\gamma)\setminus D$, $S(\{v\})_2$ is the set of sinks in $C_v-S(\{v\})_1$.
\item[(3)] For each $i \in \{1,\dots,k\}$, there is exactly one nonempty $B_i\subseteq(\Sink_2'(\gamma,S) \cap M_i)$ such that $S(B_i)_2\neq \varnothing$. Define \[D_{B_i}=\{v\in (D \cap M_i):v\text{ is adjacent to some vertex in }B_i\}.\] Let $T_i = \omega(B_i)$ if $w(B_i) \leq w(D_{B_i})$, and otherwise let $T_i$ consist of the $w(D_{B_i})$ smallest elements of $\omega(B_i)$. Then \[\varnothing \subsetneq S(B_i)_2\subseteq T_i.\]
\item[(4)] For all remaining $A\subseteq V(G)$ such that $S(A)_2$ has not yet been defined, $S(A)_2=\varnothing$. \end{itemize} \end{definition}
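Condition (2) can be made concrete: deleting $S(\{v\})_1$ from the directed cycle $C_v$ leaves a well-defined set of sinks. The illustrative sketch below (not part of the definition; mini-vertices are relabeled $0,\dots,w(v)-1$ for convenience) computes these sinks and tallies how many survive each possible deletion of three mini-vertices from a weight-$5$ vertex.

```python
from itertools import combinations
from collections import Counter

def cycle_sinks_after_deletion(n, removed):
    """Sinks of the directed cycle 0 -> 1 -> ... -> n-1 -> 0 after deleting
    the vertices in `removed`: a survivor v is a sink iff (v+1) % n is gone."""
    return [v for v in range(n) if v not in removed and (v + 1) % n in removed]

# Tally surviving sink counts over all deletions of 3 of the 5 mini-vertices.
counts = Counter(len(cycle_sinks_after_deletion(5, set(S)))
                 for S in combinations(range(5), 3))
# {1: 5, 2: 5}: the 5 consecutive triples leave one sink; the rest leave two.
print(dict(counts))
```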
There are two substantial additions made here to the definition of $\gamma$-admissibility.
First, each weighted vertex is effectively replaced by a directed cycle for the purpose of determining second-level sinks ``within" a weighted vertex; that this is the correct way to do so is implied by the proof of Theorem \ref{no-edge-thm} in Section 4.3.
Second, when vertices are entirely removed as first-level sinks, we give a new process for choosing second-level sinks from among the newly uncovered vertices. Not only must we choose a subset of the revealed vertices, but we may see a \emph{weight-drop} phenomenon where if the weight of this subset is greater than the weight of its annihilated neighbors, we must drop the weight permitted for second-level sinks to match the smaller value. The method of choosing a subset of revealed vertices is suggested by numerical data and may be easily seen to agree with Theorem \ref{thm:main} where both apply. That the weight-drop phenomenon is necessary is implied by the proof of Theorem \ref{one-edge-thm} in Section 4.3.
For the original definition of maximal $\mu$ with respect to $(G,\omega)$ used in Theorem \ref{thm:main}, we never required this complexity. However, our stronger conjecture would allow for a broader range of viable $\mu$ in the case where $\mu$ has one part (so is an integer).
\begin{definition}\label{def:sallow}
Given $S,T$ disjoint subsets of the vertex set of a set-weighted graph $(G,\omega)$, let $M_1, \dots, M_k$ be the connected components of $G[S \cup T]$. For $i \in \{1,\dots,k\}$, let $S_i = S \cap V(M_i)$ and $T_i = T \cap V(M_i)$.
Let $\mu$ be a positive integer. We say $\mu$ is \textbf{$s$-allowable} in $(G,\omega)$ if for all disjoint stable sets $S, T \subseteq V(G)$ such that \begin{itemize}
\item For all $v\in S$ there exists some $u\in T$ with $uv\in E(G)$, and
\item For all $u\in T$ there exists some $v\in S$ with $uv\in E(G)$, \end{itemize} it holds that for any choice of positive integers $\mu_1,\dots,\mu_k$ such that $\mu_1+\dots+\mu_k = \mu$, we have that for all $i \in \{1,\dots,k\}$, $\mu_i\leq\min\{w(S_i),w(T_i)\}$ or $\mu_i\geq\max\{w(S_i),w(T_i)\}$.
\end{definition} In particular, $\mu$ is $s$-allowable if for all such $S$ and $T$, it is the case that $\mu \leq \min\{w(S),w(T)\}$ or $\mu \geq \max\{w(S),w(T)\}$; however, it is possible for $\mu$ to be $s$-allowable without satisfying this stricter condition.
Note that under the previous definition, a single integer $\mu$ is maximal if and only if $\mu$ is greater than or equal to the weight of the largest stable set of $(G,\omega)$. $s$-allowability is a much more flexible condition; for instance, $\mu$ smaller than the size of the smallest vertex weight of $(G,\omega)$ is always $s$-allowable, and in particular $\mu=1$ is always $s$-allowable. On the other end, it is possible for $\mu$ smaller than the weight of the largest stable set of $(G,\omega)$ to be $s$-allowable if this largest stable set does not meet the criteria above with respect to some other disjoint stable set of $(G,\omega)$.
We require one more piece, which is a generalization of the sign of a pair $(\gamma, S)$.
\begin{definition} Let $(G,\omega)$ be a set-weighted graph. Let $\gamma$ be an acyclic orientation on $G$, and let $S$ be a $\gamma$-admissible generalized $2$-step weight map on $G$. Let $I=\{A\subseteq V(G):\text{at least one of }S(A)_i\text{ is non-empty for }i\in\{1,2\}\}$. For each $A\in I$, let $i_A$ be the smallest index $i$ such that $S(A)_{i}\neq \varnothing$. We define the \textbf{sign} of $(\gamma,S)$ to be
\[\sgn(\gamma,S):= (-1)^{\sum_{A\in I}|S(A)_{i_A}|-|A|}.\] \end{definition}
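For instance, with the same dictionary encoding as before (the sets and names below are invented toy data, assumed to come from a valid pair $(\gamma,S)$), the sign can be computed directly from the definition:

```python
def sgn(S):
    """Sign of (gamma, S), with S mapping frozensets A to pairs
    (S(A)_1, S(A)_2).  Only subsets with some nonempty part contribute;
    each contributes parity |S(A)_{i_A}| - |A| for the first nonempty
    index i_A."""
    total = 0
    for A, parts in S.items():
        for p in parts:      # find the first nonempty part, if any
            if p:
                total += len(p) - len(A)
                break
    return (-1) ** total

# Hypothetical data: A = {u} contributes |{1,2,3}| - 1 = 2,
# and A = {x, y} contributes |{7}| - 2 = -1, for total parity 1.
S = {frozenset({"u"}): ({1, 2, 3}, set()),
     frozenset({"x", "y"}): (set(), {7})}
print(sgn(S))  # -1
```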
Thus, the sign generated by a given $A$ corresponds to the parity of the number of unused mini-vertices of $A$. It is straightforward to verify that this definition of the sign of $(\gamma,S)$ agrees with that in Theorem \ref{thm:main} in the cases where $A$ is a single vertex. We now state our main conjecture.
\begin{conj} \label{conjecture} Let $(G,\omega)$ be a set-weighted graph with $n$ vertices and total weight $d$. Write $X_{(G,\omega)}=\sum_{\lambda\vdash d}c_{\lambda}e_{\lambda}$. Let $\mu \leq d$ be an integer (viewed as a partition with a single part) that is $s$-allowable in $(G,\omega)$. Fix $j \in \{0, \dots, d-\mu\}$, where we can have $j=0$ only when $\mu=d$. Then \begin{equation} \label{conjecture-equation} \sigma_{\mu,j}\bp{X_{(G,\omega)}}=(-1)^{d-n}\sum_{\substack{\swts(\gamma,S)=(\mu,j) \\ S\text{ admissible}}}\sgn(\gamma,S), \end{equation} summed over all acyclic orientations $\gamma$ of $G$ and all $\gamma$-admissible generalized $2$-step weight maps $S$ of $G$ such that $\swts(\gamma,S)=(\mu,j)$. \end{conj}
We now provide a concrete example to illustrate clearly what Conjecture \ref{conjecture} claims.
Consider $P_{5,7,5}$, a set-weighted three-vertex path with vertices $v_1,v_2,v_3$, edges $v_1v_2$ and $v_2v_3$, $\omega(v_1) = \{1,\dots,5\}$, $\omega(v_2) = \{6,\dots,12\}$, and $\omega(v_3) = \{13,\dots,17\}$. It is easy to verify using the $p$-basis expansion of vertex-weighted chromatic symmetric functions \cite[Lemma 3]{delcon} that $X_{(P_{5,7,5},\, \omega)} = p_{755}-2p_{(12)5}+p_{(17)}$ (where integers of more than one digit are enclosed in parentheses for clarity). Using SageMath to convert this to the $e$-basis, we can determine $\sigma_{\mu,j}(X_{(P_{5,7,5},\, \omega)})$ for any desired integers $\mu \geq 1$ and $j \geq 0$.
\begin{figure}
\caption{The graph $P_{5,7,5}$ with vertex weights given.}
\label{fig:p232}
\end{figure}
For example, we may compute that $\sigma_{7,3}(X_{(P_{5,7,5},\, \omega)}) = \sigma_{7,3}(20e_{6431111}+20e_{6521111}+70e_{7331111}+140e_{7421111}-210e_{8321111}-105e_{9221111}) = -65$.
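The final arithmetic step can be checked mechanically; the coefficients below are transcribed from the displayed $e$-expansion (this is only a sanity check of the sum, not a derivation of the expansion itself).

```python
# Signed coefficients of the e-basis terms in the displayed expansion
# of sigma_{7,3}(X_{(P_{5,7,5}, omega)}).
coeffs = [20, 20, 70, 140, -210, -105]
total = sum(coeffs)
print(total)  # -65
```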
Note that the only pairs $(S,T)$ of stable sets satisfying the conditions outlined in Definition \ref{def:sallow} on $s$-allowability are $\{S,T\} = \{\{v_i\},\{v_2\}\}$ for $i \in \{1,3\}$ and $\{S,T\} = \{\{v_1,v_3\},\{v_2\}\}$, and all of these cases result in $G[S \sqcup T]$ being connected. In the former cases $\{w(S),w(T)\} = \{5,7\}$ and in the latter case $\{w(S),w(T)\} = \{7,10\}$, so a positive integer $\mu$ is $s$-allowable for $P_{5,7,5}$ if and only if $\mu \notin \{6,8,9\}$. In particular, $\mu = 7$ is $s$-allowable.
Now, we determine what Conjecture \ref{conjecture} predicts for the value of $\sigma_{7,3}(X_{(P_{5,7,5},\, \omega)})$. Note that this graph has weight $17$ and $3$ vertices, so our outer sign is $(-1)^{17-3} = 1$, so Conjecture \ref{conjecture} predicts that \begin{equation}\label{eq:sigma63}
\sum_{\substack{\swts(\gamma,S)=(7,3) \\ S\text{ admissible}}}\sgn(\gamma,S) = \sigma_{7,3}(X_{(P_{5,7,5},\, \omega)}) = -65. \end{equation}
There are four acyclic orientations of $P_{5,7,5}$; three have unique sinks, and one has two sinks. The two orientations $\gamma_1$ and $\gamma_3$ with unique sinks $v_1$ and $v_3$, respectively, do not admit any admissible $S$, as $w(\Sink_1(\gamma_1)) = w(\Sink_1(\gamma_3)) = 5$, and we require $\sum_{v \in V(P_{5,7,5})} |S_1(v)| = 7$.
Of the other orientations, let us first consider the orientation $\gamma_{1,3}$ in which $v_1$ and $v_3$ are both sinks. Then we must have $|S_1(v_1) \cup S_1(v_3)| = 7$. We consider all possible ways this can occur: \begin{itemize}
\item $|S_1(v_1)| = 2$ and $|S_1(v_3)| = 5$. Then the only $A \subseteq V(P_{5,7,5})$ that may have $S_2(A) \neq \varnothing$ is $A = \{v_1\}$. Furthermore, $S_2(A)$ must be a subset of the sinks of a directed five-vertex cycle with two vertices deleted. However, there are at most two sinks in such a graph, contradicting that we require $|S_2| = 3$. Thus, in this case no valid $S$ is possible.
\item $|S_1(v_1)| = 5$ and $|S_1(v_3)| = 2$. This is identical to the above case.
\item $|S_1(v_1)| = 3$ and $|S_1(v_3)| = 4$. Then we require $S_2(v_3) = \omega(v_3) \setminus S_1(v_3)$ and $|S_2(v_1)| = 2$. Furthermore, if $C_{v_1}$ is the directed cycle with vertex set $\{1,2,3,4,5\}$ and edge set \\ $\{1 \rightarrow 2, 2 \rightarrow 3, 3 \rightarrow 4, 4 \rightarrow 5, 5 \rightarrow 1\}$, $S_2(v_1)$ must be a subset of the sinks of $C_{v_1} \setminus S_1(v_1)$. We split into subcases based on $S_1(v_1)$.
\begin{itemize}
\item $S_1(v_1)$ consists of three consecutive vertices of $C_{v_1}$. Then there is only one sink in $C_{v_1} \setminus S_1(v_1)$, so no valid $S$ is possible.
\item In all other cases, there are exactly two sinks of $C_{v_1} \setminus S_1(v_1)$, so $S_2(v_1)$ is uniquely determined.
\end{itemize}
There are $\binom{5}{4} = 5$ ways to choose $S_1(v_3)$ and $S_2(v_3)$, and it is straightforward to verify that there are $5$ valid choices of $S_1(v_1)$, giving $25$ valid choices of $S$. Furthermore, note that all valid $S$ have sign $(-1)^{3-1} \cdot (-1)^{4-1} = -1$, so this subcase contributes $-25$ to the left-hand side of \eqref{eq:sigma63}.
\item $|S_1(v_1)| = 4$ and $|S_1(v_3)| = 3$. This is identical to the previous case, contributing $-25$ to the left-hand side of \eqref{eq:sigma63}.
\end{itemize} Thus, terms in the sum with $\gamma = \gamma_{1,3}$ contribute $-50$.
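The sink counts used in the subcases above are easy to confirm by exhaustive enumeration. The following Python sketch (our own bookkeeping, written $0$-indexed; not part of the formal argument) checks that among the $\binom{5}{3}=10$ three-element subsets of the directed five-cycle, the $5$ consecutive triples leave a single sink and the remaining $5$ leave exactly two sinks.

```python
from itertools import combinations

# Directed 5-cycle 1 -> 2 -> 3 -> 4 -> 5 -> 1, written 0-indexed.
succ = {i: (i + 1) % 5 for i in range(5)}

one_sink = two_sinks = 0
for removed in combinations(range(5), 3):
    rest = [v for v in range(5) if v not in removed]
    # a surviving vertex is a sink exactly when its successor was deleted
    sinks = [v for v in rest if succ[v] in removed]
    if len(sinks) == 1:
        one_sink += 1
    else:
        two_sinks += 1

# the 5 consecutive triples leave adjacent survivors (one sink);
# the other 5 triples leave non-adjacent survivors (two sinks)
assert one_sink == 5 and two_sinks == 5
```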
It remains to consider the orientation $\gamma_2$ with unique sink $v_2$. In this case, we must choose $S_1(v_2) = \omega(v_2)$. Then $\Sink_2'(\gamma_2,S) = \{v_1,v_3\}$, so there is a unique nonempty subset $B \subseteq \{v_1,v_3\}$ with $S_2(B) \neq \varnothing$. For each possible choice of $B$ we have $D_B = \{v_2\}$. We consider each case: \begin{itemize}
\item $B = \{v_1\}$. Then following the notation in Definition \ref{def:drop}, since $5 = w(B) \leq w(D_B) = 7$, we have $T = \omega(v_1) = \{1,2,3,4,5\}$, and we must choose $S_2(B)$ to be a three-element subset of $T$, which may be done in $\binom{5}{3} = 10$ ways. Since $|B| = 1$ and $|S_2(B)| = 3$, the sign of all such $S$ is $(-1)^{7-1}(-1)^{3-1} = 1$, so this contributes $10$ to the left-hand side of \eqref{eq:sigma63}.
\item Similar to the above, if $B = \{v_3\}$, it is easy to verify that we get a contribution of $10$ across all valid $S$.
\item If $B = \{v_1,v_3\}$, then this time since $10 = w(B) > w(D_B) = 7$, our choice of $T$ consists only of the seven smallest elements of $\omega(B)$, so $T = \{1,2,3,4,5,13,14\}$. We must select $S_2(B)$ to be a three-element subset of $T$, and there are $\binom{7}{3} = 35$ choices. Each such $S$ has sign $(-1)^{7-1}(-1)^{3-2} = -1$, so this contributes $-35$ to the left-hand side of \eqref{eq:sigma63}. \end{itemize}
Adding all cases together, Conjecture \ref{conjecture} correctly determines that $$\sigma_{7,3}(X_{(P_{5,7,5},\, \omega)}) = -25-25+10+10-35 = -65.$$
In addition to this example, we have tested the conjecture on a variety of weighted graphs for many choices of $s$-allowable $\mu$ and $j$.
\subsection{Conjecture \ref{conjecture} on Unweighted Claw-Free Graphs}
In the case of set-weighted graphs $(G,\omega)$ in which each vertex has weight $1$ (equivalently vertex-labelled graphs, the case most commonly studied in the literature), Conjecture \ref{conjecture} has interesting implications. We refer to such graphs as \emph{unweighted graphs}.
Note from Definition \ref{def:sallow} that for any set-weighted graph $(G,\omega)$, \emph{every} $\mu$ is $s$-allowable if for every connected bipartite induced subgraph of $G$ with bipartition $S \sqcup T$, we have $|w(S)-w(T)| \leq 1$. In the case that $G$ is unweighted, this simplifies to saying that for all such induced subgraphs, $||S|-|T|| \leq 1$. We will show that this fact plays very nicely with claw-free graphs.
The \emph{claw} is the graph $Y$ with $V(Y) = \{1,2,3,4\}$ and $E(Y) = \{12, 13, 14\}$. A graph $G$ is said to be \emph{claw-free} if it has no induced subgraph isomorphic to the claw.
\begin{lemma}\label{lem:clawfree}
Let $G$ be a claw-free graph, and suppose that $S, T$ are disjoint stable sets of $G$ such that $G[S \sqcup T]$ is a connected bipartite graph. Then $||S|-|T|| \leq 1$.
\end{lemma}
\begin{proof}
Suppose otherwise, that without loss of generality $|S| > |T|+1$. Since $G[S \sqcup T]$ is connected, it has a spanning tree $U$. No vertex of $U$ has degree at least $3$, since then this vertex and any three of its neighbours would form an induced claw in $G[S \sqcup T]$. Thus, every vertex of $U$ has degree exactly $1$ or $2$. In this case, it is easy to verify that $U$ is a path (for instance, start at a leaf vertex and traverse the tree)\footnote{In fact, it is straightforward to show from here that $G[S \sqcup T]$ is either $U$ itself or a cycle formed by adding an edge to $U$, but we do not need this here.}. But the path $U$ in the bipartite graph $G[S \sqcup T]$ uses at most one more vertex from $S$ than from $T$, contradicting that $U$ is a spanning tree since $|S| > |T|+1$.
\end{proof}
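Lemma \ref{lem:clawfree} can also be confirmed by brute force on small graphs. The following Python sketch (function names are ours; not part of the formal argument) enumerates every graph on a few vertices, discards those containing an induced claw, and checks that every pair of disjoint nonempty stable sets inducing a connected subgraph is near-balanced.

```python
from itertools import combinations

def is_claw_free(adj):
    """No vertex has three pairwise non-adjacent neighbours."""
    for v in adj:
        for x, y, z in combinations(sorted(adj[v]), 3):
            if y not in adj[x] and z not in adj[x] and z not in adj[y]:
                return False
    return True

def connected(adj, sub):
    """Is the induced subgraph on the vertex set `sub` connected?"""
    sub = set(sub)
    if not sub:
        return True
    seen, stack = set(), [next(iter(sub))]
    while stack:
        v = stack.pop()
        if v not in seen:
            seen.add(v)
            stack.extend(adj[v] & sub)
    return seen == sub

def verify_balance(n):
    """For every claw-free graph on n vertices and every pair of disjoint
    nonempty stable sets S, T with G[S + T] connected, check
    ||S| - |T|| <= 1.  Returns the number of triples checked."""
    verts = list(range(n))
    edges = list(combinations(verts, 2))
    checked = 0
    for mask in range(1 << len(edges)):
        adj = {v: set() for v in verts}
        for bit, (u, v) in enumerate(edges):
            if mask >> bit & 1:
                adj[u].add(v)
                adj[v].add(u)
        if not is_claw_free(adj):
            continue
        for assign in range(3 ** n):  # each vertex: outside, in S, or in T
            S, T, x = set(), set(), assign
            for v in verts:
                x, r = divmod(x, 3)
                if r == 1:
                    S.add(v)
                elif r == 2:
                    T.add(v)
            if any(u in adj[v] for u in S for v in S):
                continue  # S not stable
            if any(u in adj[v] for u in T for v in T):
                continue  # T not stable
            if not (S and T and connected(adj, S | T)):
                continue
            assert abs(len(S) - len(T)) <= 1
            checked += 1
    return checked
```

Running `verify_balance(4)` or `verify_balance(5)` exercises every small instance of the lemma.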
The following is immediate from the above discussion. \begin{cor}
If Conjecture \ref{conjecture} holds, then we may use it to evaluate $\sigma_{\mu,j}(X_G)$ for any claw-free graph $G$ and any integers $\mu \geq 1$ and $j \in \{0,\dots,|V(G)|-\mu\}$. \end{cor}
\subsection{Theoretical Evidence Supporting Conjecture \ref{conjecture}}
As further evidence supporting the conjecture, we present proofs of two special cases.
\subsubsection{Graphs with No Edges}
\begin{definition}\label{def:samuj}
Let $a$ and $\mu$ be positive integers and let $j$ be a non-negative integer. Let $C_a$ be a directed cycle with $a$ vertices $v_1, \dots, v_a$ and directed edges $v_1v_2, \dots, v_{a-1}v_a, v_av_1$. We define
\[S_{a,\mu,j}:=\{(a,W):W\subseteq V(C_a), |W|=\mu, \text{and }C_a-W\text{ has } j\text{ components}\}.\]
Given $W \subseteq V(C_a)$, each maximal subset $W^* \subseteq W$ such that $C_a|_{W^*}$ is connected is called a \textbf{block} of $W$.
\end{definition}
So $S_{a,\mu,j}$ is the set of ways to color the beads of a labelled $a$-bead necklace either red or blue such that $\mu$ of the beads are red and the removal of the red beads produces $j$ blue strings. Formally, since $a$ is specified in the notation $S_{a,\mu,j}$, it need not be recorded in the pair $(a,W)$; however, in what follows we will modify $W$ while transitioning between sets with different values of $a$, so including the cycle size with $W$ makes the arguments easier to follow.
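The necklace counts $|S_{a,\mu,j}|$ are straightforward to enumerate by brute force directly from Definition \ref{def:samuj}. The following Python sketch (our naming, with vertices $0$-indexed) does so by counting, for each $\mu$-subset $W$, the maximal runs of surviving beads.

```python
from itertools import combinations

def count_S(a, mu, j):
    """Brute-force |S_{a,mu,j}|: mu-subsets W of an a-cycle (0-indexed)
    whose removal leaves exactly j arcs of surviving beads."""
    cnt = 0
    for W in combinations(range(a), mu):
        Wset = set(W)
        # each arc starts at a survivor whose cyclic predecessor was
        # removed; this is valid because mu >= 1 here
        arcs = sum(1 for v in range(a)
                   if v not in Wset and (v - 1) % a in Wset)
        if arcs == j:
            cnt += 1
    return cnt

assert count_S(5, 1, 1) == 5   # one red bead leaves one blue string
assert count_S(4, 2, 1) == 4   # adjacent pairs on a 4-cycle
assert count_S(4, 2, 2) == 2   # opposite pairs on a 4-cycle
```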
\begin{lemma} \label{no-edge-lemma} Let $a\geq 3$, $\mu\geq 2$, $j\geq 1$ be integers. Then
\begin{equation}\label{eq:samuj}
|S_{a,\mu,j}|=|S_{a-1,\mu-1,j}|+\sum_{i=\mu+j-2}^{a-2}|S_{i,\mu-1,j-1}|.\end{equation} \end{lemma}
\begin{proof} Note that if $\mu+j>a$, then both sides of the equation are $0$. For all $a\geq 3$ and $\mu+j\leq a$, define \begin{align*}
S_{a,\mu,j}(1,1) &:= \{(a,W)\in S_{a,\mu,j}:v_1\in W, v_a\in W\}, \\
S_{a,\mu,j}(1,0) &:= \{(a,W)\in S_{a,\mu,j}:v_1\in W, v_a\notin W\}, \\
S_{a,\mu,j}(0,1) &:= \{(a,W)\in S_{a,\mu,j}:v_1\notin W, v_a\in W\}, \\
S_{a,\mu,j}(0,0) &:= \{(a,W)\in S_{a,\mu,j}:v_1\notin W, v_a\notin W\}. \end{align*}
Thus, the arguments in the parentheses are indicators for whether $v_1$ and $v_a$ are in $W$, respectively. It is easy to see that we have the disjoint union $S_{a,\mu,j}=S_{a,\mu,j}(1,1)\sqcup S_{a,\mu,j}(0,1)\sqcup S_{a,\mu,j}(1,0)\sqcup S_{a,\mu,j}(0,0)$.
In what follows, we fix $a\geq 3$ and $\mu,j$ such that $\mu+j\leq a$. We first demonstrate that four auxiliary equations hold by establishing bijections.
\subsection*{Case 1: $S_{a,\mu,j}(1,1)$}
Let \[\varphi_{1,1}:S_{a,\mu,j}(1,1)\to S_{a-1,\mu-1,j}(1,0)\sqcup S_{a-1,\mu-1,j}(1,1)\] be given by $\varphi_{1,1}(a,W)=(a-1,W\setminus\{v_a\})$.
Note that by definition, we have $v_a\in W$ so that $|W\setminus\{v_a\}|=\mu-1$. Further, we note that $C_{a-1}-(W\setminus\{v_a\})$ has $j$ components for all $(a,W)\in S_{a,\mu,j}$ since the block of $(a,W)$ containing $v_a$ and $v_1$ is not split. Also note that we have $v_1\in W\setminus\{v_a\}$, so the function's range is correctly given.
We claim that $\varphi_{1,1}$ is a bijection. To show injectivity, note that if $(a,W_1),(a,W_2)\in S_{a,\mu,j}(1,1)$ such that $\varphi_{1,1}(a,W_1)=\varphi_{1,1}(a,W_2)$, then $W_1\setminus\{v_a\}=W_2\setminus\{v_a\}$, which implies that $W_1=W_2$ since $v_a$ is in both $W_1$ and $W_2$ by definition. For surjectivity, note that for all $(a-1,W)\in S_{a-1,\mu-1,j}(1,0)\cup S_{a-1,\mu-1,j}(1,1)$, we may verify that $(a,W\cup\{v_a\})\in S_{a,\mu,j}(1,1)$ and $\varphi_{1,1}(a,W\cup\{v_a\})=(a-1,W)$.
Therefore, $\varphi_{1,1}$ is bijective and $|S_{a,\mu,j}(1,1)|=|S_{a-1,\mu-1,j}(1,0)|+|S_{a-1,\mu-1,j}(1,1)|$.
\subsection*{Case 2: $S_{a,\mu,j}(0,1)$}
For all $(a,W)\in S_{a,\mu,j}(0,1)$, write $W=\{{v_{i_1}},\ldots,{v_{i_{\mu-1}}},v_a\}$, where $1<i_1<\cdots<i_{\mu-1}<a$. Note that if $i_{\mu-1} < a-1$, then between $v_{i_{\mu-1}}$ and $v_a$ lies one component of $C_a-W$, so there are $j-1$ components of $C_a-W$ induced by $\{v_1,\dots,v_{i_{\mu-1}-1}\}-W$. Furthermore, among these $i_{\mu-1}-1$ vertices are at least the remaining $\mu-2$ members of $W$ and $j-1$ other vertices, so when $i_{\mu-1} < a-1$ we have $i_{\mu-1}-1 \geq (\mu-2)+(j-1)$, or $\mu+j-2\leq i_{\mu-1}$. Define \[\varphi_{0,1}:S_{a,\mu,j}(0,1)\to S_{a-1,\mu-1,j}(0,1)\sqcup\ts{\bigsqcup\limits_{i=\mu+j-2}^{a-2}S_{i,\mu-1,j-1}(0,1)}\]
by $\varphi_{0,1}(a,W)=(i_{\mu-1},W\setminus\{v_a\})$ for all $W\in S_{a,\mu,j}(0,1)$. As before, clearly $|W \setminus \{v_a\}| = \mu-1$.
Note that if $i_{\mu-1}=a-1$, then since $v_a\in W$ there remain $j$ components in the image. Otherwise, as the component of $C_a-W$ between $v_{i_{\mu-1}}$ and $v_a$ is deleted, the image has $j-1$ components. Thus, the function's range is correctly given.
We claim that $\varphi_{0,1}$ is a bijection. As above, if $W_1$ and $W_2$ are such that $\varphi_{0,1}(a,W_1)=\varphi_{0,1}(a,W_2)$, then $W_1\setminus\{v_a\}=W_2\setminus\{v_a\}$, implying that $W_1=W_2$ and verifying that $\varphi_{0,1}$ is injective.
We then show that $\varphi_{0,1}$ is surjective. First suppose we have $(a-1,W)\in S_{a-1,\mu-1,j}(0,1)$. Then we may verify that $(a,W\cup\{v_a\})\in S_{a,\mu,j}(0,1)$ and $\varphi_{0,1}(a,W\cup\{v_a\})=(a-1,W)$ since $v_{a-1}\in W$. Now suppose we have some $\mu+j-2\leq i\leq a-2$ and $(i,W)\in S_{i,\mu-1,j-1}(0,1)$. In this case, we may again check that $(a,W\cup\{v_a\})\in S_{a,\mu,j}(0,1)$ and $\varphi_{0,1}(a,W\cup\{v_a\})=(i,W)$ since $i$ is the second largest index with $v_i\in W\cup\{v_a\}$. Therefore, we conclude that $\varphi_{0,1}$ is a bijection. It follows that
$|S_{a,\mu,j}(0,1)|=|S_{a-1,\mu-1,j}(0,1)|+\sum_{i=\mu+j-2}^{a-2}|S_{i,\mu-1,j-1}(0,1)|$.
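As a sanity check, the class-refined identity just derived can be verified numerically for small parameters; the following Python sketch (our naming, $0$-indexed so that $v_1$ becomes $0$ and $v_a$ becomes $a-1$) enumerates $|S_{a,\mu,j}(0,1)|$ by brute force and checks the identity.

```python
from itertools import combinations

def count_S01(a, mu, j):
    """|S_{a,mu,j}(0,1)|: v_1 (index 0) outside W, v_a (index a-1) in W."""
    cnt = 0
    for W in combinations(range(a), mu):
        Wset = set(W)
        if 0 in Wset or (a - 1) not in Wset:
            continue
        arcs = sum(1 for v in range(a)
                   if v not in Wset and (v - 1) % a in Wset)
        if arcs == j:
            cnt += 1
    return cnt

def check_case2(max_a):
    """Verify |S_{a,mu,j}(0,1)| = |S_{a-1,mu-1,j}(0,1)|
    + sum_{i=mu+j-2}^{a-2} |S_{i,mu-1,j-1}(0,1)| for small parameters."""
    for a in range(3, max_a + 1):
        for mu in range(2, a + 1):
            for j in range(1, a - mu + 2):
                lhs = count_S01(a, mu, j)
                rhs = count_S01(a - 1, mu - 1, j) + sum(
                    count_S01(i, mu - 1, j - 1)
                    for i in range(mu + j - 2, a - 1))
                assert lhs == rhs, (a, mu, j)
    return True
```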
\subsection*{Case 3: $S_{a,\mu,j}(1,0)$}
For $(a,W)\in S_{a,\mu,j}(1,0)$, write $W=\{v_1,v_{i_2},\ldots,v_{i_\mu}\}$ where $1 <i_2<\cdots<i_\mu<a$. As before, there is a component of $C_a-W$ consisting of the vertices between $v_{i_{\mu}}$ and $v_1$, so there are $j-1$ components of $C_a-W$ induced by $\{v_1,\dots,v_{i_{\mu}}\}-W$. Thus, among these $i_{\mu}$ vertices are at least the $\mu$ vertices of $W$ as well as $j-1$ others, so $i_{\mu} \geq \mu+j-1$, and $i_{\mu}-1 \geq \mu+j-2$. Define \[\varphi_{1,0}:S_{a,\mu,j}(1,0)\to\ts{\bigsqcup\limits_{i=\mu+j-2}^{a-2}\bp{S_{i,\mu-1,j-1}(1,1)\sqcup S_{i,\mu-1,j-1}(1,0)}}\]
such that $\varphi_{1,0}(a,W)=(i_\mu-1,W\setminus\{v_{i_\mu}\})$ for all $(a,W)\in S_{a,\mu,j}(1,0)$. Clearly $|W\setminus \{v_{i_{\mu}}\}| = \mu-1$. From the above we indeed have $\mu+j-2\leq i_\mu-1\leq a-2$. Furthermore, since $v_1\in W$, $v_a\notin W$, and $v_{i_{\mu}}\in W$, we know that $C_{i_\mu-1}-(W\setminus\{v_{i_\mu}\})$ has $j-1$ components, so the given range is correct.
We show that $\varphi_{1,0}$ is a bijection. Suppose $W_1$ and $W_2$ are such that $\varphi_{1,0}(a,W_1)=\varphi_{1,0}(a,W_2)$. Write $W_1=\{v_1,v_{i_2},\ldots,v_{i_\mu}\}$ and $W_2=\{v_1,v_{j_2},\ldots,v_{j_\mu}\}$. Then $(i_\mu-1,W_1\setminus\{v_{i_\mu}\})=(j_\mu-1,W_2\setminus\{v_{j_\mu}\})$. Thus, $i_{\mu}-1 = j_{\mu}-1$ and $W_1 \setminus \{v_{i_{\mu}}\} = W_2 \setminus \{v_{j_{\mu}}\}$, so $W_1=W_2$ and $\varphi_{1,0}$ is injective.
For surjectivity, suppose we choose some $\mu+j-2\leq i\leq a-2$ and some $(i,W)\in S_{i,\mu-1,j-1}(1,1)\cup S_{i,\mu-1,j-1}(1,0)$. Then since $i\leq a-2$, we know ${i+1}\neq a$, so we may verify that $C_a-(W\cup\{{v_{i+1}}\})$ has $j$ components. It follows that $(a,W\cup\{v_{i+1}\})\in S_{a,\mu,j}(1,0)$. It is then straightforward to check that $\varphi_{1,0}(a,W\cup\{v_{i+1}\})=(i,W)$. This proves that $\varphi_{1,0}$ is a surjection, and that
$|S_{a,\mu,j}(1,0)|=\sum_{i=\mu+j-2}^{a-2}\bp{|S_{i,\mu-1,j-1}(1,1)|+|S_{i,\mu-1,j-1}(1,0)|}$.
\subsection*{Case 4: $S_{a,\mu,j}(0,0)$}
Again, for $(a,W)\in S_{a,\mu,j}(0,0)$, we write $W=\{v_{i_1},\ldots,v_{i_\mu}\}$ where $1 < i_1<\cdots<i_\mu < a$. As before it is straightforward to verify that $i_{\mu}-1 \geq \mu+j-2$ (in fact the inequality is stronger in this case, but we do not need this). We then define the map \[\varphi_{0,0}:S_{a,\mu,j}(0,0)\to S_{a-1,\mu-1,j}(0,0)\sqcup\ts{\bigsqcup\limits_{i=\mu+j-2}^{a-2}S_{i,\mu-1,j-1}(0,0)}\] such that \[\varphi_{0,0}(a,W)=\begin{cases}
(a-1, W\setminus\{v_{i_\mu}\}) & \text{ if } v_{i_\mu-1}\in W \\
(i_\mu-1, W\setminus\{v_{i_\mu}\}) & \text{ if } v_{i_\mu-1}\notin W. \end{cases}\] Note that if $v_{i_\mu-1}\in W$, then $C_{a-1}-(W\setminus\{v_{i_\mu}\})$ also has $j$ components. Furthermore, since $v_a \notin W$, by definition either $i_{\mu} = a-1$ or $v_{a-1} \notin W$, and either way $v_{a-1} \notin W \setminus \{v_{i_{\mu}}\}$, which means that $\varphi_{0,0}(a,W)\in S_{a-1,\mu-1,j}(0,0)$ when $v_{i_\mu-1}\in W$. If $v_{i_\mu-1}\notin W$, then $C_{i_\mu-1}-(W\setminus\{v_{i_\mu}\})$ has $j-1$ components, and thus $\varphi_{0,0}(a,W)\in S_{i_\mu-1,\mu-1,j-1}(0,0)$, so the range of $\varphi_{0,0}$ is correctly given.
We claim that $\varphi_{0,0}$ is a bijection. Suppose $W_1$ and $W_2$ are such that $\varphi_{0,0}(a,W_1)=\varphi_{0,0}(a,W_2)$. Write $W_1=\{v_{i_1},\ldots,v_{i_\mu}\}$ where $1<i_1<\cdots<i_\mu<a$, and write $W_2=\{v_{j_1},\ldots,v_{j_\mu}\}$ where $1<j_1<\cdots<j_\mu<a$.
If $v_{i_\mu-1}\in W_1$, then $\varphi_{0,0}(a,W_1)=(a-1,W_1\setminus\{v_{i_\mu}\})=\varphi_{0,0}(a,W_2)$. Since $j_\mu-1\neq a-1$, we must have $v_{j_\mu-1}\in W_2$. It follows that $\varphi_{0,0}(a,W_2)=(a-1,W_2\setminus\{v_{j_\mu}\})$, which means that $W_1\setminus\{v_{i_\mu}\}=W_2\setminus\{v_{j_\mu}\}$. Furthermore, since $i_{\mu-1} = i_{\mu}-1$ and $j_{\mu-1} = j_{\mu}-1$, we must have $i_\mu=j_\mu$, which shows that $W_1=W_2$.
If $v_{i_\mu-1}\notin W_1$, then $\varphi_{0,0}(a,W_1)=(i_\mu-1,W_1\setminus\{v_{i_\mu}\})=\varphi_{0,0}(a,W_2)$. Since $i_\mu-1\neq a-1$, we must have $v_{j_\mu-1}\notin W_2$. Thus, $(i_\mu-1,W_1\setminus\{v_{i_\mu}\})=(j_\mu-1,W_2\setminus\{v_{j_\mu}\})$, and as above it follows that $i_\mu=j_\mu$ and thus $W_1=W_2$. This shows that $\varphi_{0,0}$ is injective.
We show that $\varphi_{0,0}$ is surjective. First consider some $(a-1,W)\in S_{a-1,\mu-1,j}(0,0)$. Write $W=\{v_{i_1},\ldots,v_{i_{\mu-1}}\}$ such that $i_1<\cdots<i_{\mu-1}$. Since $i_{\mu-1}\leq a-2$, it is easy to see that $(a,W\cup\{v_{1+i_{\mu-1}}\})\in S_{a,\mu,j}(0,0)$, and $\varphi_{0,0}(a,W\cup\{v_{1+i_{\mu-1}}\})=(a-1,W)$.
Next, suppose we have some $i \in \{\mu+j-2, \dots, a-2\}$ and some $(i,W)\in S_{i,\mu-1,j-1}(0,0)$. Since $v_i\notin W$, we know that $C_a-(W\cup\{v_{i+1}\})$ has $j$ components. Also since $i\leq a-2$, we have $v_a\notin W\cup\{v_{i+1}\}$, and thus $(a,W\cup\{v_{i+1}\})\in S_{a,\mu,j}(0,0)$, and $\varphi_{0,0}(a,W\cup\{v_{i+1}\})=(i,W)$. Therefore, $\varphi_{0,0}$ is a surjection and thus a bijection. This means that
$|S_{a,\mu,j}(0,0)|=|S_{a-1,\mu-1,j}(0,0)|+\sum_{i=\mu+j-2}^{a-2}|S_{i,\mu-1,j-1}(0,0)|$.
Finally, combining all four cases together, we have \begin{align*}
|S_{a,\mu,j}| &= |S_{a,\mu,j}(1,1)|+|S_{a,\mu,j}(0,1)|+|S_{a,\mu,j}(1,0)|+|S_{a,\mu,j}(0,0)| \\
&= |S_{a-1,\mu-1,j}(1,0)|+|S_{a-1,\mu-1,j}(1,1)| \\
&\quad +|S_{a-1,\mu-1,j}(0,1)|+\ts{\sum\limits_{i=\mu+j-2}^{a-2}|S_{i,\mu-1,j-1}(0,1)|} \\
&\quad +\ts{\sum\limits_{i=\mu+j-2}^{a-2}\bp{|S_{i,\mu-1,j-1}(1,1)|+|S_{i,\mu-1,j-1}(1,0)|}} \\
&\quad +|S_{a-1,\mu-1,j}(0,0)|+\ts{\sum\limits_{i=\mu+j-2}^{a-2}|S_{i,\mu-1,j-1}(0,0)|} \\
&=|S_{a-1,\mu-1,j}|+\sum_{i=\mu+j-2}^{a-2}|S_{i,\mu-1,j-1}|, \end{align*} as desired.
\end{proof}
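Lemma \ref{no-edge-lemma} is also amenable to brute-force verification. The following Python sketch (our naming, $0$-indexed) checks the recurrence \eqref{eq:samuj} directly for a range of small parameters.

```python
from itertools import combinations

def count_S(a, mu, j):
    """Brute-force |S_{a,mu,j}| straight from the definition (0-indexed)."""
    cnt = 0
    for W in combinations(range(a), mu):
        Wset = set(W)
        arcs = sum(1 for v in range(a)
                   if v not in Wset and (v - 1) % a in Wset)
        if arcs == j:
            cnt += 1
    return cnt

def check_recurrence(max_a):
    """Check |S_{a,mu,j}| = |S_{a-1,mu-1,j}|
    + sum_{i=mu+j-2}^{a-2} |S_{i,mu-1,j-1}| over a range of parameters."""
    for a in range(3, max_a + 1):
        for mu in range(2, a + 1):
            for j in range(1, a - mu + 2):
                lhs = count_S(a, mu, j)
                rhs = count_S(a - 1, mu - 1, j) + sum(
                    count_S(i, mu - 1, j - 1)
                    for i in range(mu + j - 2, a - 1))
                assert lhs == rhs, (a, mu, j)
    return True
```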
With this auxiliary lemma, we may prove Conjecture \ref{conjecture} for graphs with no edges.
\begin{theorem} \label{no-edge-thm} Let $(G,\omega)$ be a set-weighted graph with $n$ vertices, total weight $d$, and no edges.
Write $X_{(G,\omega)}=\sum_{\lambda\vdash d}c_{\lambda}e_{\lambda}$. Let $\mu \leq d$ be an integer (viewed as a partition with a single part), and fix $j \in \{0, \dots, d-\mu\}$. Then \begin{equation} \label{no-edge-conjecture-equation} \sigma_{\mu,j}\bp{X_{(G,\omega)}}=(-1)^{d-n}\sum_{\substack{\swts(\gamma,S)=(\mu,j) \\ S\text{ admissible}}}\sgn(\gamma,S), \end{equation} summed over all acyclic orientations $\gamma$ of $G$ and all $\gamma$-admissible generalized $2$-step weight maps $S$ of $G$ such that $\wts(\gamma,S)=(\mu,j)$.
In particular, the formula of Conjecture \ref{conjecture} holds for graphs with no edges.
\end{theorem}
\begin{proof}
We first claim that for every choice of $a, \mu \geq 1$ and $j \geq 0$, \begin{equation}
\label{single-vertex}
\sigma_{\mu,j}(p_a)=(-1)^{a+\mu}|S_{a,\mu,j}|. \end{equation}
We proceed by showing that the left- and right-hand sides of this equation satisfy the same base cases and the same recurrence relation. For each $a,\mu,j$ as above, let $T_{a,\mu,j}=(-1)^{a+\mu}|S_{a,\mu,j}|$, the quantity on the right-hand side of \eqref{single-vertex}.
We will make use of Newton's identity \cite{mac}: \begin{equation}\label{newton}
p_a=(-1)^{a-1}ae_a+\sum_{i=1}^{a-1}(-1)^{a-1+i}e_{a-i}p_i. \end{equation}
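Newton's identity \eqref{newton} can be sanity-checked numerically by evaluating both sides at a concrete point; the following Python sketch (the evaluation point is arbitrary) does so in five variables.

```python
from itertools import combinations
from math import prod

xs = [2, 3, 5, 7, 11]   # arbitrary evaluation point in five variables
n = len(xs)

def e(k):
    """Elementary symmetric polynomial e_k evaluated at xs."""
    if k == 0:
        return 1
    if k > n:
        return 0
    return sum(prod(c) for c in combinations(xs, k))

def p(k):
    """Power sum p_k evaluated at xs."""
    return sum(x ** k for x in xs)

for a in range(1, n + 1):
    rhs = (-1) ** (a - 1) * a * e(a) + sum(
        (-1) ** (a - 1 + i) * e(a - i) * p(i) for i in range(1, a))
    assert p(a) == rhs
```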
First, for the base cases, note that $p_1 = e_1$ and $p_2 = e_{11}-2e_2$, so \[\sigma_{1,0}(p_1)=1=T_{1,1,0},\quad \sigma_{1,0}(p_2)=0=T_{2,1,0}, \quad \sigma_{1,1}(p_2)=-2=T_{2,1,1}, \quad \sigma_{2,0}(p_2) = 1 = T_{2,2,0}\] and it is easy to check that for any other choice of $a \in \{1,2\}, \mu \geq 1, j \geq 0$ both sides of \eqref{single-vertex} evaluate to $0$.
If $j=0$, then both sides of \eqref{single-vertex} are $0$ except when $\mu=a$, in which case we may verify from \eqref{newton} that $\sigma_{a,0}(p_a) = 1 = T_{a,a,0}$.
If $\mu = 1$, then both sides of \eqref{single-vertex} are $0$ unless $a=1$ and $j=0$, which was checked above, or $j=1$, in which case we may verify from \eqref{newton} that $\sigma_{1,1}(p_a) = (-1)^{a-1}a = T_{a,1,1}$. This establishes all necessary base cases.
For the recursive step, by Lemma \ref{no-edge-lemma}, we see that for $a\geq 3,\mu \geq 2,j \geq 1$ we have \begin{align}
T_{a,\mu,j} &= (-1)^{a+\mu}\lrp{|S_{a-1,\mu-1,j}|+\sum_{i=\mu+j-2}^{a-2}|S_{i,\mu-1,j-1}|} \nonumber \\
&= (-1)^{a+\mu-2}|S_{a-1,\mu-1,j}|+\sum_{i=\mu+j-2}^{a-2}(-1)^{a-1+i}\cdot(-1)^{i+\mu-1}|S_{i,\mu-1,j-1}| \nonumber \\
&= T_{a-1,\mu-1,j}+\sum_{i=\mu+j-2}^{a-2}(-1)^{a-1+i}T_{i,\mu-1,j-1}.
\label{T-recurrence} \end{align}
Furthermore, when $a\geq 3$, $\mu\geq 2$, and $j\geq 1$, it follows from applying $\sigma_{\mu,j}$ to both sides of \eqref{newton} that \begin{equation} \label{sigma-recurrence} \sigma_{\mu,j}(p_a)=\sigma_{\mu-1,j}(p_{a-1})+\sum_{i=\mu+j-2}^{a-2}(-1)^{a-1+i}\sigma_{\mu-1,j-1}(p_i), \end{equation} where we may determine that $\sigma_{\mu-1,j-1}(p_i) = 0$ for $i < \mu+j-2$ since by definition $\sigma_{\mu-1,j-1}$ will only give nonzero evaluation on $e$-basis terms that are homogeneous with degree at least $2(j-1)+(\mu-1-(j-1)) = (j-1)+(\mu-1) = \mu+j-2$.
Thus, combining \eqref{T-recurrence} and \eqref{sigma-recurrence}, we have shown that both sides of \eqref{single-vertex} have the same recurrence relation. Since they also have the same base cases, both sides are equal for all relevant $a,\mu,j$.
Now, suppose $(G,\omega)$ has vertices of weights $\lambda_1\geq\cdots\geq \lambda_n$. We first consider the left-hand side of \eqref{no-edge-conjecture-equation}. Since $G$ has no edges, from the definition of $\sigma$ we obtain \[\sigma_{\mu,j}\bp{X_{(G,\omega)}}=\sigma_{\mu,j}(p_{(\lambda_1,\ldots,\lambda_n)})=\sigma_{\mu,j}(p_{\lambda_1}\cdots p_{\lambda_n})=\sum_{\substack{(k_1,\ldots,k_n) \\ k_1+\cdots+k_n=\mu}}\sum_{\substack{(\ell_1,\ldots,\ell_n) \\ \ell_1+\cdots+\ell_n=j}}\sigma_{k_1,\ell_1}(p_{\lambda_1})\cdots \sigma_{k_n,\ell_n}(p_{\lambda_n}),\] summed over all tuples $(k_1,\ldots,k_n)$ of positive integers such that $k_1+\cdots+k_n=\mu$, and all tuples $(\ell_1,\ldots,\ell_n)$ of non-negative integers with $\ell_1+\cdots+\ell_n=j$.
We then consider the right-hand side of \eqref{no-edge-conjecture-equation}. Since $G$ has no edges, there is only one acyclic orientation, i.e. the empty orientation. Then by definition, we have \begin{align*}
(-1)^{d-n}\sum_{\substack{\swts(\gamma,S)=(\mu,j) \\ S\text{ admissible}}}\sgn(\gamma,S)
&= (-1)^{d-n}\sum_{\substack{(k_1,\ldots,k_n) \\ k_1+\cdots+k_n=\mu}}\sum_{\substack{(\ell_1,\ldots,\ell_n) \\ \ell_1+\cdots+\ell_n=j}}(-1)^{\mu-n}|S_{\lambda_1,k_1,\ell_1}|\cdots|S_{\lambda_n,k_n,\ell_n}|\end{align*}
where the sum runs over the same tuples as in the previous equation.
Applying \eqref{single-vertex} and comparing to the previous equation, it is straightforward to verify that this is equal to $\sigma_{\mu,j}\bp{X_{(G,\omega)}}$, and this finishes the proof.
\end{proof}
\subsubsection{Graphs with Two Vertices Connected by an Edge}
In this section we will continue building upon ideas used in the previous proof.
\begin{lemma}\label{lem:one-edge} Let $a$ and $b$ be positive integers such that $a\leq b$, let $\mu$ be a positive integer such that $\mu \leq a$ or $b \leq \mu \leq a+b$, and let $j \in \{0, \dots, a+b-\mu\}$, where $j = 0$ is only allowed if $\mu = a+b$. Then \begin{equation} \label{one-edge-lemma}
-|S_{a+b,\mu,j}|+\sum_{k=1}^{\mu-1}\sum_{\ell=0}^{j}|S_{a,k,\ell}|\cdot |S_{b,\mu-k,j-\ell}|= \begin{cases} (-1)^j\cdot 2\binom{a}{j} & \text{ if }\mu=a=b \\
(-1)^j\binom{a}{j}-|S_{b,\mu,j}| & \text{ if }a<b\text{ and }\mu\in\{a,b\} \\
-|S_{a,\mu,j}|-|S_{b,\mu,j}| & \text{ if }\mu<a\text{ or }\mu>b \end{cases}. \end{equation} \end{lemma}
Note: as will be demonstrated later, the need for proving this equation arises from applying deletion-contraction to a graph with two vertices and one edge and expanding the result using Theorem \ref{no-edge-thm}.
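The identity \eqref{one-edge-lemma} can also be confirmed by brute force before reading the proof. The following Python sketch (our naming, $0$-indexed) enumerates $|S_{a,\mu,j}|$ directly and checks all three cases over small admissible parameters.

```python
from itertools import combinations
from math import comb

def count_S(a, mu, j):
    """Brute-force |S_{a,mu,j}| (0-indexed cycle)."""
    cnt = 0
    for W in combinations(range(a), mu):
        Wset = set(W)
        arcs = sum(1 for v in range(a)
                   if v not in Wset and (v - 1) % a in Wset)
        if arcs == j:
            cnt += 1
    return cnt

def check_one_edge(max_a):
    """Verify the displayed identity over all admissible (a, b, mu, j)
    with b <= max_a."""
    for a in range(1, max_a + 1):
        for b in range(a, max_a + 1):
            for mu in range(1, a + b + 1):
                if not (mu <= a or b <= mu):
                    continue
                lo = 0 if mu == a + b else 1   # j = 0 only when mu = a + b
                for j in range(lo, a + b - mu + 1):
                    lhs = -count_S(a + b, mu, j) + sum(
                        count_S(a, k, l) * count_S(b, mu - k, j - l)
                        for k in range(1, mu) for l in range(j + 1))
                    if mu == a == b:
                        rhs = (-1) ** j * 2 * comb(a, j)
                    elif a < b and mu in (a, b):
                        rhs = (-1) ** j * comb(a, j) - count_S(b, mu, j)
                    else:  # mu < a or mu > b
                        rhs = -count_S(a, mu, j) - count_S(b, mu, j)
                    assert lhs == rhs, (a, b, mu, j)
    return True
```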
\begin{proof}
First, it is straightforward to verify that \eqref{one-edge-lemma} holds when $\mu = a+b$ and $j = 0$. Thus, for the remainder of this proof we may assume that $j$ is a positive integer.
We will introduce notation that will be used throughout this proof.
For positive integers $a \leq b$ and $(i_1,i_2)\in\{0,1\}^2$, define \[S_{a,\mu,j}^{a+b}(i_1,i_2):=\{(a+b,W):(a,W)\in S_{a,\mu,j}(i_1,i_2)\}\] and \[S_{b,\mu,j}^{a+b}(i_1,i_2):=\{(a+b,W):(b,\{w-a:w \in W\})\in S_{b,\mu,j}(i_1,i_2)\}.\]
That is, $S_{a,\mu,j}^{a+b}(i_1,i_2)$ is the set of choices of $W$ along an $a+b$ vertex cycle such that all $w \in W$ are among $\{v_1,\dots,v_a\}$, and that if we broke the cycle between $v_a$ and $v_{a+1}$ and also between $v_{a+b}$ and $v_1$, and reattached $v_1$ to $v_a$, the result would be a valid element of $S_{a,\mu,j}(i_1,i_2)$. Analogously, $S_{b,\mu,j}^{a+b}(i_1,i_2)$ consists of the choices of $W$ in which all elements of $W$ are among $v_{a+1}, \dots, v_{a+b}$, and forming the cycle by breaking in the same way, attaching $v_{a+1}$ to $v_{a+b}$, and subtracting $a$ from all vertex labels yields a valid element of $S_{b,\mu,j}(i_1,i_2)$.
For $(i_1,i_2,i_3,i_4)\in\{0,1\}^4$, we define $S_{a+b,\mu,j}(i_1,i_2,i_3,i_4)\subseteq S_{a+b,\mu,j}$ as the set of elements $(a+b,W)$ where $i_1$ is an indicator for whether $v_1$ is selected, $i_2$ for whether $v_a$ is selected, $i_3$ for whether $v_{a+1}$ is selected, and $i_4$ for whether $v_{a+b}$ is selected.
For example, we have $S_{a+b,\mu,j}(0,1,1,0)=\{(a+b,W)\in S_{a+b,\mu,j}:v_1,v_{a+b}\notin W,v_a,v_{a+1}\in W\}$.
Note that \begin{itemize}
\item If $(i_1,i_2)=(0,0)$ and $(i_3,i_4) \neq (1,1)$, then $S_{b,\mu,j}^{a+b}(i_3,i_4)\subseteq S_{a+b,\mu,j}(i_1,i_2,i_3,i_4)$.
\item If $(i_1,i_2)\neq (0,0)$, then $S_{b,\mu,j}^{a+b}(i_3,i_4)\cap S_{a+b,\mu,j}(i_1,i_2,i_3,i_4)=\varnothing$.
\item If $(i_3,i_4)=(0,0)$ and $(i_1,i_2) \neq (1,1)$, then $S_{a,\mu,j}^{a+b}(i_1,i_2)\subseteq S_{a+b,\mu,j}(i_1,i_2,i_3,i_4)$.
\item If $(i_3,i_4)\neq (0,0)$, then $S_{a,\mu,j}^{a+b}(i_1,i_2)\cap S_{a+b,\mu,j}(i_1,i_2,i_3,i_4)=\varnothing$. \end{itemize}
We first simplify the left-hand side of \eqref{one-edge-lemma}. Let \[P=\{(1,1,0,0),(1,0,0,1),(0,1,1,0),(0,0,1,1)\}.\] We claim that for all $(i_1,i_2,i_3,i_4)\in\{0,1\}^4\setminus P$, there is a bijection between \[ \bigsqcup_{k=1}^{\mu-1}\bigsqcup_{\ell=0}^{j}S_{a,k,\ell}(i_1,i_2)\times S_{b,\mu-k,j-\ell}(i_3,i_4) \quad\text{and}\quad S_{a+b,\mu,j}(i_1,i_2,i_3,i_4)\setminus\bp{S_{a,\mu,j}^{a+b}(i_1,i_2)\cup S_{b,\mu,j}^{a+b}(i_3,i_4)}. \]
Indeed, let $\varphi$ be a function from the left set to the right set given by \[\varphi\bp{(a,W_1),(b,W_2)}=\bp{a+b,W_1\cup\{v_{a+i}:v_i\in W_2\}}.\] One can check manually case by case that this is a well-defined function from the left set to the right set (in particular, that the number of components induced by $((a,W_1),(b,W_2))$ is correct), and that it is a bijection.
For $(i_1,i_2,i_3,i_4)\in P$, it is straightforward to check that the above map $\varphi$ gives a bijection between each of the following pairs: \[ \bigsqcup_{k=1}^{\mu-1}\bigsqcup_{\ell=0}^{j}S_{a,k,\ell}(1,1)\times S_{b,\mu-k,j-\ell}(0,0) \quad\text{and}\quad S_{a+b,\mu,j+1}(1,1,0,0)\setminus S_{a,\mu,j}^{a+b}(1,1), \] \[ \bigsqcup_{k=1}^{\mu-1}\bigsqcup_{\ell=0}^{j}S_{a,k,\ell}(1,0)\times S_{b,\mu-k,j-\ell}(0,1) \quad\text{and}\quad S_{a+b,\mu,j-1}(1,0,0,1), \] \[ \bigsqcup_{k=1}^{\mu-1}\bigsqcup_{\ell=0}^{j}S_{a,k,\ell}(0,1)\times S_{b,\mu-k,j-\ell}(1,0) \quad\text{and}\quad S_{a+b,\mu,j-1}(0,1,1,0), \] \[ \bigsqcup_{k=1}^{\mu-1}\bigsqcup_{\ell=0}^{j}S_{a,k,\ell}(0,0)\times S_{b,\mu-k,j-\ell}(1,1) \quad\text{and}\quad S_{a+b,\mu,j+1}(0,0,1,1)\setminus S_{b,\mu,j}^{a+b}(1,1). \]
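These four cardinality identities are easy to confirm numerically; the following Python sketch (our naming, with vertices $0$-indexed so that $v_1,v_a,v_{a+1},v_{a+b}$ become $0$, $a-1$, $a$, $a+b-1$) checks them for small parameters.

```python
from itertools import combinations

def count(a, mu, j, flags):
    """Count W in S_{a,mu,j} with prescribed membership of the flagged
    vertices: flags is a list of (0-indexed vertex, 0-or-1) pairs."""
    cnt = 0
    for W in combinations(range(a), mu):
        Wset = set(W)
        if any((v in Wset) != bool(f) for v, f in flags):
            continue
        arcs = sum(1 for v in range(a)
                   if v not in Wset and (v - 1) % a in Wset)
        if arcs == j:
            cnt += 1
    return cnt

def check_P_classes(max_a):
    for a in range(1, max_a + 1):
        for b in range(a, max_a + 1):
            n = a + b

            def c2(m, mu, j, i1, i2):
                return count(m, mu, j, [(0, i1), (m - 1, i2)])

            def c4(mu, j, i1, i2, i3, i4):
                return count(n, mu, j,
                             [(0, i1), (a - 1, i2), (a, i3), (n - 1, i4)])

            for mu in range(2, n):
                for j in range(1, n - mu + 1):
                    def conv(p, q):
                        return sum(c2(a, k, l, *p) * c2(b, mu - k, j - l, *q)
                                   for k in range(1, mu)
                                   for l in range(j + 1))
                    assert conv((1, 1), (0, 0)) == \
                        c4(mu, j + 1, 1, 1, 0, 0) - c2(a, mu, j, 1, 1)
                    assert conv((1, 0), (0, 1)) == c4(mu, j - 1, 1, 0, 0, 1)
                    assert conv((0, 1), (1, 0)) == c4(mu, j - 1, 0, 1, 1, 0)
                    assert conv((0, 0), (1, 1)) == \
                        c4(mu, j + 1, 0, 0, 1, 1) - c2(b, mu, j, 1, 1)
    return True
```

Here we use that $|S_{a,\mu,j}^{a+b}(1,1)| = |S_{a,\mu,j}(1,1)|$ (and likewise for $b$), which follows from the relabelling described above.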
Thus, \begin{align*}
&\quad \sum_{k=1}^{\mu-1}\sum_{\ell=0}^{j} |S_{a,k,\ell}|\cdot|S_{b,\mu-k,j-\ell}|
= \sum_{k=1}^{\mu-1}\sum_{\ell=0}^{j} \lrp{\ts{\sum\limits_{i_1,i_2}|S_{a,k,\ell}(i_1,i_2)|}}\cdot\lrp{\ts{\sum\limits_{i_3,i_4}|S_{b,\mu-k,j-\ell}(i_3,i_4)|}} \\
&= \sum_{k=1}^{\mu-1}\sum_{\ell=0}^{j}\sum_{i_1,\ldots,i_4}
|S_{a,k,\ell}(i_1,i_2)|\cdot|S_{b,\mu-k,j-\ell}(i_3,i_4)| \\
&= \sum_{i_1,\ldots,i_4}\sum_{k=1}^{\mu-1}\sum_{\ell=0}^{j}|S_{a,k,\ell}(i_1,i_2)|\cdot|S_{b,\mu-k,j-\ell}(i_3,i_4)| \\
&= \sum_{(i_1,\ldots,i_4)\notin P}\sum_{k=1}^{\mu-1}\sum_{\ell=0}^{j}|S_{a,k,\ell}(i_1,i_2)|\cdot|S_{b,\mu-k,j-\ell}(i_3,i_4)| \\
&\quad+ \sum_{(i_1,\ldots,i_4)\in P}\sum_{k=1}^{\mu-1}\sum_{\ell=0}^{j}|S_{a,k,\ell}(i_1,i_2)|\cdot|S_{b,\mu-k,j-\ell}(i_3,i_4)| \\
&= (|S_{a+b,\mu,j}|-|S_{a+b,\mu,j}(1,1,0,0)|-|S_{a+b,\mu,j}(1,0,0,1)|-|S_{a+b,\mu,j}(0,1,1,0)|-|S_{a+b,\mu,j}(0,0,1,1)|
\\
&\quad-|S_{a,\mu,j}|-|S_{b,\mu,j}|+|S_{a,\mu,j}^{a+b}(1,1)|+|S_{b,\mu,j}^{a+b}(1,1)|)
\\
&\quad+ (|S_{a+b,\mu,j+1}(1,1,0,0)|+|S_{a+b,\mu,j-1}(1,0,0,1)|+|S_{a+b,\mu,j-1}(0,1,1,0)|+|S_{a+b,\mu,j+1}(0,0,1,1)|
\\
&\quad- |S_{a,\mu,j}^{a+b}(1,1)|-|S_{b,\mu,j}^{a+b}(1,1)|)
\\
&= -|S_{a,\mu,j}|-|S_{b,\mu,j}|+|S_{a+b,\mu,j}| \\
&\quad -|S_{a+b,\mu,j}(1,1,0,0)|-|S_{a+b,\mu,j}(1,0,0,1)|-|S_{a+b,\mu,j}(0,1,1,0)|-|S_{a+b,\mu,j}(0,0,1,1)| \\
&\quad +|S_{a+b,\mu,j+1}(1,1,0,0)|+|S_{a+b,\mu,j-1}(1,0,0,1)|+|S_{a+b,\mu,j-1}(0,1,1,0)|+|S_{a+b,\mu,j+1}(0,0,1,1)|, \end{align*} where we use the bijections as described previously.
Therefore, the left-hand side of \eqref{one-edge-lemma} becomes \begin{align}
\label{lhs}
&-|S_{a,\mu,j}|-|S_{b,\mu,j}| \nonumber \\
& -|S_{a+b,\mu,j}(1,1,0,0)|-|S_{a+b,\mu,j}(1,0,0,1)|-|S_{a+b,\mu,j}(0,1,1,0)|-|S_{a+b,\mu,j}(0,0,1,1)| \nonumber \\
& +|S_{a+b,\mu,j+1}(1,1,0,0)|+|S_{a+b,\mu,j-1}(1,0,0,1)|+|S_{a+b,\mu,j-1}(0,1,1,0)|+|S_{a+b,\mu,j+1}(0,0,1,1)|. \end{align}
We now define a number of sets that will help to break the proof down into smaller components.
Consider \begin{align*}
A=\{&(a+b,W)\in S_{a+b,\mu,j-1}(0,1,1,0)\cup S_{a+b,\mu,j+1}(0,0,1,1): \\
&\text{for all }i\in\{1,\dots,a\},\text{exactly one of }v_i\text{ and }v_{i+b}\text{ is in }W \}, \end{align*} \[B=\bp{S_{a+b,\mu,j-1}(0,1,1,0)\cup S_{a+b,\mu,j+1}(0,0,1,1)}\setminus A,\] \begin{align*}
C=\{&(a+b,W)\in S_{a+b,\mu,j}(0,1,1,0)\cup S_{a+b,\mu,j}(0,0,1,1): \\
&\text{for all }i\in\{1,\dots,a\},\text{exactly one of }v_i\text{ and }v_{i+b}\text{ is in }W \}, \end{align*} \[D=\bp{S_{a+b,\mu,j}(0,1,1,0)\cup S_{a+b,\mu,j}(0,0,1,1)}\setminus C.\] Similarly, let \begin{align*}
A'=\{&(a+b,W)\in S_{a+b,\mu,j-1}(1,1,0,0)\cup S_{a+b,\mu,j+1}(1,0,0,1): \\
&\text{for all }i\in\{1,\dots,a\},\text{exactly one of }v_i\text{ and }v_{i+b}\text{ is in }W \}, \end{align*} \[B'=\bp{S_{a+b,\mu,j-1}(1,1,0,0)\cup S_{a+b,\mu,j+1}(1,0,0,1)}\setminus A',\] \begin{align*}
C'=\{&(a+b,W)\in S_{a+b,\mu,j}(1,1,0,0)\cup S_{a+b,\mu,j}(1,0,0,1): \\
&\text{for all }i\in\{1,\dots,a\},\text{exactly one of }v_i\text{ and }v_{i+b}\text{ is in }W \}, \end{align*} \[D'=\bp{S_{a+b,\mu,j}(1,1,0,0)\cup S_{a+b,\mu,j}(1,0,0,1)}\setminus C'.\]
Note that all of these sets depend on $a,b,\mu,j$, but we suppress explicit mention of this for clarity.
We claim that for any choice of $a,b,\mu,j$, we have $|D|=|B|$. Consider the function $\varphi:D\to B$ defined as follows. Fix $(a+b,W)\in D$. We let $i_0$ be the largest index from $\{1,\ldots,a-1\}$ such that either both $v_{i_0}$ and $v_{i_0+b}$ are in $W$ or both $v_{i_0}$ and $v_{i_0+b}$ are not in $W$, which exists by construction. Then $\varphi(a+b,W)=(a+b,W')$, where: \begin{itemize}
\item For $i_0<i\leq a$, $v_i\in W'$ if and only if $v_{i+b}\in W$, or equivalently, $v_i\in W'$ if and only if $v_i\notin W$;
\item For $i_0+b<i\leq a+b$, $v_i\in W'$ if and only if $v_{i-b}\in W$, or equivalently, $v_i\in W'$ if and only if $v_i\notin W$;
\item For all other $i \in \{1,\dots,a+b\}$, $v_i\in W'$ if and only if $v_i\in W$. \end{itemize}
Provided the range of $\varphi$ is indeed contained in $B$, it is easy to see that $\varphi$ is a bijection between $D$ and $B$, since the construction is clearly reversible.
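Before verifying the range of $\varphi$, we note that the claimed equality $|D|=|B|$ can itself be confirmed by brute force; the following Python sketch (our naming, $0$-indexed) enumerates the four defining classes and checks the equality for small parameters.

```python
from itertools import combinations

def class_members(a, b, mu, jj, i1, i2, i3, i4):
    """Subsets W (0-indexed) belonging to S_{a+b,mu,jj}(i1,i2,i3,i4)."""
    n = a + b
    flags = [(0, i1), (a - 1, i2), (a, i3), (n - 1, i4)]
    out = []
    for W in combinations(range(n), mu):
        Wset = set(W)
        if any((v in Wset) != bool(f) for v, f in flags):
            continue
        arcs = sum(1 for v in range(n)
                   if v not in Wset and (v - 1) % n in Wset)
        if arcs == jj:
            out.append(Wset)
    return out

def fully_paired(a, b, W):
    """'Exactly one of v_i and v_{i+b} lies in W' for every i in 1..a."""
    return all((i in W) != (i + b in W) for i in range(a))

def check_D_equals_B(max_a):
    for a in range(2, max_a + 1):
        for b in range(a, max_a + 1):
            for mu in range(1, a + b):
                for j in range(1, a + b - mu):
                    AB = class_members(a, b, mu, j - 1, 0, 1, 1, 0) + \
                         class_members(a, b, mu, j + 1, 0, 0, 1, 1)
                    CD = class_members(a, b, mu, j, 0, 1, 1, 0) + \
                         class_members(a, b, mu, j, 0, 0, 1, 1)
                    B = [W for W in AB if not fully_paired(a, b, W)]
                    D = [W for W in CD if not fully_paired(a, b, W)]
                    assert len(B) == len(D), (a, b, mu, j)
    return True
```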
By construction, clearly $|W'|=\mu$, and it is also easy to see that $\varphi(a+b,W)\notin A$ since $(a+b,W)$ was not in $C$.
It thus suffices to prove that $\varphi(a+b,W)\in S_{a+b,\mu,j-1}(0,1,1,0)\cup S_{a+b,\mu,j+1}(0,0,1,1)$, meaning that the number of components remaining after deleting the image set and the inclusions of $v_1,v_a,v_{a+1},v_{a+b}$ in it match one of the two possible cases.
Let $\varphi(a+b,W)=(a+b,W')$. In the graph $C_{a+b} \setminus W$, we define the following induced subgraphs: \begin{itemize}
\item $G_1$ is the subgraph induced by $\{v_1,\dots,v_{i_0}\} \setminus W$ and has $\ell_1$ components.
\item $G_a$ is the subgraph induced by $\{v_{i_0+1}, \dots, v_a\} \setminus W$ and has $\ell_a$ components.
\item $G_{a+1}$ is the subgraph induced by $\{v_{a+1}, \dots, v_{i_0+b}\} \setminus W$ and has $\ell_{a+1}$ components.
\item $G_{a+b}$ is the subgraph induced by $\{v_{i_0+b+1}, \dots, v_{a+b}\} \setminus W$ and has $\ell_{a+b}$ components. \end{itemize}
We define induced subgraphs $G_1', G_a', G_{a+1}', G_{a+b}'$ of $C_{a+b} \setminus W'$ analogously, with number of connected components $\ell_1', \ell_a', \ell_{a+1}', \ell_{a+b}'$ respectively.
Note that according to the construction, each of these graphs is necessarily nonempty, and furthermore $G_1 = G_1', G_{a+1} = G_{a+1}'$, and $V(G_a) \cap V(G_a') = V(G_{a+b}) \cap V(G_{a+b}') = \varnothing$. It follows that $\ell_1 = \ell_1'$, $\ell_{a+1} = \ell_{a+1}'$, and it is simple to verify that \begin{itemize}
\item $\ell'_a = \ell_a$ if exactly one of $v_{i_0+1}$ and $v_a$ is in $W$,
\item $\ell'_a = \ell_a-1$ if $v_{i_0+1}, v_a \notin W$,
\item $\ell'_a = \ell_a+1$ if $v_{i_0+1}, v_a \in W$, \end{itemize} and analogously for $\ell'_{a+b}$.
We first suppose that $(a+b,W)\in S_{a+b,\mu,j}(0,1,1,0)$. By construction, $v_1,v_a\notin W'$ and $v_{a+1},v_{a+b}\in W'$. Now: \begin{itemize}
\item If $v_{i_0},v_{i_0+1}\notin W$, then by construction, we have $v_{i_0+b}\notin W$ and $v_{i_0+b+1}\in W$, and it is straightforward to verify that $j=\ell_1+\ell_a+\ell_{a+1}+\ell_{a+b}-2$, since two pairs of components are merged: one between $G_{a+b}$ and $G_1$, and one between $G_1$ and $G_a$.
We then have $v_{i_0}\notin W'$, $v_{i_0+1}\in W'$, $v_{i_0+b}\notin W'$, and $v_{i_0+b+1}\notin W'$. Furthermore, using the above observations, it is easy to verify that $\ell_i' = \ell_i$ for $i \in \{1,a,a+1,a+b\}$, and that the only components of $C_{a+b} \setminus W'$ unified across different $G_i$ are between $G'_{a+1}$ and $G'_{a+b}$, so there are $\ell'_1+\ell'_a+\ell'_{a+1}+\ell'_{a+b}-1=j+1$ components.
\item If $v_{i_0}\notin W$ and $v_{i_0+1}\in W$, then by construction, we have $v_{i_0+b}\notin W$ and $v_{i_0+b+1}\notin W$, and similar to the above argument we may verify that $j=\ell_1+\ell_a+\ell_{a+1}+\ell_{a+b}-2$.
We then have $v_{i_0}\notin W'$, $v_{i_0+1}\notin W'$, $v_{i_0+b}\notin W'$, $v_{i_0+b+1}\in W'$, and we have $\ell'_a = \ell_a+1$ and $\ell'_{a+b} = \ell_{a+b}-1$. Then we may check that in $C_{a+b} \setminus W'$ there are $\ell'_1+\ell'_a+\ell'_{a+1}+\ell'_{a+b}-1=j+1$ components.
\item If $v_{i_0}\in W$ and $v_{i_0+1}\notin W$, then $v_{i_0+b},v_{i_0+b+1}\in W$, and $j=\ell_1+\ell_a+\ell_{a+1}+\ell_{a+b}-1$.
We then have $v_{i_0}\in W'$, $v_{i_0+1}\in W'$, $v_{i_0+b}\in W'$, and $v_{i_0+b+1}\notin W'$, and we may check that $\ell'_a = \ell_a$ and $\ell'_{a+b} = \ell_{a+b}$, so in $C_{a+b} \setminus W'$ there are $\ell'_1+\ell'_a+\ell'_{a+1}+\ell'_{a+b}=j+1$ components.
\item If $v_{i_0},v_{i_0+1}\in W$, we then know that $v_{i_0+b}\in W$, $v_{i_0+b+1}\notin W$, and $j=\ell_1+\ell_a+\ell_{a+1}+\ell_{a+b}-1$.
We then have $v_{i_0}\in W'$, $v_{i_0+1}\notin W'$, $v_{i_0+b}\in W'$, and $v_{i_0+b+1}\in W'$. We may check that $\ell'_a = \ell_a+1$ and $\ell'_{a+b} = \ell_{a+b}-1$, so in $C_{a+b} \setminus W'$ there are $\ell'_1+\ell'_a+\ell'_{a+1}+\ell'_{a+b}=j+1$ components. \end{itemize}
Thus in every case, $(a+b,W')\in S_{a+b,\mu,j+1}(0,0,1,1)$. An analogous argument shows that if $(a+b,W) \in S_{a+b,\mu,j}(0,0,1,1)$, then $\varphi(a+b,W) \in S_{a+b,\mu,j-1}(0,1,1,0)$, completing the proof that $|D| = |B|$. An essentially identical proof shows that $|D'| = |B'|$ for any $a,b,\mu,j$ as well.
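The equalities $|D|=|B|$ and $|D'|=|B'|$ lend themselves to a quick brute-force sanity check for small parameters. The sketch below is not part of the proof; it assumes (reconstructing the definitions) that $S_{n,\mu,j}(\epsilon_1,\epsilon_2,\epsilon_3,\epsilon_4)$ consists of the $\mu$-subsets $W$ of $V(C_n)$, $n=a+b$, for which $C_n\setminus W$ has exactly $j$ components and the membership pattern of $(v_1,v_a,v_{a+1},v_{a+b})$ in $W$ equals $(\epsilon_1,\epsilon_2,\epsilon_3,\epsilon_4)$, and that $B$ and $D$ collect the elements of the relevant unions that fail the pairing condition "exactly one of $v_i$ and $v_{i+b}$ is in $W$ for all $i\in\{1,\dots,a\}$".

```python
from itertools import combinations

def components(n, W):
    """Connected components of the cycle C_n (vertices 1..n) minus W."""
    if not W:
        return 1
    return sum(1 for v in range(1, n + 1)
               if v not in W and (n if v == 1 else v - 1) in W)

def S(n, a, mu, j, flags):
    """Enumerate W in the assumed S_{n,mu,j}(flags), where flags is the
    membership pattern of (v_1, v_a, v_{a+1}, v_n) in W."""
    pos = (1, a, a + 1, n)
    return [set(W) for W in combinations(range(1, n + 1), mu)
            if components(n, set(W)) == j
            and tuple(int(v in W) for v in pos) == flags]

def paired(a, b, W):
    """'Exactly one of v_i and v_{i+b} lies in W' for all i in {1,...,a}."""
    return all((i in W) != (i + b in W) for i in range(1, a + 1))

def D_count(a, b, mu, j):
    pool = S(a + b, a, mu, j, (0, 1, 1, 0)) + S(a + b, a, mu, j, (0, 0, 1, 1))
    return sum(1 for W in pool if not paired(a, b, W))

def B_count(a, b, mu, j):
    pool = (S(a + b, a, mu, j - 1, (0, 1, 1, 0))
            + S(a + b, a, mu, j + 1, (0, 0, 1, 1)))
    return sum(1 for W in pool if not paired(a, b, W))
```

For instance, with $(a,b,\mu,j)=(2,3,2,1)$ the only element of $D$ is $W=\{v_2,v_3\}$ and the only element of $B$ is $W=\{v_3,v_5\}$, so both counts equal $1$.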
We now prove \eqref{one-edge-lemma} by evaluating the left-hand side of the equation as given in \eqref{lhs}, splitting into cases for different choices of $a,b,\mu,j$. Within each case, any statements about the sets $A,B,C,D,A',B',C',D'$ hold for all choices of $a,b,\mu,j$ considered by that case.
Throughout the proof, given a graph $G$ and $S \subseteq V(G)$, we let $G[S]$ denote the subgraph of $G$ induced by $S$, and $G \setminus S$ denote the subgraph of $G$ induced by $V(G) \setminus S$.
\subsection*{Case 1. $j$ is even.}
\subsubsection*{Case 1.1. $\mu=a=b$.}
Since we are assuming $j\geq 1$, we have $|S_{a,\mu,j}|=|S_{b,\mu,j}|=0$. Furthermore, using the map $(2a,W) \rightarrow (2a,[2a]\setminus W)$, we may verify that $|S_{2a,a,j}(1,1,0,0)|=|S_{2a,a,j}(0,0,1,1)|$,
$|S_{2a,a,j}(1,0,0,1)|=|S_{2a,a,j}(0,1,1,0)|$,
$|S_{2a,a,j+1}(1,1,0,0)|=|S_{2a,a,j+1}(0,0,1,1)|$, and
$|S_{2a,a,j-1}(1,0,0,1)|=|S_{2a,a,j-1}(0,1,1,0)|$. Thus, \eqref{lhs} becomes
\[2\bp{-|S_{2a,a,j}(0,1,1,0)|-|S_{2a,a,j}(0,0,1,1)|+|S_{2a,a,j-1}(0,1,1,0)|+|S_{2a,a,j+1}(0,0,1,1)|}.\] Using the fact that $j$ is even, it suffices to prove that \begin{equation*}
|S_{2a,a,j}(0,0,1,1)|+|S_{2a,a,j}(0,1,1,0)|+\binom{a}{j}=|S_{2a,a,j-1}(0,1,1,0)|+|S_{2a,a,j+1}(0,0,1,1)|. \end{equation*} In terms of the sets defined above, we equivalently must show that \begin{equation*}
|C|+|D|+\binom{a}{j} = |A|+|B| \end{equation*}
Since $|D| = |B|$, it suffices to show that $|C| = 0$, and $|A| = \binom{a}{j}$.
We first show that $C=\varnothing$. Assume for a contradiction that there exists some $(2a,W)\in C$. Let $H_a = C_{a+b}[\{v_1,\dots,v_a\}]$, and let $H_b = C_{a+b}[\{v_{a+1}, \dots, v_{2a}\}]$. Let $W_a = W \cap V(H_a)$ and let $W_b = W \cap V(H_b)$. Suppose $H_a \setminus W_a$ has $\ell$ components. \begin{itemize}
\item If $(2a,W)\in S_{2a,a,j}(0,1,1,0)$, then since $v_1 \notin W$ and $v_a \in W$, there are also $\ell$ components of $H_a[W_a]$.
Since $(2a,W) \in C$, by symmetry there are $\ell$ components of $H_b \setminus W_b$.
Now since $v_1,v_{2a}\notin W$ and $v_a,v_{a+1}\in W$, we have that $j=\ell+\ell-1 = 2\ell-1$, which contradicts that $j$ is even.
\item If $(2a,W)\in S_{2a,a,j}(0,0,1,1)$, then since $v_1,v_a \notin W$, there are $\ell-1$ components of $H_a[W_a]$. Since $(2a,W) \in C$, by symmetry there are $\ell-1$ components of $H_b \setminus W_b$.
Now since $v_{a+1}\in W$ and $v_{2a}\in W$, we have $j=\ell+(\ell-1)=2\ell-1$, which contradicts that $j$ is even. \end{itemize} This proves that $C = \varnothing$.
We then show that $|A|=\binom{a}{j}$. Consider the function $\psi:A\to\{S\subseteq \{1,\dots,a\}:|S|=j\}$ where $\psi(2a,W)=\{i_1,\ldots,i_j\}$ is defined as follows: \begin{itemize}
\item $i_1$ is the largest index such that $v_1,\ldots,v_{i_1}\notin W$ (note that necessarily $i_1 \leq a$).
\item For $k \in \{2,\dots,j\}$, if $k$ is even, $i_k$ is the largest index such that $v_{i_{k-1}+1},\ldots,v_{i_k}\in W$; if $k$ is odd, $i_k$ is the largest index such that $v_{i_{k-1}+1},\ldots,v_{i_k}\notin W$. \end{itemize}
As described, $\psi(2a,W)$ simply produces a set of $j$ positive integers; we claim that indeed $i_j\leq a$. We first suppose $(2a,W)\in S_{2a,a,j-1}(0,1,1,0)$. Using the notation above, suppose there are $\ell$ components of $H_a \setminus W_a$. Then since $v_1 \notin W$ and $v_a \in W$, there are also $\ell$ components of $H_a[W_a]$, and so by the symmetry of $A$, there are also $\ell$ components of $H_b \setminus W_b$. Since $v_a, v_{a+1} \in W$ but $v_1, v_{2a} \notin W$, there are $2\ell-1$ components of $C_{a+b} \setminus W$. This means that $2\ell-1=j-1$, or $\ell=j/2$. Thus there are $j/2$ components in each of $H_a[W_a]$ and $H_a \setminus W_a$, so we may verify that $i_j=a$.
Now suppose $(2a,W)\in S_{2a,a,j+1}(0,0,1,1)$, and suppose there are $\ell$ components of $H_a \setminus W_a$. Then as $v_1,v_a \notin W$, there are $\ell-1$ components of $H_a[W_a]$, so by symmetry also $\ell-1$ components of $H_b \setminus W_b$. Since $v_{a+1},v_{2a} \in W$, there are $2\ell-1$ components of $C_{a+b} \setminus W$, so $2\ell-1=j+1$, or $\ell=j/2+1$, and it follows that $i_j<a$. Hence, the range of $\psi$ is correctly given.
Note that when $(2a,W)\in S_{2a,a,j+1}(0,0,1,1)$, we further know that $v_{i_j+1},\ldots,v_a\notin W$.
We claim that $\psi$ is a bijection. To show injectivity, suppose $(2a,W),(2a,W')\in A$ are such that $\psi(2a,W)=\psi(2a,W')$. Then by construction and our observations above, we have that $W\cap\{v_1,\ldots,v_a\}=W'\cap\{v_1,\ldots,v_a\}$. By the symmetry of $A$, this implies $W=W'$.
To show surjectivity, suppose we are given $S=\{i_1,\ldots,i_j\}$ such that $1\leq i_1<\cdots<i_j\leq a$. Let \begin{align*}
W = &\,\, \{v_{i_1+1},\ldots,v_{i_2}\}\cup\cdots\cup\{v_{i_{j-1}+1},\ldots,v_{i_j}\} \\
&\cup\{v_{a+1},\ldots,v_{a+i_1}\}\cup\cdots\cup\{v_{a+i_{j-2}+1},\ldots,v_{a+i_{j-1}}\}\cup\{v_{a+i_j+1},\ldots,v_{2a}\}. \end{align*}
Then it is easy to check that $(2a,W)\in A$ and $\psi(2a,W)=S$. This proves that $\psi$ is a bijection and $|A|=\binom{a}{j}$, and this finishes the proof of this case.
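As a numerical sanity check of Case 1.1, the identity $|C|+|D|+\binom{a}{j}=|A|+|B|$ (together with the intermediate claims $|C|=0$ and $|A|=\binom{a}{j}$) can be verified exhaustively for small $a$ and even $j$. The sketch below is hypothetical and not part of the proof; it reconstructs $S_{n,\mu,j}(\epsilon_1,\epsilon_2,\epsilon_3,\epsilon_4)$ as the $\mu$-subsets $W$ of $V(C_n)$ with $j$ components in $C_n\setminus W$ and the given membership pattern of $(v_1,v_a,v_{a+1},v_{a+b})$, and uses the pairing condition from the definitions of $A'$ and $C'$ (with $b=a$).

```python
from itertools import combinations
from math import comb

def components(n, W):
    """Connected components of the cycle C_n (vertices 1..n) minus W."""
    if not W:
        return 1
    return sum(1 for v in range(1, n + 1)
               if v not in W and (n if v == 1 else v - 1) in W)

def S(n, a, mu, j, flags):
    """Assumed S_{n,mu,j}(flags); flags is the pattern of
    (v_1, v_a, v_{a+1}, v_n) membership in W."""
    pos = (1, a, a + 1, n)
    return [set(W) for W in combinations(range(1, n + 1), mu)
            if components(n, set(W)) == j
            and tuple(int(v in W) for v in pos) == flags]

def paired(a, b, W):
    """Exactly one of v_i and v_{i+b} lies in W, for all i in {1,...,a}."""
    return all((i in W) != (i + b in W) for i in range(1, a + 1))

def case_1_1(a, j):
    """For mu = a = b and even j, return
    (|C| + |D| + binom(a, j), |A| + |B|, |C|, |A|)."""
    n = 2 * a
    AB = S(n, a, a, j - 1, (0, 1, 1, 0)) + S(n, a, a, j + 1, (0, 0, 1, 1))
    CD = S(n, a, a, j, (0, 1, 1, 0)) + S(n, a, a, j, (0, 0, 1, 1))
    A = [W for W in AB if paired(a, a, W)]
    C = [W for W in CD if paired(a, a, W)]
    return len(CD) + comb(a, j), len(AB), len(C), len(A)
```

For $a=2, j=2$ and $a=3, j=2$ the two sides agree (both equal $\binom{a}{j}$ here, since $C$ and $D$ turn out empty), matching the proof's claims $C=\varnothing$ and $|A|=\binom{a}{j}$.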
\subsubsection*{Case 1.2. $\mu=a<b$.}
Since we assume $j\geq 1$, we have $|S_{a,a,j}|=0$. It then suffices to prove that \begin{align*}
&\quad |S_{a+b,a,j+1}(1,1,0,0)|+|S_{a+b,a,j-1}(1,0,0,1)|+|S_{a+b,a,j-1}(0,1,1,0)|+|S_{a+b,a,j+1}(0,0,1,1)| \\
&= |S_{a+b,a,j}(1,1,0,0)|+|S_{a+b,a,j}(1,0,0,1)|+|S_{a+b,a,j}(0,1,1,0)|+|S_{a+b,a,j}(0,0,1,1)|+\binom{a}{j} \end{align*} In terms of the previously described sets, we equivalently need to show that \begin{equation*}
|A|+|B|+|A'|+|B'| = |C|+|D|+|C'|+|D'|+\binom{a}{j} \end{equation*}
Since $|D| = |B|$ and $|D'| = |B'|$, it is enough to show that $|A| = |C| = |C'| = 0$, and $|A'| = \binom{a}{j}$.
We first claim that $A=\varnothing$. Assume for a contradiction that $A\neq\varnothing$ and let $(a+b,W)\in A$. Since for all $i\in\{1,\dots,a\}$, exactly one of $v_i,v_{i+b}$ is in $W$ and $|W| = a$, it must be the case that $v_{a+1},\ldots,v_b\notin W$, contradicting that $v_{a+1}\in W$ for all $(a+b,W) \in A$.
We then claim that $C=\varnothing$ and $C'=\varnothing$. Assume for a contradiction that there exists some $(a+b,W)\in C\cup C'$. Then as before, we have $v_{a+1},\ldots,v_b\notin W$. We retain the previous notation for $H_a$ and $W_a$, but we now define $H_{b+1} = C_{a+b}[\{v_{b+1},\dots,v_{b+a}\}]$ and $W_{b+1} = W \cap V(H_{b+1})$. Suppose that there are $\ell$ components of $H_a \setminus W_a$. We proceed in a similar manner as in the previous subcase. \begin{itemize}
\item If $(a+b,W)\in S_{a+b,a,j}(1,1,0,0)$, then we may verify that there are $\ell+1$ components of $H_{b+1} \setminus W_{b+1}$. Furthermore, since $v_1 \in W$, $v_{b+1}\notin W$ by construction, so one component of $H_{b+1} \setminus W_{b+1}$ is joined with $\{v_{a+1}, \dots, v_b\}$ as a component in $C_{a+b} \setminus W$. Thus the number of components of $C_{a+b} \setminus W$ is $j=\ell+\ell+1=2\ell+1$, which is odd, a contradiction.
\item If $(a+b,W)\in S_{a+b,a,j}(1,0,0,1)$, then we may verify that there are $\ell$ components of $H_{b+1} \setminus W_{b+1}$. As above, $v_{b+1}\notin W$. Since also $v_a \notin W$, one component from each of $H_a \setminus W_a$ and $H_{b+1} \setminus W_{b+1}$ is joined via $\{v_{a+1},\dots,v_b\}$, so it follows that $j=\ell+\ell-1=2\ell-1$, which is again a contradiction to $j$ being even.
\item If $(a+b,W)\in S_{a+b,a,j}(0,1,1,0)\cup S_{a+b,a,j}(0,0,1,1)$, then $v_{a+1}\in W$, which is a contradiction. \end{itemize} This proves that $C = C' = \varnothing$.
Finally, we claim that $|A'|=\binom{a}{j}$. Consider the function $\psi:A'\to\{S\subseteq\{1,\dots,a\}:|S|=j\}$ where $\psi(a+b,W)=\{i_1,\ldots,i_j\}$ is defined as follows: \begin{itemize}
\item $i_1$ is the largest index such that $v_1,\ldots,v_{i_1}\in W$.
\item For $k \in \{2,\dots,j\}$, if $k$ is even, $i_k$ is the largest index such that $v_{i_{k-1}+1},\ldots,v_{i_k}\notin W$; if $k$ is odd, $i_k$ is the largest index such that $v_{i_{k-1}+1},\ldots,v_{i_k}\in W$. \end{itemize} Using an argument exactly analogous to that of Case 1.1 and the fact that $v_{a+1},\ldots,v_b\notin W$ shows that this is a bijection, and finishes the proof of this subcase.
\subsubsection*{Case 1.3. $a<b=\mu$.}
Since by assumption $j\geq 1$, we have $|S_{a,b,j}|=|S_{b,b,j}|=0$. It then suffices to prove that \begin{align*}
&\quad |S_{a+b,b,j+1}(1,1,0,0)|+|S_{a+b,b,j-1}(1,0,0,1)|+|S_{a+b,b,j-1}(0,1,1,0)|+|S_{a+b,b,j+1}(0,0,1,1)| \\
&= |S_{a+b,b,j}(1,1,0,0)|+|S_{a+b,b,j}(1,0,0,1)|+|S_{a+b,b,j}(0,1,1,0)|+|S_{a+b,b,j}(0,0,1,1)|+\binom{a}{j} \end{align*} As before, this is equivalent to the claim that \begin{equation*}
|A|+|B|+|A'|+|B'| = |C|+|D|+|C'|+|D'|+\binom{a}{j} \end{equation*}
Since $|D| = |B|$ and $|D'| = |B'|$, it is enough to show that $|A'| = |C| = |C'| = 0$, and $|A| = \binom{a}{j}$.
We first claim that $A'=\varnothing$. Assume for a contradiction that $A'\neq \varnothing$ and let $(a+b,W)\in A'$. Since for all $i\in\{1,\dots,a\}$, exactly one of $v_i,v_{i+b}$ is in $W$, and $|W| = b$, it must be the case that $v_{a+1},\ldots,v_b\in W$, contradicting that $v_{a+1}\notin W$ for all $(a+b,W) \in A'$.
We then claim that $C=\varnothing$ and $C'=\varnothing$. The proof is exactly analogous to the corresponding proof in Case 1.2. We assume there are $\ell$ components of $H_a \setminus W_a$. \begin{itemize}
\item If $(a+b,W)\in S_{a+b,b,j}(1,1,0,0)\cup S_{a+b,b,j}(1,0,0,1)$, then $v_{a+1}\notin W$ is a contradiction.
\item If $(a+b,W)\in S_{a+b,b,j}(0,1,1,0)$, then we may verify that $j=2\ell-1$, which is a contradiction to the fact that $j$ is even.
\item If $(a+b,W)\in S_{a+b,b,j}(0,0,1,1)$, then we may again verify that $j=2\ell-1$, which is a contradiction. \end{itemize}
Finally, we claim that $|A|=\binom{a}{j}$, and we will use a bijection exactly analogous to that of the first two cases.
First note that for all $(a+b,W)\in A$, we must have $v_{a+1},\ldots,v_b\in W$. Consider the function $\psi:A\to\{S\subseteq\{1,\dots,a\}:|S|=j\}$ where $\psi(a+b,W)=\{i_1,\ldots,i_j\}$ is defined as follows: \begin{itemize}
\item $i_1$ is the largest index such that $v_1,\ldots,v_{i_1}\notin W$.
\item For $k \in \{2,\dots,j\}$, if $k$ is even, $i_k$ is the largest index such that $v_{i_{k-1}+1},\ldots,v_{i_k}\in W$; if $k$ is odd, $i_k$ is the largest index such that $v_{i_{k-1}+1},\ldots,v_{i_k}\notin W$. \end{itemize} Then using an argument analogous to that presented in Case 1.1 and the fact that $v_{a+1},\ldots,v_b\in W$ for all $(a+b,W)\in A$, we can show that $\psi$ is indeed a bijection, and this finishes the proof of this subcase.
\subsubsection*{Case 1.4. $\mu<a<b$ or $a<b<\mu$.}
In this case, we are required to prove that \begin{align*}
&\quad |S_{a+b,\mu,j+1}(1,1,0,0)|+|S_{a+b,\mu,j-1}(1,0,0,1)|+|S_{a+b,\mu,j-1}(0,1,1,0)|+|S_{a+b,\mu,j+1}(0,0,1,1)| \\
&= |S_{a+b,\mu,j}(1,1,0,0)|+|S_{a+b,\mu,j}(1,0,0,1)|+|S_{a+b,\mu,j}(0,1,1,0)|+|S_{a+b,\mu,j}(0,0,1,1)| \end{align*} or equivalently that \begin{equation*}
|A|+|B|+|A'|+|B'| = |C|+|D|+|C'|+|D'| \end{equation*}
But this follows from $|B| = |D|$, $|B'| = |D'|$, and $A=C=A'=C'=\varnothing$, since if $(a+b,W)\in A\cup C\cup A'\cup C'$, then we must have $a\leq|W|\leq b$, which is a contradiction.
\subsection*{Case 2. $j$ is odd.}
\subsubsection*{Case 2.1. $\mu=a=b$.}
As in Case 1.1, except using that $j$ is odd, it suffices to show that
\[|S_{2a,a,j}(0,0,1,1)|+|S_{2a,a,j}(0,1,1,0)|=|S_{2a,a,j-1}(0,1,1,0)|+|S_{2a,a,j+1}(0,0,1,1)|+\binom{a}{j}.\] Equivalently, we would like to show that \begin{equation*}
|C|+|D| = |A|+|B|+\binom{a}{j} \end{equation*}
Since $|D| = |B|$, it suffices to show that $|A| = 0$ and $|C| = \binom{a}{j}$.
We first claim that $A=\varnothing$. Assume for a contradiction that there exists some $(2a,W)\in A$. Using the notation as in Case 1.1, suppose that $H_a \setminus W_a$ has $\ell$ components. \begin{itemize}
\item If $(2a,W)\in S_{2a,a,j-1}(0,1,1,0)$, then as in previous cases we may verify that there are also $\ell$ components of $H_b \setminus W_b$. Since $v_a,v_{a+1} \in W$ and $v_1,v_{2a} \notin W$, it follows that $j-1=\ell+\ell-1 = 2\ell-1$, which contradicts that $j$ is odd.
\item If $(2a,W)\in S_{2a,a,j+1}(0,0,1,1)$, then there are $\ell-1$ components of $H_b \setminus W_b$. Since $v_{a+1} \in W$ and $v_{2a} \in W$ it follows that $j+1=\ell+(\ell-1) = 2\ell-1$, which also contradicts that $j$ is odd. \end{itemize} This proves that $A=\varnothing$.
We then claim that $|C|=\binom{a}{j}$, and we use the same style of bijection as in Case 1.1. Consider the function $\psi:C\to\{S\subseteq\{1,\dots,a\}:|S|=j\}$ where $\psi(2a,W)=\{i_1,\ldots,i_j\}$ is defined as follows: \begin{itemize}
\item $i_1$ is the largest index such that $v_1,\ldots,v_{i_1}\notin W$.
\item For $k \in \{2,\dots,j\}$, if $k$ is even, $i_k$ is the largest index such that $v_{i_{k-1}+1},\ldots,v_{i_k}\in W$; if $k$ is odd, $i_k$ is the largest index such that $v_{i_{k-1}+1},\ldots,v_{i_k}\notin W$. \end{itemize} Using the same argument as in Case 1.1, suppose that $H_a \setminus W_a$ has $\ell$ components. If $(2a,W)\in S_{2a,a,j}(0,1,1,0)$, then there are also $\ell$ components in $H_b \setminus W_b$, so $j=\ell+\ell-1$, and $\ell=\frac{j+1}{2}$, from which it follows that $i_j<a$ since $v_a \in W$. Furthermore, in this case $v_{i_j+1},\ldots,v_a\in W$.
If $(2a,W)\in S_{2a,a,j}(0,0,1,1)$, then there are $\ell-1$ components of $H_b \setminus W_b$, and it follows that $j=2\ell-1$, or $\ell=\frac{j+1}{2}$, and it is easy then to check that $i_j=a$ since $v_a \notin W$ and $v_{a+1} \in W$. Thus, we have verified that the range of $\psi$ is correct.
As in Case 1.1, it is straightforward to verify the injectivity and surjectivity of $\psi$. Therefore, $\psi$ is a bijection, and $|C|=\binom{a}{j}$, which completes the proof of this subcase.
\subsubsection*{Case 2.2. $\mu=a<b$.}
Similar to Case 1.2, it suffices to prove that \begin{align*}
&\quad |S_{a+b,a,j+1}(1,1,0,0)|+|S_{a+b,a,j-1}(1,0,0,1)|+|S_{a+b,a,j-1}(0,1,1,0)|+|S_{a+b,a,j+1}(0,0,1,1)|+\binom{a}{j} \\
&= |S_{a+b,a,j}(1,1,0,0)|+|S_{a+b,a,j}(1,0,0,1)|+|S_{a+b,a,j}(0,1,1,0)|+|S_{a+b,a,j}(0,0,1,1)|. \end{align*} which is equivalent to proving \begin{equation*}
|A|+|B|+|A'|+|B'|+\binom{a}{j} = |C|+|D|+|C'|+|D'| \end{equation*}
Since $|D| = |B|$ and $|D'| = |B'|$, it is sufficient to prove that $|A| = |A'| = |C| = 0$ and $|C'| = \binom{a}{j}$.
As in Case 1.2, we show that $C=\varnothing$. Suppose, on the contrary, that there exists $(a+b,W)\in C$. Then $v_{a+1},\ldots,v_b\notin W$ since $|W|=a$ and for each $i \in \{1,\dots,a\}$, exactly one of $v_i$ and $v_{i+b}$ is in $W$. But this contradicts the fact that $v_{a+1}\in W$ for each $(a+b,W) \in C$.
We then claim that $A=\varnothing$ and $A'=\varnothing$. Assume for a contradiction that there exists some $(a+b,W)\in A\cup A'$. Then $v_{a+1},\ldots,v_b\notin W$ since $|W|=a$. Using notation as in Case 1.2, suppose there are $\ell$ components in $H_a \setminus W_a$. \begin{itemize}
\item If $(a+b,W)\in S_{a+b,a,j-1}(0,1,1,0)\cup S_{a+b,a,j+1}(0,0,1,1)$, then $v_{a+1}\in W$, a contradiction.
\item If $(a+b,W)\in S_{a+b,a,j-1}(1,1,0,0)$, then we may verify that $j-1=2\ell+1$, contradicting that $j$ is odd.
\item If $(a+b,W)\in S_{a+b,a,j+1}(1,0,0,1)$, then we may verify that $j+1=2\ell-1$, which again contradicts that $j$ is odd. \end{itemize} Thus, $A=\varnothing$ and $A'=\varnothing$.
Finally, we claim that $|C'|=\binom{a}{j}$. Consider the function $\psi:C'\to\{S\subseteq\{1,\dots,a\}:|S|=j\}$ where $\psi(a+b,W)=\{i_1,\ldots,i_j\}$ is defined as follows: \begin{itemize}
\item $i_1$ is the largest index such that $v_1,\ldots,v_{i_1}\in W$.
\item For $k \in \{2,\dots,j\}$, if $k$ is even, $i_k$ is the largest index such that $v_{i_{k-1}+1},\ldots,v_{i_k}\notin W$; if $k$ is odd, $i_k$ is the largest index such that $v_{i_{k-1}+1},\ldots,v_{i_k}\in W$. \end{itemize} Using the same reasoning as in previous cases, it is straightforward to verify that $\psi$ is a bijection, completing the proof of this subcase.
\subsubsection*{Case 2.3. $a<b=\mu$.}
Similar to Case 1.3, it suffices to prove that \begin{align*}
&\quad |S_{a+b,b,j+1}(1,1,0,0)|+|S_{a+b,b,j-1}(1,0,0,1)|+|S_{a+b,b,j-1}(0,1,1,0)|+|S_{a+b,b,j+1}(0,0,1,1)| +\binom{a}{j} \\
&= |S_{a+b,b,j}(1,1,0,0)|+|S_{a+b,b,j}(1,0,0,1)|+|S_{a+b,b,j}(0,1,1,0)|+|S_{a+b,b,j}(0,0,1,1)|. \end{align*} or equivalently that \begin{equation*}
|A|+|B|+|A'|+|B'|+\binom{a}{j} = |C|+|D|+|C'|+|D'| \end{equation*}
Since $|D| = |B|$ and $|D'| = |B'|$, it is enough to show that $|A'| = |A| = |C'| = 0$, and $|C| = \binom{a}{j}$.
Note that $C'=\varnothing$, since if on the contrary $(a+b,W)\in C'$, then since exactly one of $v_i$ and $v_{i+b}$ is in $W$ for each $i \in \{1,\dots,a\}$, we must have $v_{a+1},\ldots,v_b\in W$ to have $|W|=b$, contradicting that $v_{a+1}\notin W$ for each $(a+b,W) \in C'$.
We then claim that $A=\varnothing$ and $A'=\varnothing$. Assuming for a contradiction that there exists some $(a+b,W)\in A\cup A'$, then as above $v_{a+1},\ldots,v_b\in W$.
Using notation as in previous cases, suppose that $H_a \setminus W_a$ has $\ell$ components. \begin{itemize}
\item If $(a+b,W)\in S_{a+b,b,j-1}(0,1,1,0)$, then there are $\ell$ components of $H_{b+1} \setminus W_{b+1}$, and it follows that $j-1=2\ell-1$, which contradicts that $j$ is odd.
\item If $(a+b,W)\in S_{a+b,b,j+1}(0,0,1,1)$, then there are $\ell-1$ components of $H_{b+1} \setminus W_{b+1}$. This means $j+1=2\ell-1$, which again contradicts the fact that $j$ is odd.
\item If $(a+b,W)\in S_{a+b,b,j-1}(1,1,0,0)\cup S_{a+b,b,j+1}(1,0,0,1)$, then $v_{a+1}\notin W$, a contradiction. \end{itemize} This proves that $A=\varnothing$ and $A'=\varnothing$.
Finally, we show that $|C|=\binom{a}{j}$ by considering the function $\psi:C\to\{S\subseteq\{1,\dots,a\}:|S|=j\}$ where $\psi(a+b,W)=\{i_1,\ldots,i_j\}$ is defined as follows: \begin{itemize}
\item $i_1$ is the largest index such that $v_1,\ldots,v_{i_1}\notin W$.
\item For $k \in \{2,\dots,j\}$, if $k$ is even, $i_k$ is the largest index such that $v_{i_{k-1}+1},\ldots,v_{i_k}\in W$; if $k$ is odd, $i_k$ is the largest index such that $v_{i_{k-1}+1},\ldots,v_{i_k}\notin W$. \end{itemize} Using the same argument as in previous cases, we can show that $\psi$ is a bijection, and this completes the proof of this subcase.
\subsubsection*{Case 2.4. $\mu<a<b$ or $a<b<\mu$.}
The proof is exactly the same as that of Case 1.4.
This concludes the examination of all possible subcases, so we are done.
\end{proof}
With this auxiliary lemma in hand, we can demonstrate that Conjecture \ref{conjecture} holds on graphs with two vertices and one edge.
\begin{theorem} \label{one-edge-thm} Let $(G,\omega)$ be a set-weighted graph such that $V(G)=\{v_1,v_2\}$ and $E(G)=\{v_1v_2\}$, with $\omega(v_1) = a \leq b = \omega(v_2)$. Let $\mu$ be a positive integer such that $\mu \leq a$ or $b \leq \mu \leq a+b$. Fix $j \in \{0, \dots, a+b-\mu\}$, where $j=0$ is possible if and only if $\mu = a+b$. Then \begin{equation} \label{one-edge-theorem-equation} \sigma_{\mu,j}\bp{X_{(G,\omega)}}=(-1)^{a+b-2}\sum_{\substack{\swts(\gamma,S)=(\mu,j) \\ S\text{ admissible}}}\sgn(\gamma,S), \end{equation} summed over both (acyclic) orientations $\gamma$ of $G$ and all $\gamma$-admissible generalized $2$-step weight maps $S$ of $G$ such that $\swts(\gamma,S)=(\mu,j)$.
In particular, Conjecture \ref{conjecture} holds for graphs with two vertices and one edge between them. \end{theorem}
\begin{proof}
Let $e=v_1v_2$. We first evaluate the left-hand side of \eqref{one-edge-theorem-equation}. Using the deletion-contraction relation (Lemma \ref{lem:delconset}), we have \[\sigma_{\mu,j}\bp{X_{(G,\omega)}}=\sigma_{\mu,j}\bp{X_{(G\setminus e,\omega)}}-\sigma_{\mu,j}\bp{X_{(G/e,\omega/e)}}.\] Since $G\setminus e$ and $G/e$ both have no edges, we can apply Theorem \ref{no-edge-thm} and get \begin{align*}
\sigma_{\mu,j}\bp{X_{(G,\omega)}}
&=(-1)^{a+b}\sum_{\substack{\swts(G\setminus e,\gamma,S)=(\mu,j) \\ S\text{ admissible}}}\sgn(G\setminus e,\gamma,S)-(-1)^{a+b-1}\sum_{\substack{\swts(G/e,\gamma,S)=(\mu,j) \\ S\text{ admissible}}}\sgn(G/e,\gamma,S) \\
&= (-1)^{a+b}\cdot(-1)^{\mu-2}\sum_{k=1}^{\mu-1}\sum_{\ell=0}^{j}|S_{a,k,\ell}|\cdot|S_{b,\mu-k,j-\ell}|-(-1)^{a+b-1}\cdot(-1)^{\mu-1}|S_{a+b,\mu,j}| \\
&= (-1)^{a+b+\mu}\lrp{-|S_{a+b,\mu,j}|+\sum_{k=1}^{\mu-1}\sum_{\ell=0}^{j}|S_{a,k,\ell}|\cdot|S_{b,\mu-k,j-\ell}|}. \end{align*} Applying Lemma \ref{lem:one-edge}, we thus have \[ \sigma_{\mu,j}\bp{X_{(G,\omega)}} = \begin{cases}
(-1)^{a+b+\mu}(-1)^{j}\cdot 2\binom{a}{j} & \text{ if }\mu=a=b \\
(-1)^{a+b+\mu}(-1)^{j}\binom{a}{j}-(-1)^{a+b+\mu}|S_{b,\mu,j}| & \text{ if }a<b\text{ and }\mu\in\{a,b\} \\
-(-1)^{a+b+\mu}|S_{a,\mu,j}|-(-1)^{a+b+\mu}|S_{b,\mu,j}| & \text{ if }\mu<a\text{ or }\mu>b \end{cases} \]
On the other hand, expanding the right-hand side of \eqref{one-edge-theorem-equation}, we note that there are two acyclic orientations for $G$, one with $v_1\to v_2$ and the other with $v_2\to v_1$. It is straightforward to verify by casework that \eqref{one-edge-theorem-equation} evaluates to \[ (-1)^{a+b} \sum_{\substack{\swts(G,\gamma,S)=(\mu,j) \\ S\text{ admissible}}}\sgn(G,\gamma,S)= \begin{cases}
(-1)^{a+b}(-1)^{(\mu-1)+(j-1)}\cdot 2\binom{a}{j} & \text{ if }\mu=a=b \\
(-1)^{a+b}\bp{(-1)^{(\mu-1)+(j-1)}\binom{a}{j}+(-1)^{\mu-1}|S_{b,\mu,j}|} & \text{ if }a<b\text{ and }\mu\in\{a,b\} \\
(-1)^{a+b}\bp{(-1)^{\mu-1}|S_{a,\mu,j}|+(-1)^{\mu-1}|S_{b,\mu,j}|} & \text{ if }\mu<a\text{ or }\mu>b \end{cases} \] Note that for the middle case, we need to apply parts (3) and (4) of Definition \ref{def:drop} and count the $j$ second-level sinks from those of the vertex of lower weight. This concludes the proof. \end{proof}
\section{Concluding Remarks}
The introduction of Conjecture \ref{conjecture} is something of a break from current trends in the research of $e$-basis expansions of chromatic symmetric functions. As far as the authors are aware, the conjectured weight-drop phenomenon was previously unknown, and proving the conjecture and generalizing it to more general $\mu$ would provide a novel method for attacking the Stanley-Stembridge conjecture. In particular, in light of the discussion in Section 4.2 (and as noted in the introduction), such a generalized conjecture could admit every integer partition $\mu$ as $s$-allowable in unweighted claw-free graphs (or at least a sufficiently large subset of them), and so could potentially give any individual $e$-basis coefficient for such graphs.

\section{Acknowledgments}
The authors would like to thank Sophie Spirkl for helpful discussions and for the simple proof of Lemma \ref{lem:clawfree}.
\end{document}
\begin{document}
\title {Plethora of cluster structures on $GL_n$}
\author{M. Gekhtman}
\address{Department of Mathematics, University of Notre Dame, Notre Dame, IN 46556} \email{[email protected]}
\author{M. Shapiro} \address{Department of Mathematics, Michigan State University, East Lansing, MI 48823} \email{[email protected]}
\author{A. Vainshtein} \address{Department of Mathematics \& Department of Computer Science, University of Haifa, Haifa, Mount Carmel 31905, Israel} \email{[email protected]}
\begin{abstract} We continue the study of multiple cluster structures in the rings of regular functions on $GL_n$, $SL_n$ and $\operatorname{Mat}_n$ that are compatible with Poisson--Lie and Poisson-homogeneous structures. According to our initial conjecture, each class in the Belavin--Drinfeld classification of Poisson--Lie structures on a semisimple complex group ${\mathcal G}$ corresponds to a cluster structure in ${\mathcal O}({\mathcal G})$. Here we prove this conjecture for a large subset of Belavin--Drinfeld (BD) data of $A_n$ type, which includes all the previously known examples. Namely, we subdivide all possible $A_n$ type BD data into oriented and non-oriented kinds. In the oriented case, we single out BD data satisfying a certain combinatorial condition that we call aperiodicity and prove that for any BD data of this kind there exists a regular cluster structure compatible with the corresponding Poisson--Lie bracket. In fact, we extend the aperiodicity condition to pairs of oriented BD data and prove a more general result that establishes the existence of a regular cluster structure on $SL_n$ compatible with a Poisson bracket homogeneous with respect to the right and left action of two copies of $SL_n$ equipped with two different Poisson--Lie brackets. If the aperiodicity condition is not satisfied, a compatible cluster structure has to be replaced with a generalized cluster structure. We will address this situation in future publications. \end{abstract}
\subjclass[2010]{53D17,13F60} \keywords{Poisson--Lie group, cluster algebra, Belavin--Drinfeld triple}
\maketitle
\tableofcontents
\section{Introduction}
In this paper we continue the systematic study of multiple cluster structures in the rings of regular functions on $GL_n$, $SL_n$ and $\operatorname{Mat}_n$ started in \cite{GSVM, GSVPNAS, GSVMem}. It follows an approach developed and implemented in \cite{GSV1, GSV2, GSVb} for constructing cluster structures on algebraic varieties.
Recall that given a complex algebraic Poisson variety $\left({\mathcal M}, {\{\cdot,\cdot\}}\right)$, a compatible cluster structure ${\mathcal C}_{\mathcal M}$ on ${\mathcal M}$ is a collection of coordinate charts (called clusters) comprised of regular functions with simple birational transition maps between charts (called cluster transformations, see \cite{FZ1}) such that the logarithms of any two functions in the same chart have a constant Poisson bracket. Once found, any such chart can be used as a starting point, and our construction allows us to restore the whole ${\mathcal C}_{\mathcal M}$, provided the arising birational maps preserve regularity. Algebraic structures corresponding to ${\mathcal C}_{\mathcal M}$ (the cluster algebra and the upper cluster algebra) are closely related to the ring ${\mathcal O}({\mathcal M})$ of regular functions on $\mathcal{M}$. In fact, under certain rather mild conditions, ${\mathcal O}({\mathcal M})$ can be obtained by tensoring the upper cluster algebra with ${\mathbb C}$, see \cite{GSVb}.
This construction was applied in \cite[Ch.~4.3]{GSVb} to double Bruhat cells in semisimple Lie groups equipped with (the restriction of) the {\em standard\/} Poisson--Lie structure. It was shown that the resulting cluster structure coincides with the one built in \cite{BFZ}. The standard Poisson--Lie structure is a particular case of Poisson--Lie structures corresponding to quasi-triangular Lie bialgebras. Such structures are associated with solutions to the classical Yang--Baxter equation. Their complete classification was obtained by Belavin and Drinfeld in \cite{BD}. Solutions are parametrized by data consisting of a continuous and a discrete component. The latter, called the Belavin--Drinfeld triple, is defined in terms of the root system of the Lie algebra of the corresponding semisimple Lie group. In \cite{GSVM} we conjectured that any such solution gives rise to a compatible cluster structure on this Lie group. This conjecture was verified in \cite{Eis} for $SL_5$ and proved in \cite{Eis1, Eis2} for the simplest non-trivial Belavin--Drinfeld triple in $SL_n$ and in \cite{GSVMem} for the Cremmer--Gervais case.
In this paper we extend these results to a wide class of Belavin--Drinfeld triples in $SL_n$. We define a subclass of {\em oriented\/} triples, see Section \ref{sec:combdata}, and encode the corresponding information in a combinatorial object called a Belavin--Drinfeld graph. Our main result claims that the conjecture of \cite{GSVM} holds true whenever the corresponding Belavin--Drinfeld graph is acyclic. In this case the structure of the Belavin--Drinfeld graph is mirrored in the explicit construction of the initial cluster. In fact, we have proved a stronger result: given two oriented Belavin--Drinfeld triples in $SL_n$ we define the graph of the pair, and if this graph possesses a certain acyclicity property then the Poisson bracket defined by the pair (note that it is not Poisson--Lie anymore) gives rise to a compatible cluster structure on $SL_n$.
If the Belavin--Drinfeld graph has cycles then the conjecture of \cite{GSVM} needs to be modified: one has to consider generalized cluster structures instead of the ordinary ones. We will address Belavin--Drinfeld graphs with cycles in a separate publication.
In \cite{GY}, Goodearl and Yakimov developed a uniform approach for constructing cluster algebra structures in symmetric Poisson nilpotent algebras using sequences of Poisson-prime elements in chains of Poisson unique factorization domains. These results apply to a large class of Poisson varieties, e.g., Schubert cells in Kac--Moody groups viewed as Poisson subvarieties with respect to the standard Poisson--Lie bracket. It is worth pointing out, however, that the approach of \cite{GY}, in its current form, does not seem to be applicable to the situation we consider here. This is evident from the fact that for cluster structures constructed in \cite{GY}, the cluster algebra and the corresponding upper cluster algebra always coincide. In contrast, as we have shown in \cite{GSVPNAS}, the simplest non-trivial Belavin--Drinfeld data in $SL_3$ results in a strict inclusion of the cluster algebra into the upper cluster algebra.
The paper is organized as follows. Section \ref{sec:prelim} contains a concise description of necessary definitions and results on cluster algebras and Poisson--Lie groups. Section~\ref{sec:mainres} presents the main constructions and results. The Belavin--Drinfeld graph and related combinatorial data are defined in Section~\ref{sec:combdata}. The same section contains the formulations of the main Theorems~\ref{mainth} and~\ref{genmainth}. An explicit construction of the initial cluster is contained in Section~\ref{thebasis} and summarized in Theorem~\ref{logcanbasis}. Section~\ref{sec:basis} is dedicated to the proof of this theorem. The quiver that, together with the initial cluster, defines the compatible cluster structure is built in Section \ref{thequiver}, see Theorem~\ref{quiver}, whose proof is contained in Section~\ref{sec:quiver}. Section \ref{outline} outlines the proof of the main Theorems~\ref{mainth} and~\ref{genmainth}. It contains, inter alia, Theorem~\ref{prototype}, which enables us to implement the induction step in the proof of an isomorphism between the constructed upper cluster algebra and the ring of regular functions on $\operatorname{Mat}_n$. A detailed constructive proof of this isomorphism is the subject of Section~\ref{sec:induction}. Section~\ref{sec:regtor} is devoted to showing that the cluster structures we constructed are regular and admit a global toric action.
Our research was supported in part by the NSF research grants DMS \#1362801 and DMS \#1702054 (M.~G.), NSF research grants DMS \#1362352 and DMS-1702115 (M.~S.), and ISF grants \#162/12 and \#1144/16 (A.~V.). While working on this project, we benefited from support of the following institutions and programs: Universit\'e Claude Bernard Lyon 1 (M.~S., Spring 2016), University of Notre Dame (A.~V., Spring 2016), Research in Pairs Program at the Mathematisches Forschungsinstitut Oberwolfach (M.~G., M.~S., A.~V., Summer 2016), Max Planck Institute for Mathematics, Bonn (M.~G.~and A.~V., Fall 2016), Bernoulli Brainstorm Program at EPFL, Lausanne (M.~G.~and A.~V., Summer 2017), Research in Paris Program at the Institut Henri Poincar\'e (M.~G., M.~S., A.~V., Fall 2017), Institut des Hautes \'Etudes Scientifiques (M.~G.~and A.~V., Fall 2017), Mathematical Institute of the University of Heidelberg (M.~G., Spring 2017 and Summer 2018), Michigan State University (A.~V., Fall 2018). This paper was finished during the joint visit of the authors to the University of Notre Dame Jerusalem Global Gateway and the University of Haifa in December 2018. We are grateful to all these institutions for their hospitality and the outstanding working conditions they provided. Special thanks are due to Salvatore Stella, who pointed out a mistake in the original proof of Theorem \ref{logcanbasis}, and to Gus Schrader, Alexander Shapiro and Milen Yakimov for valuable discussions.
\section{Preliminaries} \label{sec:prelim}
\subsection{Cluster structures of geometric type and compatible Poisson brackets} \label{sec:cluster} Let ${\mathcal F}$ be the field of rational functions in $N+M$ independent variables with rational coefficients. There are $M$ distinguished variables; they are denoted $x_{N+1},\dots,x_{N+M}$ and called {\em frozen\/}, or {\em stable\/}. The $(N+M)$-tuple ${\bf x}=(x_1,\dots,\allowbreak x_{N+M})$ is called a {\em cluster\/}, and its elements $x_1,\dots,x_N$ are called {\em cluster variables\/}. The {\it quiver\/} $Q$ is a directed multigraph on the vertices $1,\dots,N+M$ corresponding to all variables; the vertices corresponding to frozen variables are called frozen. An edge going from a vertex $i$ to a vertex $j$ is denoted $i\to j$. The pair $\Sigma=({\bf x},Q)$ is called a {\em seed}.
Given a seed as above, the {\em adjacent cluster\/} in direction $k$, $1\le k\le N$, is defined by ${\bf x}'=({\bf x}\setminus\{x_k\})\cup\{x'_k\}$, where the new cluster variable $x'_k$ is given by the {\em exchange relation} \begin{equation*} x_kx'_k=\prod_{k\to i}x_i+\prod_{i\to k}x_i. \end{equation*}
The {\em quiver mutation\/} of $Q$ in direction $k$ is given by the following three steps: (i) for every pair of vertices $i,j$, $e(i,j)$ edges $i\to j$ are added, where $e(i,j)$ is the number of two-edge paths $i\to k\to j$ in $Q$; (ii) every edge $j\to i$ (if it exists) annihilates with an edge $i\to j$; (iii) all edges $i\to k$ and all edges $k \to i$ are reversed. The resulting quiver is denoted $Q'=\mu_k(Q)$. It is sometimes convenient to represent the quiver by an $N\times(N+M)$ integer matrix $B=B(Q)$ called the {\it exchange matrix}, where $b_{ij}$ is the number of arrows $i\to j$ in $Q$. Note that the principal part of $B$ is skew-symmetric (recall that the principal part of a rectangular matrix is its maximal leading square submatrix).
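As a small worked illustration of the exchange relation and the mutation rules (a toy seed of our own choosing, not taken from the constructions below), take $N=2$, $M=1$, the cluster $(x_1,x_2,x_3)$ with $x_3$ frozen, and the quiver $Q{:\ } 1\to 2\to 3$, and mutate in direction $1$:

```latex
% Exchange relation: there are no arrows into vertex 1, so the second
% product is empty (equal to 1):
\[
  x_1x_1' \;=\; \prod_{1\to i}x_i+\prod_{i\to 1}x_i \;=\; x_2+1,
  \qquad\text{so}\qquad x_1'=\frac{x_2+1}{x_1}.
\]
% Quiver mutation: Q has no two-edge path through vertex 1, so steps (i)
% and (ii) change nothing, and step (iii) reverses the edge 1 -> 2:
\[
  \mu_1(Q):\quad 2\to 1,\qquad 2\to 3.
\]
```

The corresponding exchange matrix mutates accordingly: the entry $b_{12}$ changes sign.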
Given a seed $\Sigma=({\bf x},Q)$, we say that a seed $\Sigma'=({\bf x}',Q')$ is {\em adjacent\/} to $\Sigma$ (in direction $k$) if ${\bf x}'$ is adjacent to ${\bf x}$ in direction $k$ and $Q'=\mu_k(Q)$. Two seeds are {\em mutation equivalent\/} if they can be connected by a sequence of pairwise adjacent seeds. The set of all seeds mutation equivalent to $\Sigma$ is called the {\it cluster structure\/} (of geometric type) in ${\mathcal F}$ associated with $\Sigma$ and denoted by ${\mathcal C}(\Sigma)$; in what follows, we usually write just ${\mathcal C}$ instead.
Let ${\mathbb A}$ be a {\em ground ring\/} satisfying the condition \[ {\mathbb Z}[x_{N+1},\dots,x_{N+M}]\subseteq{\mathbb A}\subseteq{\mathbb Z}[x_{N+1}^{\pm1},\dots,x_{N+M}^{\pm1}] \] (we write $x^{\pm1}$ instead of $x,x^{-1}$). Following \cite{FZ1, BFZ}, we associate with ${\mathcal C}$ two algebras of rank $N$ over ${\mathbb A}$: the {\em cluster algebra\/} ${\mathcal A}={\mathcal A}({\mathcal C})$, which is the ${\mathbb A}$-subalgebra of ${\mathcal F}$ generated by all cluster variables in all seeds in ${\mathcal C}$, and the {\it upper cluster algebra\/} $\overline{\A}=\overline{\A}({\mathcal C})$, which is the intersection of the rings of Laurent polynomials over ${\mathbb A}$ in cluster variables taken over all seeds in ${\mathcal C}$. The famous {\it Laurent phenomenon\/} \cite{FZ2} claims the inclusion ${\mathcal A}({\mathcal C})\subseteq\overline{\A}({\mathcal C})$. Note that originally upper cluster algebras were defined over the ring of Laurent polynomials in frozen variables. In \cite{GSVD} we proved that upper cluster algebras over subrings of this ring retain all properties of usual upper cluster algebras. In what follows we assume that the ground ring is the polynomial ring in frozen variables, unless explicitly stated otherwise.
Let $V$ be a quasi-affine variety over ${\mathbb C}$, ${\mathbb C}(V)$ be the field of rational functions on $V$, and ${\mathcal O}(V)$ be the ring of regular functions on $V$. Let ${\mathcal C}$ be a cluster structure in ${\mathcal F}$ as above. Assume that $\{f_1,\dots,f_{N+M}\}$ is a transcendence basis of ${\mathbb C}(V)$. Then the map $\varphi: x_i\mapsto f_i$, $1\le i\le N+M$, can be extended to a field isomorphism $\varphi: {\mathcal F}_{\mathbb C}\to {\mathbb C}(V)$, where ${\mathcal F}_{\mathbb C}={\mathcal F}\otimes{\mathbb C}$ is obtained from ${\mathcal F}$ by extension of scalars. The pair $({\mathcal C},\varphi)$ is called a cluster structure {\it in\/} ${\mathbb C}(V)$ (or just a cluster structure {\it on\/} $V$), and $\{f_1,\dots,f_{N+M}\}$ is called a cluster in
$({\mathcal C},\varphi)$. Occasionally, we omit direct indication of $\varphi$ and say that ${\mathcal C}$ is a cluster structure on $V$. A cluster structure $({\mathcal C},\varphi)$ is called {\it regular\/} if $\varphi(x)$ is a regular function for any cluster variable $x$. The two algebras defined above have their counterparts in ${\mathcal F}_{\mathbb C}$ obtained by extension of scalars; they are denoted ${\mathcal A}_{\mathbb C}$ and $\overline{\A}_{\mathbb C}$. If, moreover, the field isomorphism $\varphi$ can be restricted to an isomorphism of ${\mathcal A}_{\mathbb C}$ (or $\overline{\A}_{\mathbb C}$) and ${\mathcal O}(V)$, we say that ${\mathcal A}_{\mathbb C}$ (or $\overline{\A}_{\mathbb C}$) is {\it naturally isomorphic\/} to ${\mathcal O}(V)$.
Let ${\{\cdot,\cdot\}}$ be a Poisson bracket on the ambient field ${\mathcal F}$, and ${\mathcal C}$ be a cluster structure in ${\mathcal F}$. We say that the bracket and the cluster structure are {\em compatible\/} if, for any cluster ${\bf x}=(x_1,\dots,x_{N+M})$, one has $\{x_i,x_j\}=\omega_{ij} x_ix_j$, where $\omega_{ij}\in{\mathbb Q}$ are constants for all $1\le i,j\le N+M$. The matrix $\Omega^{{\bf x}}=(\omega_{ij})$ is called the {\it coefficient matrix\/} of ${\{\cdot,\cdot\}}$ (in the basis ${\bf x}$); clearly, $\Omega^{{\bf x}}$ is skew-symmetric. The notion of compatibility extends to Poisson brackets on ${\mathcal F}_{\mathbb C}$ without any changes.
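To see the definition at work, here is a sanity check of our own (toy data, not from the text): take $N=2$, the quiver $1\to 2$, and the bracket $\{x_1,x_2\}=x_1x_2$. The exchange relation in direction $1$ gives $x_1'=(x_2+1)/x_1$, and log-canonicity survives the mutation:

```latex
% Leibniz rule plus {x_2,x_2}=0 and {c,x_2}=0 for constants c:
\[
\{x_1',x_2\}
 =\Bigl\{\tfrac{x_2+1}{x_1},\,x_2\Bigr\}
 =-(x_2+1)\,\frac{\{x_1,x_2\}}{x_1^2}
 =-\frac{x_2(x_2+1)}{x_1}
 =-\,x_1'x_2,
\]
% so the bracket is again of the form {x_i',x_j'} = omega'_{ij} x_i'x_j'
% in the adjacent cluster, with omega'_{12} = -1.
```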
Fix an arbitrary cluster ${\bf x}=(x_1,\dots,x_{N+M})$ and define a {\it local toric action\/} of rank $s$ at ${\bf x}$ as a map \begin{equation} {\bf x}\mapsto \left ( x_i \prod_{\alpha=1}^s q_\alpha^{w_{i\alpha}}\right )_{i=1}^{N+M},\qquad {\bf q}=(q_1,\dots,q_s)\in ({\mathbb C}^*)^s, \label{toricact} \end{equation} where $W=(w_{i\alpha})$ is an integer $(N+M)\times s$ {\it weight matrix\/} of full rank. Let ${\bf x}'$ be another cluster in ${\mathcal C}$, then the corresponding local toric action defined by the weight matrix $W'$ is {\it compatible\/} with the local toric action \eqref{toricact} if it commutes with the sequence of cluster transformations that takes ${\bf x}$ to ${\bf x}'$.
If local toric actions at all clusters are compatible, they define a {\it global toric action\/} on ${\mathcal C}$ called the ${\mathcal C}$-extension of the local toric action \eqref{toricact}.
\subsection{Poisson--Lie groups} A reductive complex Lie group ${\mathcal G}$ equipped with a Poisson bracket ${\{\cdot,\cdot\}}$ is called a {\em Poisson--Lie group\/} if the multiplication map ${\mathcal G}\times {\mathcal G} \ni (X,Y) \mapsto XY \in {\mathcal G}$ is Poisson. Perhaps the most important class of Poisson--Lie groups is the one associated with quasitriangular Lie bialgebras defined in terms of {\em classical R-matrices\/} (see, e.~g., \cite[Ch.~1]{CP}, \cite{r-sts} and \cite{Ya} for a detailed exposition of these structures).
Let $\mathfrak g$ be the Lie algebra corresponding to ${\mathcal G}$ and $\langle \cdot,\cdot\rangle$ be an invariant nondegenerate form on $\mathfrak g$. A classical R-matrix is an element $r\in \mathfrak g\otimes\mathfrak g$ that satisfies the {\em classical Yang--Baxter equation} ({\it CYBE\/}). The Poisson--Lie bracket on ${\mathcal G}$ that corresponds to $r$ can be written as \begin{equation}\label{sklyabra} \begin{aligned} \{f^1,f^2\}_r &= \langle R_+(\nabla^L f^1), \nabla^L f^2 \rangle - \langle R_+(\nabla^R f^1), \nabla^R f^2 \rangle\\ &= \langle R_-(\nabla^L f^1), \nabla^L f^2 \rangle - \langle R_-(\nabla^R f^1), \nabla^R f^2 \rangle, \end{aligned} \end{equation} where $R_+,R_- \in \operatorname{End} \mathfrak g$ are given by $\langle R_+ \eta, \zeta\rangle = \langle r, \eta\otimes\zeta \rangle$, $-\langle R_- \zeta, \eta\rangle = \langle r, \eta\otimes\zeta \rangle$ for any $\eta,\zeta\in \mathfrak g$ and $\nabla^L$, $\nabla^R$ are the left and the right gradients of functions on ${\mathcal G}$ with respect to $\langle \cdot,\cdot\rangle$ defined by \begin{equation*}
\left\langle \nabla^R f(X),\xi\right\rangle=\left.\frac d{dt}\right|_{t=0}f(Xe^{t\xi}), \quad
\left\langle \nabla^L f(X),\xi\right\rangle=\left.\frac d{dt}\right|_{t=0}f(e^{t\xi}X) \end{equation*} for any $\xi\in\mathfrak g$, $X\in{\mathcal G}$.
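For concreteness, when ${\mathcal G}$ is a matrix group and $\langle\xi,\eta\rangle=\operatorname{tr}(\xi\eta)$, these gradients admit a familiar matrix form; the following first-order expansion is a standard computation (our own normalization, not spelled out in the text):

```latex
% Write grad f for the matrix with entries (grad f)_{ij} = df/dx_{ji}
% (the transposed matrix of partial derivatives).  Expanding
% f(Xe^{t\xi}) = f(X + tX\xi + O(t^2)) and f(e^{t\xi}X) to first order
% in t and matching against tr(M\xi) yields
\[
  \nabla^R f(X) = (\operatorname{grad} f)\,X,
  \qquad
  \nabla^L f(X) = X\,(\operatorname{grad} f).
\]
```

In particular, both gradients are obtained from the same matrix of partial derivatives, multiplied by $X$ on opposite sides.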
Following \cite{r-sts}, let us recall the construction of the {\em Drinfeld double}. First, note that CYBE implies that \begin{equation}\label{g_pm} \mathfrak g_+={\operatorname {Im}}(R_+),\qquad \mathfrak g_-={\operatorname {Im}}(R_-) \end{equation} are subalgebras in $\mathfrak g$. The double of $\mathfrak g$ is $D(\mathfrak g)=\mathfrak g \oplus \mathfrak g$ equipped with an invariant nondegenerate bilinear form $$ \langle\langle (\xi,\eta), (\xi',\eta')\rangle\rangle = \langle \xi, \xi'\rangle - \langle \eta, \eta'\rangle. $$ Define subalgebras ${\mathfrak d}_\pm$ of $D(\mathfrak g)$ by \begin{equation}\label{ddeco} {\mathfrak d}_+=\{( \xi,\xi){:\ } \xi \in\mathfrak g\}, \quad {\mathfrak d}_-=\{ (R_+(\xi),R_-(\xi)){:\ } \xi \in\mathfrak g\}, \end{equation} then ${\mathfrak d}_\pm$ are isotropic subalgebras of $D(\mathfrak g)$ and $D(\mathfrak g)= {\mathfrak d}_+ \dot + {\mathfrak d}_-$. In other words, $(D(\mathfrak g), {\mathfrak d}_+, {\mathfrak d}_-)$ is {\em a Manin triple}. Then the operator $R_D= \pi_{{\mathfrak d}_+} - \pi_{{\mathfrak d}_-}$ can be used to define a Poisson--Lie structure on $D({\mathcal G})={\mathcal G}\times {\mathcal G}$, the double of the group ${\mathcal G}$, via \begin{equation} \{f^1,f^2\}^D_r = \frac{1}{2}\left (\langle\langle R_D({\raisebox{2pt}{$\bigtriangledown$}}\negthinspace^L f^1), {\raisebox{2pt}{$\bigtriangledown$}}\negthinspace{^L} f^2 \rangle\rangle - \langle\langle R_D({\raisebox{2pt}{$\bigtriangledown$}}\negthinspace^R f^1), {\raisebox{2pt}{$\bigtriangledown$}}\negthinspace^R f^2 \rangle\rangle \right), \label{sklyadouble} \end{equation} where ${\raisebox{2pt}{$\bigtriangledown$}}\negthinspace^R$ and ${\raisebox{2pt}{$\bigtriangledown$}}\negthinspace^L$ are right and left gradients with respect to $\langle\langle \cdot ,\cdot \rangle\rangle$. 
Restriction of this bracket to ${\mathcal G}$ identified with the diagonal subgroup of $D({\mathcal G})$ (whose Lie algebra is ${\mathfrak d}_+$) coincides with the Poisson--Lie bracket ${\{\cdot,\cdot\}}_r$ on ${\mathcal G}$. Let ${\mathcal D}_-$ be the subgroup of $D({\mathcal G})$ that corresponds to ${\mathfrak d}_-$. Double cosets of ${\mathcal D}_-$ in $D({\mathcal G})$ play an important role in the description of symplectic leaves in Poisson--Lie groups ${\mathcal G}$ and $D({\mathcal G})$, see \cite{Ya}.
The classification of classical R-matrices for simple complex Lie groups was given by Belavin and Drinfeld in \cite{BD}. Let ${\mathcal G}$ be a simple complex Lie group, $\Phi$ be the root system associated with its Lie algebra $\mathfrak g$, $\Phi^+$ be the set of positive roots, and $\Pi\subset \Phi^+$ be the set of positive simple roots. A {\em Belavin--Drinfeld triple} $\mathbf \Gamma=(\Gamma_1,\Gamma_2, \gamma)$ (in what follows, a {\em BD triple\/}) consists of two subsets $\Gamma_1,\Gamma_2$ of $\Pi$ and an isometry $\gamma{:\ }\Gamma_1\to\Gamma_2$ nilpotent in the following sense: for every $\alpha \in \Gamma_1$ there exists $m\in\mathbb{N}$ such that $\gamma^j(\alpha)\in \Gamma_1$ for $j\in [0,m-1]$, but $\gamma^m(\alpha)\notin \Gamma_1$.
The isometry $\gamma$ yields an isomorphism, also denoted by $\gamma$, between Lie subalgebras $\mathfrak g_{\Gamma_1}$ and $\mathfrak g_{\Gamma_2}$ that correspond to $\Gamma_1$ and $\Gamma_2$. It is uniquely defined by the property $\gamma e_\alpha = e_{\gamma(\alpha)}$ for $\alpha\in \Gamma_1$, where $e_\alpha$ is the Chevalley generator corresponding to the root $\alpha$. The isomorphism $\gamma^*{:\ } \mathfrak g_{\Gamma_2} \to \mathfrak g_{\Gamma_1}$ is defined as the adjoint to $\gamma$ with respect to the form $\langle \cdot,\cdot\rangle$. It is given by $\gamma^* e_{\gamma(\alpha)}=e_{\alpha}$ for $\gamma(\alpha)\in \Gamma_2$.
Both $\gamma$ and $\gamma^*$ can be extended to maps of $\mathfrak g$ to itself by applying first the orthogonal projection on $\mathfrak g_{\Gamma_1}$ (respectively, on $\mathfrak g_{\Gamma_2}$) with respect to $\langle \cdot,\cdot\rangle$; clearly, the extended maps remain adjoint to each other. Note that the restrictions of $\gamma$ and $\gamma^*$ to the positive and the negative nilpotent subalgebras $\mathfrak n_+$ and $\mathfrak n_-$ of $\mathfrak g$ are Lie algebra homomorphisms of $\mathfrak n_+$ and $\mathfrak n_-$ to themselves, and $\gamma(e_{\pm\alpha})=0$ for all $\alpha\in\Pi\setminus\Gamma_1$.
By the classification theorem, each classical R-matrix is equivalent to an R-matrix from a {\it Belavin--Drinfeld class\/} defined by a BD triple $\mathbf \Gamma$. Following \cite{ESS}, we write down an expression for the members of this class: \begin{equation} \label{r-matrix} r = \frac 1 2 \Omega_\mathfrak h + s + \sum_{\alpha} e_{-\alpha}\otimes e_\alpha + \sum_{\alpha} e_{-\alpha}\wedge \frac \gamma {1-\gamma} e_\alpha; \end{equation} here the summation is over the set of all positive roots, $\Omega_\mathfrak h \in \mathfrak h \otimes \mathfrak h$ is given by $\Omega_\mathfrak h= \sum h_\alpha \otimes \hat{h}_\alpha$ where $\{h_\alpha\}$ is the standard basis of the Cartan subalgebra $\mathfrak h$, $\{\hat h_\alpha\}$ is the dual basis with respect to the restriction of $\langle \cdot,\cdot\rangle$ to $\mathfrak h$, and $s \in \mathfrak h \wedge \mathfrak h$ satisfies \begin{equation} \label{s-eq} \left ( (1-\gamma) \alpha \otimes \mathbf 1 \right ) (2 s) = \left ( (1+\gamma) \alpha \otimes \mathbf 1 \right ) \Omega_\mathfrak h \end{equation} for any $\alpha \in \Gamma_1$. Solutions to \eqref{s-eq} form a linear space of dimension $\frac{k_{\mathbf \Gamma}(k_{\mathbf \Gamma}-1)}2$
with $k_{\mathbf \Gamma}=|\Pi\setminus\Gamma_1|$. More precisely, define \begin{equation}\label{smalltorus} \mathfrak h_\mathbf \Gamma=\{ h\in\mathfrak h \ : \ \alpha(h)=\beta(h)\ \mbox{if}\ \gamma^j(\alpha)=\beta \quad\text{for some $j$}\}, \end{equation} then $\operatorname{dim}\mathfrak h_\mathbf \Gamma=k_\mathbf \Gamma$, and if $s'$ is a fixed solution of \eqref{s-eq}, then every other solution has a form $s=s' + s_0$, where $s_0$ is an arbitrary element of $\mathfrak h_\mathbf \Gamma\wedge\mathfrak h_\mathbf \Gamma$. The subalgebra $\mathfrak h_\mathbf \Gamma$ defines a torus $\mathcal H_\mathbf \Gamma=\exp \mathfrak h_\mathbf \Gamma$ in ${\mathcal G}$.
Let $\pi_{>}$, $\pi_{<}$ be projections of $\mathfrak g$ onto $\mathfrak n_+$ and $\mathfrak n_-$, $\pi_\mathfrak h$ be the projection onto $\mathfrak h$. It follows from \eqref{r-matrix} that $R_+$ in \eqref{sklyabra} is given by \begin{equation} \label{RplusSL} R_+=\frac1{1-\gamma}\pi_{>}-\frac{\gamma^*}{1-\gamma^*}\pi_{<}+ \left (\frac 1 2 + S\right )\pi_\mathfrak h, \end{equation} where $S \in \operatorname{End} \mathfrak h$ is skew-symmetric with respect to the restriction of $\langle \cdot,\cdot\rangle$ to $\mathfrak h$ and satisfies $\langle S h, h' \rangle = \langle s, h \otimes h'\rangle$ for any $h,h' \in \mathfrak h$ and conditions \begin{equation} \label{S-eq}
S (1-\gamma) h_\alpha = \frac 1 2(1+\gamma) h_\alpha \end{equation} for any $\alpha \in \Gamma_1$, translated from \eqref{s-eq}.
For an R-matrix given by \eqref{r-matrix}, subalgebras $\mathfrak g_\pm$ from \eqref{g_pm} are contained in parabolic subalgebras ${\mathfrak p}_\pm$ of $\mathfrak g$ determined by the BD triple: ${\mathfrak p}_+$ contains $\mathfrak b_+$ and all the negative root spaces in $\mathfrak g_{\Gamma_1}$, while ${\mathfrak p}_-$ contains $\mathfrak b_-$ and all the positive root spaces in $\mathfrak g_{\Gamma_2}$. Then one has \begin{equation} \label{parabolics} {\mathfrak p}_+ = \mathfrak g_+ \oplus \mathfrak h_+, \qquad {\mathfrak p}_- = \mathfrak g_- \oplus \mathfrak h_- \end{equation} with $\mathfrak h_\pm \subset \mathfrak h$. An explicit description of subalgebras $\mathfrak h_\pm$ can be found, e.g., in \cite[Sect.~3.1]{Ya}. Let ${\mathfrak l}_\pm$ denote the Levi component of ${\mathfrak p}_\pm $. Then ${\mathfrak l}_+=\mathfrak g_{\Gamma_1}$, ${\mathfrak l}_-=\mathfrak g_{\Gamma_2}$, and the Lie algebra isomorphism $\gamma$ described above restricts to ${\mathfrak l}_+ \cap \mathfrak g_+ \to {\mathfrak l}_- \cap \mathfrak g_-$. This allows us to describe the subalgebra ${\mathfrak d}_-$ as \begin{multline}\label{d_-} {\mathfrak d}_-=\{ (\xi_+,\xi_-){:\ } \xi_\pm \in\mathfrak g_\pm,\ \gamma(\pi_{{\mathfrak l}_+ \cap \mathfrak g_+} \xi_+)= \pi_{{\mathfrak l}_- \cap \mathfrak g_-}\xi_- \}\\ \subset \{ (\xi_+,\xi_-){:\ } \xi_\pm \in\mathfrak g_\pm,\ \gamma(\pi_{{\mathfrak l}_+} \xi_+)= \pi_{{\mathfrak l}_-}\xi_- \}, \end{multline} where $\pi_\cdot$ are the projections to the corresponding subalgebras.
In what follows we will use a Poisson bracket on ${\mathcal G}$ that is a generalization of the bracket \eqref{sklyabra}. Let $r, r'$ be two classical R-matrices, and $R_+, R'_+$ be the corresponding operators, then we write \begin{equation} \{f^1,f^2\}_{r,r'} = \langle R_+(\nabla^L f^1), \nabla^L f^2 \rangle - \langle R'_+(\nabla^R f^1), \nabla^R f^2 \rangle. \label{sklyabragen} \end{equation} By \cite[Proposition 12.11]{r-sts}, the above expression defines a Poisson bracket, which is not Poisson--Lie unless $r=r'$, in which case $\{f^1,f^2\}_{r,r}$ evidently coincides with $\{f^1,f^2\}_{r}$. The bracket \eqref{sklyabragen} defines a Poisson homogeneous structure on ${\mathcal G}$ with respect to the left and right multiplication by Poisson--Lie groups $({\mathcal G},{\{\cdot,\cdot\}}_r)$ and $({\mathcal G},{\{\cdot,\cdot\}}_{r'})$, respectively. The bracket on the Drinfeld double that corresponds to $\{f^1,f^2\}_{r,r'}$ is defined similarly to \eqref{sklyadouble} via \begin{equation} \{f^1,f^2\}^D_{r,r'} = \frac{1}{2}\left (\langle\langle R_D({\raisebox{2pt}{$\bigtriangledown$}}\negthinspace^L f^1), {\raisebox{2pt}{$\bigtriangledown$}}\negthinspace{^L} f^2 \rangle\rangle - \langle\langle R'_D({\raisebox{2pt}{$\bigtriangledown$}}\negthinspace^R f^1), {\raisebox{2pt}{$\bigtriangledown$}}\negthinspace^R f^2 \rangle\rangle \right). \label{sklyadoublegen} \end{equation}
\section{Main results and the outline of the proof}\label{sec:mainres}
\subsection{Combinatorial data and main results} \label{sec:combdata} In this paper, we only deal with $\mathfrak g = \mathfrak {sl}_n$, and hence $\Gamma_1$ and $\Gamma_2$ can be identified with subsets of $[1,n-1]$. We assume that $\mathbf \Gamma$ is {\it oriented\/}, that is, $i,i+1\in\Gamma_1$ implies $\gamma(i+1)=\gamma(i)+1$.
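For instance (a toy example of our own, with $n=5$ and $\Gamma_1=\{1,2\}$), the orientedness condition distinguishes between the two possible isometries onto $\Gamma_2=\{3,4\}$:

```latex
% Both maps below are isometries of the A_4 root system restricted to
% {1,2}; only the first respects consecutive roots, i.e., satisfies
% gamma(i+1) = gamma(i)+1 whenever i, i+1 are both in Gamma_1:
\[
  \gamma:\ 1\mapsto 3,\ 2\mapsto 4\quad\text{(oriented)};
  \qquad
  \gamma:\ 1\mapsto 4,\ 2\mapsto 3\quad\text{(not oriented).}
\]
```

Both maps are nilpotent in the sense above, since $\gamma(\alpha)\notin\Gamma_1$ for every $\alpha\in\Gamma_1$.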
For any $i\in [1,n]$ put \[ i_+=\min\{j\in [1,n]\setminus\Gamma_1{:\ } \ j\ge i\}, \qquad i_-=\max\{j\in [0,n]\setminus\Gamma_1{:\ } \ j<i\}. \] The interval $\Delta(i)=[i_-+1,i_+]$ is called the {\it $X$-run\/} of $i$. Clearly, all distinct $X$-runs form a partition of $[1,n]$. The $X$-runs are numbered consecutively from left to right. For example, let $n=7$ and $\Gamma_1=\{1,2,4\}$, then there are four $X$-runs: $\Delta_1=[1,3]$, $\Delta_2=[4,5]$, $\Delta_3=[6,6]$ and $\Delta_4=[7,7]$. Clearly, $\Delta(2)=\Delta_1$, $\Delta(4)=\Delta_2$, etc.
In a similar way, $\Gamma_2$ defines another partition of $[1,n]$ into $Y$-runs $\bar\Delta(i)$. For example, let in the above example $\Gamma_2=\{1,3,4\}$, then $\bar\Delta_1=[1,2]$, $\bar\Delta_2=[3,5]$, $\bar\Delta_3=[6,6]$ and $\bar\Delta_4=[7,7]$.
Runs of length one are called trivial. The map $\gamma$ induces a bijection on the sets of nontrivial $X$-runs and $Y$-runs: we say that $\bar\Delta_i=\gamma(\Delta_j)$ if there exists $k\in\Delta_j$ such that $\bar\Delta(\gamma(k))=\bar\Delta_i$. The inverse of the bijection $\gamma$ is denoted $\gamma^*$ (the reasons for this notation will become clear later). Let in the previous example $\gamma(1)=3, \gamma(2)=4, \gamma(4)=1$, then $\bar\Delta_1=\gamma(\Delta_2)$ and $\bar\Delta_2=\gamma(\Delta_1)$.
The {\it BD graph\/} $G_\mathbf \Gamma$ is defined as follows. The vertices of $G_\mathbf \Gamma$ are two copies of the set of positive simple roots identified with $[1,n-1]$. One of the sets is called the {\it upper\/} part of the graph, and the other is called the {\it lower\/} part. A vertex $i\in\Gamma_1$ is connected with an {\it inclined\/} edge to the vertex $\gamma(i)\in\Gamma_2$. Finally, vertices $i$ and $n-i$ in the same part are connected with a {\it horizontal\/} edge. If $n=2k$ and $i=n-i=k$, the corresponding horizontal edge is a loop. The BD graph for the above example is shown in~Fig.~\ref{fig:BDgraph} on the left. In the same figure on the right one finds the BD graph for the case of $SL_6$ with $\Gamma_1=\{1,3,4\}$, $\Gamma_2=\{2,4,5\}$ and $\gamma{:\ } i\mapsto i+1$.
\begin{figure}
\caption{BD graphs for aperiodic BD triples}
\label{fig:BDgraph}
\end{figure}
Clearly, there are four possible types of connected components in $G_\mathbf \Gamma$: a path, a path with a loop, a path with two loops, and a cycle. We say that a BD triple $\mathbf \Gamma$ is {\em aperiodic\/} if each component in $G_\mathbf \Gamma$ is either a path or a path with a loop, and {\em periodic\/} otherwise. In what follows we assume that $\mathbf \Gamma$ is aperiodic. The case of periodic BD triples will be addressed in a separate paper.
\begin{remark} \label{milen} Let $w_0$ be the longest permutation in $S_n$. Observe that horizontal edges in both rows of the BD graph can be seen as a depiction of the action of $\left ( -w_0\right)$ on the set of positive simple roots of $SL_n$. Thus the BD graph can be used to analyze the properties of the map $w_0 \gamma w_0 \gamma^{-1}$. A map of this kind, with the pair $(w_0, w_0)$ replaced by a pair of elements of the Weyl group satisfying certain properties dictated by the BD triple in an arbitrary reductive Lie group, was defined in \cite[Sect.~5.1.1]{Ya} and utilized in the description of symplectic leaves of the corresponding Poisson--Lie structure. \end{remark}
The main result of this paper states that the conjecture formulated in \cite{GSVM} holds for oriented aperiodic BD triples in $SL_n$. Namely,
\begin{theorem} \label{mainth} For any oriented aperiodic Belavin--Drinfeld triple $\mathbf \Gamma=(\Gamma_1,\Gamma_2,\gamma)$ there exists a cluster structure ${\mathcal C}_\mathbf \Gamma$ on $SL_n$ such that
{\rm (i)} the number of frozen variables is $2k_\mathbf \Gamma$, and the corresponding exchange matrix has full rank;
{\rm (ii)} ${\mathcal C}_\mathbf \Gamma$ is regular, and the corresponding upper cluster algebra $\overline{\A}_{\mathbb C}({\mathcal C}_\mathbf \Gamma)$ is naturally isomorphic to ${\mathcal O}(SL_n)$;
{\rm (iii)} the global toric action of $(\mathbb{C}^*)^{2k_\mathbf \Gamma}$ on ${\mathcal C}_\mathbf \Gamma$ is generated by the action of $\mathcal H_\mathbf \Gamma\times \mathcal H_\mathbf \Gamma$ on $SL_n$ given by $(H_1, H_2)(X) = H_1 X H_2$;
{\rm (iv)} for any solution of CYBE that belongs to the Belavin--Drinfeld class specified by $\mathbf \Gamma$, the corresponding Sklyanin bracket is compatible with ${\mathcal C}_\mathbf \Gamma$;
{\rm (v)} a Poisson--Lie bracket on $SL_n$ is compatible with ${\mathcal C}_\mathbf \Gamma$ only if it is a scalar multiple of the Sklyanin bracket associated with a solution of CYBE that belongs to the Belavin--Drinfeld class specified by $\mathbf \Gamma$. \end{theorem}
This result was established previously for the Cremmer--Gervais case (given by $\gamma: i\mapsto i+1$ for $1\le i\le n-2$) in \cite{GSVMem} and for all cases when $k_\mathbf \Gamma=n-2$ in \cite{Eis1, Eis2}.
In fact, the construction above is a particular case of a more general construction. Let $r^{\rm r}$ and $r^{\rm c}$ be two classical R-matrices that correspond to BD triples $\bfG^{{\rm r}}=(\Gamma_1^{\rm r},\Gamma_2^{\rm r}, {\gamma^\er})$ and $\bfG^{{\rm c}}=(\Gamma_1^{\rm c},\Gamma_2^{\rm c}, {\gamma^\ec})$, which we call the {\em row} and the {\em column} BD triples, respectively.
Assume that both $\bfG^{{\rm r}}$ and $\bfG^{{\rm c}}$ are oriented.
Similarly to the BD graph $G_{\mathbf \Gamma}$ for $\mathbf \Gamma$, one can define a graph $G_{\bfG^{{\rm r}}, \bfG^{{\rm c}}}$ for the pair $(\bfG^{{\rm r}}, \bfG^{{\rm c}})$ as follows. Take $G_{\bfG^{{\rm r}}}$ with all inclined edges directed downwards and $G_{\bfG^{{\rm c}}}$ in which all inclined edges are directed upwards. Superimpose these graphs by identifying the corresponding vertices. In the resulting graph, for every pair of vertices $i, n -i$ in either top or bottom row there are two edges joining them. We give these edges opposite orientations. If $n$ is even, then we retain only one loop at each of the two vertices labeled $\frac{n}{2}$. The result is a directed graph $G_{\bfG^{{\rm r}}, \bfG^{{\rm c}}}$ on $2(n-1)$ vertices. For example, consider the case of $GL_5$ with $\bfG^{{\rm r}}=\left(\{1,2\}, \{2,3\}, 1\mapsto 2, 2\mapsto 3\right)$ and $\bfG^{{\rm c}}=\left(\{1,2\}, \{3,4\}, 1\mapsto3, 2\mapsto4\right)$. The corresponding graph $G_{\bfG^{{\rm r}}, \bfG^{{\rm c}}}$ is shown on the left in Fig.~\ref{fig:altpaths}. For horizontal edges, no direction is indicated, which means that they can be traversed in both directions. The graph shown in Fig.~\ref{fig:altpaths} on the right corresponds to the case of $GL_8$ with $\bfG^{{\rm r}}=\left(\{2,6\}, \{3,7\}, 2\mapsto 3, 6\mapsto7 \right)$ and $\bfG^{{\rm c}}=\left(\{2,6\}, \{1,5\}, 6\mapsto1, 2\mapsto5\right)$.
A directed path in $G_{\bfG^{{\rm r}}, \bfG^{{\rm c}}}$ is called {\em alternating\/} if horizontal and inclined edges in the path alternate. In particular, an edge is a (trivial) alternating path. An alternating path with coinciding endpoints and an even number of edges is called an {\em alternating cycle}. Similarly to the decomposition of $G_{\mathbf \Gamma}$ into connected components, we can decompose the edge set of $G_{\bfG^{{\rm r}}, \bfG^{{\rm c}}}$ into a disjoint union of maximal alternating paths and alternating cycles. If the resulting collection contains no alternating cycles, we call the pair $(\bfG^{{\rm r}}, \bfG^{{\rm c}})$ {\em aperiodic\/}; clearly, $(\mathbf \Gamma,\mathbf \Gamma)$ is aperiodic if and only if $\mathbf \Gamma$ is aperiodic. For the graph on the left in Fig.~\ref{fig:altpaths}, the corresponding maximal paths are $41\bar 2\bar 3 14$, $32\bar 3\bar 2$, $\bar1\bar4 23$, and $\bar4\bar1$ (here vertices in the lower part are marked with a dash for better visualization). None of them is an alternating cycle, so the corresponding pair is aperiodic.
For the graph on the right in Fig.~\ref{fig:altpaths}, the path $62\bar3\bar5 26\bar7\bar1 6$ is an alternating cycle; the edges $\bar1\bar7$ and $\bar5\bar3$ are trivial alternating paths.
\begin{figure}\label{fig:altpaths}
\end{figure}
The following result generalizes Theorem \ref{mainth}.
\begin{theorem} \label{genmainth} For any aperiodic pair of oriented Belavin--Drinfeld triples $(\bfG^{{\rm r}}, \bfG^{{\rm c}})$ there exists a cluster structure ${\mathcal C}_{\bfG^{{\rm r}}, \bfG^{{\rm c}}}$ on $SL_n$ such that
{\rm (i)} the number of frozen variables is $k_{\bfG^{{\rm r}}}+k_{\bfG^{{\rm c}}}$, and the corresponding exchange matrix has full rank;
{\rm (ii)} ${\mathcal C}_{\bfG^{{\rm r}}, \bfG^{{\rm c}}}$ is regular, and the corresponding upper cluster algebra $\overline{\A}_{\mathbb C}({\mathcal C}_{\bfG^{{\rm r}}, \bfG^{{\rm c}}})$ is naturally isomorphic to ${\mathcal O}(SL_n)$;
{\rm (iii)} the global toric action of $({\mathbb C}^*)^{k_\bfG^{{\rm r}}+k_\bfG^{{\rm c}}}$ on ${\mathcal C}_{\bfG^{{\rm r}}, \bfG^{{\rm c}}}$ is generated by the action of $\mathcal H_{\bfG^{{\rm r}}}\times \mathcal H_{\bfG^{{\rm c}}}$ on $SL_n$ given by $(H_1, H_2)(X) = H_1 X H_2$.
{\rm (iv)} for any pair of solutions of CYBE that belong to the Belavin--Drinfeld classes specified by $\bfG^{{\rm r}}$ and $\bfG^{{\rm c}}$, the corresponding bracket~\eqref{sklyabragen} is compatible with ${\mathcal C}_{\bfG^{{\rm r}}, \bfG^{{\rm c}}}$;
{\rm (v)} a Poisson bracket on $SL_n$ is compatible with ${\mathcal C}_{\bfG^{{\rm r}}, \bfG^{{\rm c}}}$ only if it is a scalar multiple of the bracket~\eqref{sklyabragen} associated with a pair of solutions of CYBE that belong to the Belavin--Drinfeld classes specified by $\bfG^{{\rm r}}$ and $\bfG^{{\rm c}}$. \end{theorem}
Following the approach suggested in \cite{GSVMem}, we will construct a cluster structure on the space $\operatorname{Mat}_n$ of $n\times n$ matrices and derive the required properties of ${\mathcal C}_{\bfG^{{\rm r}}, \bfG^{{\rm c}}}$ from similar features of the latter cluster structure. Note that in the case of $GL_n$ we also obtain a regular cluster structure with the same properties; however, in this case the ring of regular functions on $GL_n$ is isomorphic to the localization of the upper cluster algebra with respect to $\det X$, which is equivalent to replacing the ground ring by the corresponding localization of the polynomial ring in frozen variables. In what follows we use the same notation ${\mathcal C}_{\bfG^{{\rm r}}, \bfG^{{\rm c}}}$ for all three cluster structures and indicate explicitly which one is meant when needed.
\subsection{The basis}\label{thebasis}
Consider connected components of $G_\mathbf \Gamma$ for an aperiodic $\mathbf \Gamma$. The choice of the endpoint of a component induces directions of its edges: the first edge is directed from the endpoint, the second one from the head of the first one, and so on. Note that for a path with a loop, each edge except for the loop gets two opposite directions. Consequently, the choice of an endpoint of a component defines a matrix built of blocks carved out from two $n\times n$ matrices of indeterminates $X=(x_{ij})$ and $Y=(y_{ij})$. Each block is defined by a horizontal directed edge, that is, an edge whose head and tail belong to the same part of the graph. The block corresponding to a horizontal edge $i\to (n-i)$ in the upper part, called an {\em $X$-block\/}, is the submatrix $X_{I}^{J}$ with $I=[\alpha,n]$ and $J=[1,\beta]$, where $\alpha=(n-i+1)_-+1$ is the leftmost point of the $X$-run containing $n-i+1$, and $\beta=i_+$ is the rightmost point of the $X$-run containing $i$. The entry $(n-i+1,1)$ is called the {\em exit point\/} of the $X$-block. Similarly, the block corresponding to a horizontal edge $i\to (n-i)$ in the lower part, called a {\em $Y$-block\/}, is the submatrix $Y_{\bar I}^{\bar J}$ with $\bar I=[1,\bar\alpha]$ and $\bar J=[\bar\beta,n]$, where $\bar\alpha=i_+$ is the rightmost point of the $Y$-run containing $i$ and $\bar\beta=(n-i+1)_-+1$ is the leftmost point of the $Y$-run containing $n-i+1$. The entry $(1,n-i+1)$ is called the {\em exit point\/} of the $Y$-block. In the example shown in Fig.~\ref{fig:BDgraph} on the left, the edge $5\to2$ in the upper part defines the $X$-block $X_{[1,7]}^{[1,5]}$ with the exit point $(3,1)$, the edge $4\to3$ in the lower part defines the $Y$-block $Y_{[1,5]}^{[3,7]}$ with the exit point $(1,4)$, and the edge $1\to6$ in the upper part defines the $X$-block $X_{[7,7]}^{[1,3]}$ with the exit point $(7,1)$, see the left part of Fig.~\ref{fig:blockmatrix} where the exit points of the blocks are circled.
\begin{figure}
\caption{Blocks and their gluing}
\label{fig:blockmatrix}
\end{figure}
The number of directed edges is odd and the blocks of different types alternate; therefore, if this number equals $4b-1$, then there are $b$ blocks of each type. If there are $4b-3$ directed edges, there are $b$ blocks of one type and $b-1$ blocks of the other type. By adding at most two dummy blocks with empty sets of rows or columns at the beginning and at the end of the sequence, we may assume that the number of blocks of each type is equal, and that the first block is of $X$-type.
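To make the count explicit (a quick verification, under the assumption implicit in the construction that a maximal alternating path begins and ends with a horizontal edge): horizontal and inclined edges alternate, so among $4b-1$ directed edges there are
\[
\frac{(4b-1)+1}{2}=2b
\]
horizontal ones; these alternate between the upper and the lower part of the graph, which yields $b$ blocks of each type. Similarly, $4b-3$ directed edges produce $2b-1$ horizontal ones, that is, $b$ blocks of one type and $b-1$ blocks of the other.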
The blocks are glued together with the help of inclined edges whose head and tail belong to different parts of the graph. An inclined edge $i\to j$ directed downwards stipulates placing the entry $(j,n)$ of the $Y$-block defined by $j\to (n-j)$ immediately to the left of the entry $(i,1)$ of the $X$-block defined by $(n-i)\to i$. In other words, the two blocks are glued in such a way that $\Delta(\alpha)$ and $\bar\Delta(\bar\alpha)=\gamma(\Delta(\alpha))$ coincide. Similarly, an inclined edge $i\to j$ directed upwards stipulates placing the entry $(n,j)$ of the $X$-block defined by $j\to(n-j)$ immediately above the entry $(1,i)$ of the $Y$-block defined by $(n-i)\to i$. In other words, the two blocks are glued in such a way that $\bar\Delta(\bar\beta)$ and $\Delta(\beta)=\gamma^*(\bar\Delta(\bar\beta))$ coincide. Clearly, the exit points of all blocks lie on the main diagonal of the resulting matrix. For example, the directed path $5\to2\to4\to3\to1\to6$ in the BD graph shown in Fig.~\ref{fig:BDgraph} on the left defines the gluing shown in Fig.~\ref{fig:blockmatrix} on the right. The runs along which the blocks are glued are shown in bold. The same path traversed in the opposite direction defines a matrix glued from the blocks $X_{[1,7]}^{[1,6]}$, $Y_{[1,5]}^{[3,7]}$ and $X_{[6,7]}^{[1,3]}$.
Given an aperiodic pair $(\bfG^{{\rm r}}, \bfG^{{\rm c}})$ and the decomposition of $G_{\bfG^{{\rm r}}, \bfG^{{\rm c}}}$ into maximal alternating paths, the blocks are defined in a similar way. To each edge $i\to (n-i)$ in the upper part of $G_{\bfG^{{\rm r}}, \bfG^{{\rm c}}}$, assign the block $X_{I}^{J}$ with $I=[\alpha,n]$ and $J=[1,\beta]$, where $\alpha=(n-i+1)_-(\bfG^{{\rm r}})+1$ and $\beta=i_+(\bfG^{{\rm c}})$ are defined by $X$-runs exactly as before except with respect to different BD triples $\bfG^{{\rm r}}$ and $\bfG^{{\rm c}}$. Similarly, the block corresponding to a horizontal edge $i\to (n-i)$ in the lower part is the submatrix $Y_{\bar I}^{\bar J}$ with $\bar I=[1,\bar\alpha]$ and $\bar J=[\bar\beta,n]$, where $\bar\alpha=i_+(\bfG^{{\rm r}})$ and $\bar\beta=(n-i+1)_-(\bfG^{{\rm c}})+1$ are defined by $Y$-runs. These blocks are glued together in the same fashion as before, except that gluing of a $Y$-block to an $X$-block on the left (respectively, at the bottom) is governed by the row triple $\bfG^{{\rm r}}$ (respectively, the column triple $\bfG^{{\rm c}}$). In what follows, we will call $X-$ and $Y-$runs corresponding to $\bfG^{{\rm r}}$ (respectively, to $\bfG^{{\rm c}}$) {\em row\/} (respectively, {\em column\/}) runs.
Let ${\mathcal L}={\mathcal L}(X,Y)$ denote the matrix glued from $X$- and $Y$-blocks as explained above. It follows immediately from the construction that if ${\mathcal L}$ is defined by an alternating path $i_1\to i_2\to\dots\to i_{2k}$ then it is a square $N({\mathcal L})\times N({\mathcal L})$ matrix with \begin{equation*} N({\mathcal L})=\sum_{j=1}^k i_{2j-1}. \end{equation*} The matrices ${\mathcal L}$ defined by all maximal alternating paths in $G_{\bfG^{{\rm r}}, \bfG^{{\rm c}}}$ form a collection denoted ${\mathbf L}={\mathbf L}_{\bfG^{{\rm r}}, \bfG^{{\rm c}}}$ (or ${\mathbf L}_\mathbf \Gamma$ if $\bfG^{{\rm r}}=\bfG^{{\rm c}}=\mathbf \Gamma$). Thus,
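For example, for the alternating path $5\to2\to4\to3\to1\to6$ considered above we have $k=3$ and $(i_1,i_3,i_5)=(5,4,1)$, so that
\[
N({\mathcal L})=i_1+i_3+i_5=5+4+1=10.
\]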
(i) each ${\mathcal L}\in{\mathbf L}$ is a square $N({\mathcal L})\times N({\mathcal L})$ matrix,
(ii) for any $1\leq i< j \leq n$, there is a unique pair $({\mathcal L} \in {\mathbf L}, s\in [1,N({\mathcal L})])$ such that ${\mathcal L}_{ss}=y_{i j}$, and
(iii) for any $1\leq j< i \leq n$, there is a unique pair $({\mathcal L} \in {\mathbf L}, s\in [1,N({\mathcal L})])$ such that ${\mathcal L}_{ss}=x_{ij}$.
We thus have a bijection $\mathcal J=\mathcal J_{\bfG^{{\rm r}}, \bfG^{{\rm c}}}$ between $[1,n]\times [1,n]\setminus\cup_{i=1}^n(i,i)$ and the set of pairs $\left \{ ({\mathcal L}, s) : {\mathcal L} \in {\mathbf L}, s\in [1, N({\mathcal L})] \right \}$ that takes a pair $(i,j)$, $i\ne j$, to $({\mathcal L}(i,j), s(i,j))$. We then define \begin{equation} \label{f_ij_gen} {\tt f}_{ij}(X,Y)= \det {\mathcal L}(i,j)_{[s(i,j), N({\mathcal L}(i,j))]}^{[s(i,j), N({\mathcal L}(i,j))]}, \quad i\ne j. \end{equation} The block of ${\mathcal L}(i,j)$ that contains the entry $(s(i,j),s(i,j))$ is called the {\it leading block\/} of ${\tt f}_{ij}$.
Additionally, we define \begin{equation} \label{twof_ii} {\tt f}_{ii}^<(X,Y)=\det X_{[i,n]}^{[i,n]}, \qquad {\tt f}_{ii}^>(X,Y)=\det Y_{[i,n]}^{[i,n]}. \end{equation} The leading block of ${\tt f}_{ii}^<$ is $X$, and the leading block of ${\tt f}_{ii}^>$ is $Y$. Note that \eqref{twof_ii} means that $s$ is extended to the diagonal via $s(i,i)=i$, while ${\mathcal L}(i,i)$ is not defined uniquely: it might denote either $X$ or $Y$.
Finally, we put $f_{ij}(X) ={\tt f}_{ij}(X,X)$ for $i\ne j$ and $f_{ii}(X) ={\tt f}_{ii}^<(X,X)={\tt f}_{ii}^>(X,X)$, and define $$ F=F_{\bfG^{{\rm r}}, \bfG^{{\rm c}}}=\{ f_{ij}(X) : i,j\in[1,n]\}. $$
\begin{theorem}\label{logcanbasis} Let $(\bfG^{{\rm r}},\bfG^{{\rm c}})$ be an oriented aperiodic pair of BD triples. Then the family $F_{\bfG^{{\rm r}},\bfG^{{\rm c}}}$ forms a log-canonical coordinate system with respect to the Poisson bracket \eqref{sklyabragen} on $\operatorname{Mat}_n$ with $r=r^{{\rm r}}$ and $r'=r^{{\rm c}}$ given by \eqref{r-matrix}. \end{theorem}
\begin{remark} A log-canonical coordinate system on $SL_n$ with respect to the same bracket is formed by $F_{\bfG^{{\rm r}},\bfG^{{\rm c}}}\setminus\{\det X\}$. \end{remark}
Although the construction of the family of functions $F_{\bfG^{{\rm r}},\bfG^{{\rm c}}}$ is admittedly {\em ad hoc}, the intuition behind it is given by the collection ${\mathbf L}={\mathbf L}_{\bfG^{{\rm r}}, \bfG^{{\rm c}}}$ that does have an intrinsic meaning. Recall the observation we previously utilized in \cite{GSVMem}: a function serving as a frozen variable in a cluster structure on a Poisson variety has the property that it is log-canonical with every cluster variable in every cluster. The vanishing locus of such a function foliates into a union of non-generic symplectic leaves. On the other hand, in many examples of Poisson varieties supporting a cluster structure, the union of generic symplectic leaves forms an open orbit of a certain natural group action. Thus, it makes sense to select semi-invariants of this group action as frozen variables. Furthermore, a global toric action on the cluster structure arising this way can be described in two equivalent ways: it is generated by an action of a commutative subgroup of the group acting on the underlying Poisson variety or, alternatively, by Hamiltonian flows generated by the frozen variables.
In our current situation, the group action is determined by the BD data $\bfG^{{\rm r}}$, $\bfG^{{\rm c}}$. Let ${\mathfrak d}_-^{\rm r}$ and ${\mathfrak d}_-^{\rm c}$ be subalgebras defined in \eqref{ddeco} that correspond to $\bfG^{{\rm r}}$ and $\bfG^{{\rm c}}$, respectively, and let ${\mathcal D}_-^{\rm r}=\exp ({\mathfrak d}_-^{\rm r})$ and ${\mathcal D}_-^{\rm c}=\exp ({\mathfrak d}_-^{\rm c})$ be the corresponding subgroups of the double. Consider the action of ${\mathcal D}_-^{\rm r}\times{\mathcal D}_-^{\rm c}$ on the double $D(GL_n)$ with ${\mathcal D}_-^{\rm r}$ acting on the left and ${\mathcal D}_-^{\rm c}$ acting on the right.
\begin{proposition}\label{frozen} Let ${\mathcal L}(X,Y) \in {\mathbf L}_{\bfG^{{\rm r}}, \bfG^{{\rm c}}}$. Then
{\rm (i)} $\det{\mathcal L}(X,Y)$ is a semi-invariant of the action of ${\mathcal D}_-^{\rm r}\times{\mathcal D}_-^{\rm c}$ described above;
{\rm (ii)} $\det{\mathcal L}(X,X)$ is log-canonical with all matrix entries $x_{ij}$ with respect to the Poisson bracket \eqref{sklyabragen}. \end{proposition}
Consequently, we select the subcollection $\{\det{\mathcal L}(X,X): {\mathcal L}\in {\mathbf L}_{\bfG^{{\rm r}}, \bfG^{{\rm c}}}\}\cup\{\det X\}\subset F_{\bfG^{{\rm r}}, \bfG^{{\rm c}}}$ as the set of frozen variables.
\subsection{The quiver}\label{thequiver} Let us choose the family $F_{\bfG^{{\rm r}},\bfG^{{\rm c}}}$ as the initial cluster for our cluster structure. We now define the quiver $Q_{\bfG^{{\rm r}},\bfG^{{\rm c}}}$ that corresponds to this cluster.
The quiver has $n^2$ vertices labeled $(i,j)$. The function attached to a vertex $(i,j)$ is $f_{ij}$. Any vertex except for $(n,n)$ is frozen if and only if its degree is at most three. The vertex $(n,n)$ is never frozen.
We will show below that frozen vertices correspond bijectively to the determinants of the matrices ${\mathcal L}\in{\mathbf L}\cup \{X\}$, as suggested by Proposition \ref{frozen}.
\begin{figure}
\caption{The neighborhood of a vertex $(i,j)$, $1<i,j<n$}
\label{fig:ijnei}
\end{figure}
A vertex $(i,j)$ for $1<i<n$, $1<j<n$ has degree six, and its neighborhood looks as shown in Fig.~\ref{fig:ijnei}. Here and in what follows, mutable vertices are depicted by circles, frozen vertices by squares, and vertices of unspecified nature by ellipses.
A vertex $(1,j)$ for $1<j<n$ can have degree two, three, five, or six. If $\bfG^{{\rm c}}$ stipulates both inclined edges $(j-1)\to (k-1)$ and $j\to k$ in the graph $G_{\bfG^{{\rm r}},\bfG^{{\rm c}}}$ for some $k$, that is, if ${\gamma^\ec}(k-1)=j-1$ and ${\gamma^\ec}(k)=j$, then the degree of $(1,j)$ in $Q_{\bfG^{{\rm r}},\bfG^{{\rm c}}}$ equals six, and its neighborhood looks as shown in Fig.~\ref{fig:1jnei}(a).
If $\bfG^{{\rm c}}$ stipulates only the edge $(j-1)\to (k-1)$ as above but not the other one, that is, if ${\gamma^\ec}(k-1)=j-1$ and $j\notin \Gamma_2^{\rm c}$, the degree of $(1,j)$ in $Q_{\bfG^{{\rm r}},\bfG^{{\rm c}}}$ equals five, and its neighborhood looks as shown in Fig.~\ref{fig:1jnei}(b).
If $\bfG^{{\rm c}}$ stipulates only the edge $j\to k$ as above but not the other one, that is, if $j-1\notin \Gamma_2^{\rm c}$ and ${\gamma^\ec}(k)=j$, the degree of $(1,j)$ in $Q_{\bfG^{{\rm r}},\bfG^{{\rm c}}}$ equals three, and its neighborhood looks as shown in Fig.~\ref{fig:1jnei}(c).
Finally, if $\bfG^{{\rm c}}$ does not stipulate any one of the above two inclined edges in $G_{\bfG^{{\rm r}},\bfG^{{\rm c}}}$, that is, if $j-1,j\notin \Gamma_2^{\rm c}$, the degree of $(1,j)$ in $Q_{\bfG^{{\rm r}},\bfG^{{\rm c}}}$ equals two, and its neighborhood looks as shown in Fig.~\ref{fig:1jnei}(d).
\begin{figure}
\caption{Possible neighborhoods of a vertex $(1,j)$, $1<j<n$}
\label{fig:1jnei}
\end{figure}
Similarly, a vertex $(i,1)$ for $1<i<n$ can have degree two, three, five, or six. If $\bfG^{{\rm r}}$ stipulates both inclined edges $(i-1)\to (k-1)$ and $i\to k$ in the graph $G_{\bfG^{{\rm r}},\bfG^{{\rm c}}}$ for some $k$, that is, if ${\gamma^\er}(i-1)=k-1$ and ${\gamma^\er}(i)=k$, then the degree of $(i,1)$ in $Q_{\bfG^{{\rm r}},\bfG^{{\rm c}}}$ equals six, and its neighborhood looks as shown in Fig.~\ref{fig:i1nei}(a).
If $\bfG^{{\rm r}}$ stipulates only the edge $(i-1)\to (k-1)$ as above but not the other one, that is, if ${\gamma^\er}(i-1)=k-1$ and $i\notin \Gamma_1^{\rm r}$, the degree of $(i,1)$ in $Q_{\bfG^{{\rm r}},\bfG^{{\rm c}}}$ equals five, and its neighborhood looks as shown in Fig.~\ref{fig:i1nei}(b).
If $\bfG^{{\rm r}}$ stipulates only the edge $i\to k$ as above but not the other one, that is, if $i-1\notin \Gamma_1^{\rm r}$ and ${\gamma^\er}(i)=k$, the degree of $(i,1)$ in $Q_{\bfG^{{\rm r}},\bfG^{{\rm c}}}$ equals three, and its neighborhood looks as shown in Fig.~\ref{fig:i1nei}(c).
Finally, if $\bfG^{{\rm r}}$ does not stipulate any one of the above two inclined edges in $G_{\bfG^{{\rm r}},\bfG^{{\rm c}}}$, that is, if $i-1,i\notin \Gamma_1^{\rm r}$, the degree of $(i,1)$ in $Q_{\bfG^{{\rm r}},\bfG^{{\rm c}}}$ equals two, and its neighborhood looks as shown in Fig.~\ref{fig:i1nei}(d).
\begin{figure}
\caption{Possible neighborhoods of a vertex $(i,1)$, $1<i<n$}
\label{fig:i1nei}
\end{figure}
A vertex $(n,j)$ for $1<j<n$ can have degree four, five, or six. If $\bfG^{{\rm c}}$ stipulates both inclined edges $(k-1)\to (j-1)$ and $k\to j$ in the graph $G_{\bfG^{{\rm r}},\bfG^{{\rm c}}}$ for some $k$, that is, if ${\gamma^\ec}(j-1)=k-1$ and ${\gamma^\ec}(j)=k$, then the degree of $(n,j)$ in $Q_{\bfG^{{\rm r}},\bfG^{{\rm c}}}$ equals six, and its neighborhood looks as shown in Fig.~\ref{fig:njnei}(a).
If $\bfG^{{\rm c}}$ stipulates only the edge $(k-1)\to (j-1)$ as above but not the other one, that is, if ${\gamma^\ec}(j-1)=k-1$ and $j\notin \Gamma_1^{\rm c}$, the degree of $(n,j)$ in $Q_{\bfG^{{\rm r}},\bfG^{{\rm c}}}$ equals five, and its neighborhood looks as shown in Fig.~\ref{fig:njnei}(b).
If $\bfG^{{\rm c}}$ stipulates only the edge $k\to j$ as above but not the other one, that is, if $j-1\notin \Gamma_1^{\rm c}$ and ${\gamma^\ec}(j)=k$, the degree of $(n,j)$ in $Q_{\bfG^{{\rm r}},\bfG^{{\rm c}}}$ equals five as well, and its neighborhood looks as shown in Fig.~\ref{fig:njnei}(c).
Finally, if $\bfG^{{\rm c}}$ does not stipulate any one of the above two inclined edges in $G_{\bfG^{{\rm r}},\bfG^{{\rm c}}}$, that is, if $j-1,j\notin \Gamma_1^{\rm c}$, the degree of $(n,j)$ in $Q_{\bfG^{{\rm r}},\bfG^{{\rm c}}}$ equals four, and its neighborhood looks as shown in Fig.~\ref{fig:njnei}(d).
\begin{figure}
\caption{Possible neighborhoods of a vertex $(n,j)$, $1<j<n$}
\label{fig:njnei}
\end{figure}
Similarly, a vertex $(i,n)$ for $1<i<n$ can have degree four, five, or six. If $\bfG^{{\rm r}}$ stipulates both inclined edges $(k-1)\to (i-1)$ and $k\to i$ in the graph $G_{\bfG^{{\rm r}},\bfG^{{\rm c}}}$ for some $k$, that is, if ${\gamma^\er}(k-1)=i-1$ and ${\gamma^\er}(k)=i$, then the degree of $(i,n)$ in $Q_{\bfG^{{\rm r}},\bfG^{{\rm c}}}$ equals six, and its neighborhood looks as shown in Fig.~\ref{fig:innei}(a).
If $\bfG^{{\rm r}}$ stipulates only the edge $(k-1)\to (i-1)$ as above but not the other one, that is, if ${\gamma^\er}(k-1)=i-1$ and $i\notin \Gamma_2^{\rm r}$, the degree of $(i,n)$ in $Q_{\bfG^{{\rm r}},\bfG^{{\rm c}}}$ equals five, and its neighborhood looks as shown in Fig.~\ref{fig:innei}(b).
If $\bfG^{{\rm r}}$ stipulates only the edge $k\to i$ as above but not the other one, that is, if $i-1\notin \Gamma_2^{\rm r}$ and ${\gamma^\er}(k)=i$, the degree of $(i,n)$ in $Q_{\bfG^{{\rm r}},\bfG^{{\rm c}}}$ equals five as well, and its neighborhood looks as shown in Fig.~\ref{fig:innei}(c).
Finally, if $\bfG^{{\rm r}}$ does not stipulate any one of the above two inclined edges in $G_{\bfG^{{\rm r}},\bfG^{{\rm c}}}$, that is, if $i-1,i\notin \Gamma_2^{\rm r}$, the degree of $(i,n)$ in $Q_{\bfG^{{\rm r}},\bfG^{{\rm c}}}$ equals four, and its neighborhood looks as shown in Fig.~\ref{fig:innei}(d).
\begin{figure}
\caption{Possible neighborhoods of a vertex $(i,n)$, $1<i<n$}
\label{fig:innei}
\end{figure}
The vertex $(1,n)$ can have degree one, two, four, or five. If $\bfG^{{\rm c}}$ stipulates an inclined edge $(n-1)\to j$ for some $j$, and $\bfG^{{\rm r}}$ stipulates an inclined edge $i\to 1$ for some $i$, that is, if ${\gamma^\ec}(j)=n-1$ and ${\gamma^\er}(i)=1$, then the degree of $(1,n)$ in $Q_{\bfG^{{\rm r}},\bfG^{{\rm c}}}$ equals five, and its neighborhood looks as shown in Fig.~\ref{fig:1nnei}(a).
If only the first of the above two edges is stipulated, that is, if ${\gamma^\ec}(j)=n-1$ and $1\notin \Gamma_2^{\rm r}$, the degree of $(1,n)$ in $Q_{\bfG^{{\rm r}},\bfG^{{\rm c}}}$ equals four, and its neighborhood looks as shown in Fig.~\ref{fig:1nnei}(b).
If only the second of the above two edges is stipulated, that is, if ${\gamma^\er}(i)=1$ and $n-1\notin \Gamma_2^{\rm c}$, the degree of $(1,n)$ in $Q_{\bfG^{{\rm r}},\bfG^{{\rm c}}}$ equals two, and its neighborhood looks as shown in Fig.~\ref{fig:1nnei}(c).
Finally, if none of the above two edges is stipulated, that is, if $1\notin \Gamma_2^{\rm r}$ and $n-1\notin \Gamma_2^{\rm c}$, the degree of $(1,n)$ in $Q_{\bfG^{{\rm r}},\bfG^{{\rm c}}}$ equals one, and its neighborhood looks as shown in Fig.~\ref{fig:1nnei}(d).
\begin{figure}
\caption{Possible neighborhoods of the vertex $(1,n)$}
\label{fig:1nnei}
\end{figure}
Similarly, the vertex $(n,1)$ can have degree one, two, four, or five. If $\bfG^{{\rm r}}$ stipulates an inclined edge $(n-1)\to j$ for some $j$, and $\bfG^{{\rm c}}$ stipulates an inclined edge $i\to 1$ for some $i$, that is, if ${\gamma^\er}(n-1)=j$ and ${\gamma^\ec}(1)=i$, then the degree of $(n,1)$ in $Q_{\bfG^{{\rm r}},\bfG^{{\rm c}}}$ equals five, and its neighborhood looks as shown in Fig.~\ref{fig:n1nei}(a).
If only the first of the above two edges is stipulated, that is, if ${\gamma^\er}(n-1)=j$ and $1\notin \Gamma_1^{\rm c}$, the degree of $(n,1)$ in $Q_{\bfG^{{\rm r}},\bfG^{{\rm c}}}$ equals four, and its neighborhood looks as shown in Fig.~\ref{fig:n1nei}(b).
If only the second of the above two edges is stipulated, that is, if ${\gamma^\ec}(1)=i$ and $n-1\notin \Gamma_1^{\rm r}$, the degree of $(n,1)$ in $Q_{\bfG^{{\rm r}},\bfG^{{\rm c}}}$ equals two, and its neighborhood looks as shown in Fig.~\ref{fig:n1nei}(c).
Finally, if none of the above two edges is stipulated, that is, if $1\notin \Gamma_1^{\rm c}$ and $n-1\notin \Gamma_1^{\rm r}$, the degree of $(n,1)$ in $Q_{\bfG^{{\rm r}},\bfG^{{\rm c}}}$ equals one, and its neighborhood looks as shown in Fig.~\ref{fig:n1nei}(d).
\begin{figure}
\caption{Possible neighborhoods of the vertex $(n,1)$}
\label{fig:n1nei}
\end{figure}
The vertex $(n,n)$ can have degree three, four, or five. If $\bfG^{{\rm r}}$ stipulates an inclined edge $i\to(n-1)$ for some $i$, and $\bfG^{{\rm c}}$ stipulates an inclined edge $j\to(n-1)$ for some $j$, that is, if ${\gamma^\er}(i)=n-1$ and ${\gamma^\ec}(n-1)=j$, then the degree of $(n,n)$ in $Q_{\bfG^{{\rm r}},\bfG^{{\rm c}}}$ equals five, and its neighborhood looks as shown in Fig.~\ref{fig:nnnei}(a).
If only one of the above two edges is stipulated, that is, if either ${\gamma^\er}(i)=n-1$ and $n-1\notin \Gamma_1^{\rm c}$, or ${\gamma^\ec}(n-1)=j$ and $n-1\notin \Gamma_2^{\rm r}$, the degree of $(n,n)$ in $Q_{\bfG^{{\rm r}},\bfG^{{\rm c}}}$ equals four, and its neighborhood looks as shown in Fig.~\ref{fig:nnnei}(b,c).
Finally, if none of the above two edges is stipulated, that is, if $n-1\notin \Gamma_1^{\rm c}$ and $n-1\notin \Gamma_2^{\rm r}$, the degree of $(n,n)$ in $Q_{\bfG^{{\rm r}},\bfG^{{\rm c}}}$ equals three, and its neighborhood looks as shown in Fig.~\ref{fig:nnnei}(d).
\begin{figure}
\caption{Possible neighborhoods of the vertex $(n,n)$}
\label{fig:nnnei}
\end{figure}
Finally, the vertex $(1,1)$ can have degree one, two, or three. If $\bfG^{{\rm r}}$ stipulates an inclined edge $1\to i$ for some $i$, and $\bfG^{{\rm c}}$ stipulates an inclined edge $1\to j$ for some $j$, that is, if ${\gamma^\er}(1)=i$ and ${\gamma^\ec}(j)=1$, then the degree of $(1,1)$ in $Q_{\bfG^{{\rm r}},\bfG^{{\rm c}}}$ equals three, and its neighborhood looks as shown in Fig.~\ref{fig:11nei}(a).
If only one of the above two edges is stipulated, that is, if either ${\gamma^\er}(1)=i$ and $1\notin \Gamma_2^{\rm c}$, or ${\gamma^\ec}(j)=1$ and $1\notin \Gamma_1^{\rm r}$, the degree of $(1,1)$ in $Q_{\bfG^{{\rm r}},\bfG^{{\rm c}}}$ equals two, and its neighborhood looks as shown in Fig.~\ref{fig:11nei}(b,c).
If none of the above two edges is stipulated, that is, if $1\notin \Gamma_2^{\rm c}$ and $1\notin \Gamma_1^{\rm r}$, the degree of $(1,1)$ in $Q_{\bfG^{{\rm r}},\bfG^{{\rm c}}}$ equals one, and its neighborhood looks as shown in Fig.~\ref{fig:11nei}(d).
\begin{figure}
\caption{Possible neighborhoods of the vertex $(1,1)$}
\label{fig:11nei}
\end{figure}
We can now prove the characterization of frozen vertices mentioned at the beginning of the section.
\begin{proposition}\label{frozenvert} A vertex $(i,j)$ is frozen in $Q_{\bfG^{{\rm r}},\bfG^{{\rm c}}}$ if and only if either $i=j=1$, in which case $f_{11}=\det X$, or $f_{ij}$ is the restriction to the diagonal $X=Y$ of $\det{\mathcal L}$ for some ${\mathcal L}\in{\mathbf L}_{\bfG^{{\rm r}},\bfG^{{\rm c}}}$. \end{proposition}
\begin{proof} It follows from the description of the quiver that there are two types of frozen vertices distinct from $(1,1)$: vertices $(1,j)$ such that $j-1\notin\Gamma^{\rm c}_2$, see Fig.~\ref{fig:1jnei}(c),(d) and Fig.~\ref{fig:1nnei}(c),(d), and vertices $(i,1)$ such that $i-1\notin\Gamma^{\rm r}_1$, see Fig.~\ref{fig:i1nei}(c),(d) and Fig.~\ref{fig:n1nei}(c),(d).
In the first case, the horizontal edge $(n-j+2)\to(j-1)$ in the lower part of $G_{\bfG^{{\rm r}},\bfG^{{\rm c}}}$ is the last edge of a maximal alternating path. Therefore, the $Y$-block defined by this edge is the uppermost block of the matrix ${\mathcal L}$ corresponding to this path. Consequently, $\bar\beta=(j-1)_-({\Gamma^{\rm c}})+1=j$, and hence $(1,j)$ is indeed the upper left entry of ${\mathcal L}$.
The second case is handled in a similar manner. \end{proof}
The quiver $Q_{\bfG^{{\rm r}},\bfG^{{\rm c}}}$ shown in Fig.~\ref {fig:quiver} corresponds to the BD data $\bfG^{{\rm r}}=\left(\{1,2\}, \right.$ $\left. \{2,3\}, 1\mapsto 2, 2\mapsto 3\right)$ and $\bfG^{{\rm c}}=\left(\{1,2\}, \{3,4\}, 1\mapsto3, 2\mapsto4\right)$ in $GL_5$. The corresponding graph $G_{\bfG^{{\rm r}}, \bfG^{{\rm c}}}$ is shown on the left in Fig.~\ref{fig:altpaths}. For example, consider the vertex $(1,4)$ and note that $G_{\bfG^{{\rm r}}, \bfG^{{\rm c}}}$ contains both edges $\bar4\to2$ and $\bar3\to1$. Consequently, the first of the above conditions for the vertices of type $(1,j)$ holds with $k=2$, and hence $(1,4)$ has outgoing edges $(1,4)\to(5,2)$, $(1,4)\to(2,5)$, and $(1,4)\to(1,3)$, and ingoing edges $(5,1)\to(1,4)$, $(1,5)\to(1,4)$, and $(2,4)\to(1,4)$. Alternatively, consider the vertex $(4,5)$ and note that $G_{\bfG^{{\rm r}}, \bfG^{{\rm c}}}$ contains the edge $2\to\bar3$, while $4\notin \Gamma_2^{\rm r}$. Consequently, the second of the above conditions for the vertices of type $(j,n)$ holds with $k=3$, and hence $(4,5)$ has outgoing edges $(4,5)\to(4,4)$ and $(4,5)\to(3,5)$ and ingoing edges $(3,4)\to(4,5)$, $(3,1)\to (4,5)$, and $(5,5)\to(4,5)$.
\begin{figure}\label{fig:quiver}
\end{figure}
\begin{theorem}\label{quiver} Let $(\bfG^{{\rm r}},\bfG^{{\rm c}})$ be an oriented aperiodic pair of BD triples. Then the quiver $Q_{\bfG^{{\rm r}},\bfG^{{\rm c}}}$ defines a cluster structure compatible with the Poisson bracket \eqref{sklyabragen} on $\operatorname{Mat}_n$ with $r=r^{{\rm r}}$ and $r'=r^{{\rm c}}$ given by \eqref{r-matrix}. \end{theorem}
\begin{remark} The quiver that defines a cluster structure compatible with the same bracket on $SL_n$ is obtained from $Q_{\bfG^{{\rm r}},\bfG^{{\rm c}}}$ by deleting the vertex $(1,1)$. \end{remark}
\subsection{Outline of the proof}\label{outline} The proof of Theorem \ref{logcanbasis} is based on lengthy and rather involved calculations. Following the strategy introduced in \cite{GSVMem}, we consider the bracket \eqref{sklyadoublegen} on the Drinfeld double of $SL_n$ and lift it to a bracket on $\operatorname{Mat}_n\times\operatorname{Mat}_n$. The family $F_{\bfG^{{\rm r}},\bfG^{{\rm c}}}$ is obtained as the restriction onto the diagonal $X=Y$ of the family ${\tt F}_{\bfG^{{\rm r}},\bfG^{{\rm c}}}$ of functions defined on $\operatorname{Mat}_n\times\operatorname{Mat}_n$ via $$ {\tt F}={\tt F}_{\bfG^{{\rm r}}, \bfG^{{\rm c}}}=\{{\tt f}_{ij}(X,Y) : i,j\in[1,n], i\ne j\}
\cup\{{\tt f}_{ii}^<(X,Y), {\tt f}_{ii}^>(X,Y):i\in[1,n]\}, $$ see \eqref{f_ij_gen}, \eqref{twof_ii}. The bracket of a pair of functions $f,g\in {\tt F}_{\bfG^{{\rm r}},\bfG^{{\rm c}}}$ is decomposed into a large number of contributions that either vanish, or are proportional to the product $fg$. In the process we repeatedly use invariance properties of functions in ${\tt F}_{\bfG^{{\rm r}},\bfG^{{\rm c}}}$ with respect to the right and left action of certain subgroups of the double.
The proof of Theorem \ref{quiver} is based on the standard characterization of Poisson structures compatible with a given cluster structure, see e.g. \cite[Ch.~4]{GSVb}. Note that the number of frozen variables in $Q_{\bfG^{{\rm r}},\bfG^{{\rm c}}}$ equals $1+k_{\bfG^{{\rm r}}}+k_{\bfG^{{\rm c}}}$, and that $\det X$ is frozen. As an immediate consequence we get Theorem \ref{genmainth}(i), which for $\bfG^{{\rm r}}=\bfG^{{\rm c}}$ turns into Theorem \ref{mainth}(i).
The proof of Theorem \ref{genmainth}(iii) is based on the claim that the right-hand sides of all exchange relations in one cluster are semi-invariants of the left-right action of $\mathcal H_{\bfG^{{\rm r}}}\times\mathcal H_{\bfG^{{\rm c}}}$, see Lemma~\ref{rlsemi}. It also involves the regularity check for all clusters adjacent to the initial one, see Theorem~\ref{regneighbors}. Theorem \ref{mainth}(iii) follows when $\bfG^{{\rm r}}=\bfG^{{\rm c}}$. After this is done, Theorem \ref{mainth}(iv) and (v) follow from Theorem \ref{quiver} via \cite[Theorem~4.1]{GSVM}. To get Theorem~\ref{genmainth}(iv) and (v) we need a generalization of the latter result to the case of two different tori, which is straightforward.
The central part of the paper is the proof of Theorem \ref{genmainth}(ii) (Theorem \ref{mainth}(ii) then follows in the case $\bfG^{{\rm r}}=\bfG^{{\rm c}}$). It relies on Proposition~2.1 in \cite{GSVMem}, which is reproduced below for readers' convenience.
\begin{proposition}\label{regfun} Let $V$ be a Zariski open subset in ${\mathbb C}^{n+m}$ and ${\mathcal C}$ be a cluster structure in ${\mathbb C}(V)$ with $n$ cluster and $m$ frozen variables such that
{\rm(i)} there exists a cluster $(f_1,\dots,f_{n+m})$ in ${\mathcal C}$ such that $f_i$ is regular on $V$ for $i\in [1,n+m]$;
{\rm(ii)} any cluster variable $f_k'$ adjacent to $f_k$, $k\in [1,n]$, is regular on $V$;
{\rm(iii)} any frozen variable $f_{n+i}$, $i\in [1,m]$, vanishes at some point of $V$;
{\rm(iv)} each regular function on $V$ belongs to $\overline{\A}_{\mathbb C}({\mathcal C})$.
\noindent Then ${\mathcal C}$ is a regular cluster structure and $\overline{\A}_{\mathbb C}({\mathcal C})$ is naturally isomorphic to ${\mathcal O}(V)$. \end{proposition}
Conditions (i) and (iii) are established via direct observation, and condition (ii) was already discussed above. Therefore, the main task is to check condition (iv). Note that Theorem \ref{genmainth}(i) and Theorem 3.11 in \cite{GSVD} imply that it is enough to check that every matrix entry can be written as a Laurent polynomial in the initial cluster and
in any cluster adjacent to the initial one. In \cite{GSVMem} this goal was achieved by constructing two distinguished sequences of mutations. Here we suggest a new approach: induction on the total size $|\Gamma^{\rm r}_1|+|\Gamma^{\rm c}_1|$. Let $\tilde\mathbf \Gamma$ be the BD triple obtained from $\mathbf \Gamma$ by removing a certain root $\alpha$ from $\Gamma_1$ and the corresponding
root $\gamma(\alpha)$ from $\Gamma_2$. Given an aperiodic pair $(\bfG^{{\rm r}}, \bfG^{{\rm c}})$ with $|\Gamma^{\rm r}_1|>0$, we choose $\alpha$ to be the rightmost root in an arbitrary nontrivial row $X$-run $\Delta^{\rm r}$ and define an aperiodic pair $(\tilde\bfG^{{\rm r}},\bfG^{{\rm c}})$. Since the total size of this pair is smaller, we assume that $\tilde{\mathcal C}={\mathcal C}_{\tilde\bfG^{{\rm r}},\bfG^{{\rm c}}}$ possesses the above mentioned Laurent property. Recall that both ${\mathcal C}$ and $\tilde {\mathcal C}$ are cluster structures on the space of regular functions on $\operatorname{Mat}_n$. To distinguish between them, the matrix entries in the latter are denoted $z_{ij}$; they form an $n\times n$ matrix $Z=(z_{ij})$.
Let $F =\{ f_{ij}(X) {:\ } i,j\in[1,n]\}$ and $\tilde F=\{ \tilde f_{ij}(Z) {:\ } i,j\in[1,n]\}$ be initial clusters for ${\mathcal C}$ and $\tilde {\mathcal C}$, respectively, and $Q$ and $\tilde Q$ be the corresponding quivers. It is easy to see that all maximal alternating paths in $G_{{\Gamma^{\rm r}},{\Gamma^{\rm c}}}$ are preserved in $G_{\tilde\bfG^{{\rm r}},\bfG^{{\rm c}}}$ except for the path that goes through the directed inclined edge $\alpha\to \gamma(\alpha)$. The latter one is split into two: the initial segment up to the vertex $\alpha$ and the closing segment starting with the vertex $\gamma(\alpha)$. Consequently, the only difference between $Q$ and $\tilde Q$ is that the vertex $v=(\alpha+1,1)$ that corresponds to the endpoint of the initial segment is mutable in $Q$ and frozen in $\tilde Q$, and that certain three edges incident to $v$ in $Q$ do not exist in $\tilde Q$.
Let us consider four fields of rational functions in $n^2$ independent variables: ${\mathcal X}={\mathbb C}(x_{11},\dots,x_{nn})$, ${\mathcal Z}={\mathbb C}(z_{11},\dots,z_{nn})$, ${\mathcal F}={\mathbb C}(\varphi_{11},\dots,\varphi_{nn})$, and $\tilde{\mathcal F}={\mathbb C}(\tilde\fy_{11},\dots,\allowbreak\tilde\fy_{nn})$. Polynomial maps $f: {\mathcal F}\to{\mathcal X}$ and $\tilde f: \tilde{\mathcal F}\to{\mathcal Z}$ are given by $\varphi_{ij}\mapsto f_{ij}(X)$ and $\tilde\fy_{ij}\mapsto \tilde f_{ij}(Z)$. By the induction hypothesis, there exists a map $\tilde P: {\mathcal Z}\to\tilde {\mathcal F}$ that takes $z_{ij}$ to a Laurent polynomial in variables $\tilde\fy_{\alpha\beta}$ such that $\tilde f\circ\tilde P={\operatorname {Id}}$. Note that the polynomials $\tilde f_{ij}(Z)$ are algebraically independent, and hence $\tilde f$ is an isomorphism. Consequently, $\tilde P\circ \tilde f={\operatorname {Id}}$ as well. Our first goal is to build a map $P: {\mathcal X}\to{\mathcal F}$ that takes $x_{ij}$ to a Laurent polynomial in variables $\varphi_{\alpha\beta}$ and satisfies condition $f\circ P={\operatorname {Id}}$.
We start from the following result.
\begin{theorem}\label{prototype} There exist a birational map $U: {\mathcal X}\to {\mathcal Z}$ and an invertible polynomial map $T: {\mathcal F} \to \tilde{\mathcal F}$ satisfying the following conditions:
a) $\tilde f\circ T=U\circ f$;
b) the denominator of any $U(x_{ij})$ is a power of $\tilde f_v(Z)$;
c) the inverse of $T$ is a monomial transformation. \end{theorem}
Put $P=T^{-1}\circ\tilde P\circ U$; it is a map ${\mathcal X}\to{\mathcal F}$, and by a) and the induction hypothesis, \[ P\circ f=T^{-1}\circ\tilde P\circ U\circ f=T^{-1}\circ\tilde P\circ\tilde f\circ T=T^{-1}\circ T={\operatorname {Id}}. \] Since the polynomials $f_{ij}(X)$ are algebraically independent, $f$ is an isomorphism for the same reason as $\tilde f$ above, and hence $f\circ P={\operatorname {Id}}$ as well. Let us check that $P$ takes $x_{ij}$ to a Laurent polynomial in variables $\varphi_{\alpha\beta}$. Indeed, by b), $U$ takes $x_{ij}$ into a rational expression whose denominator is a power of $\tilde f_v(Z)$. Consequently, by the induction hypothesis, $\tilde P$ takes the numerator of this expression to a Laurent polynomial in $\tilde\fy_{\alpha\beta}$, and the denominator to a power of $\tilde\fy_v$. As a result, $\tilde P\circ U$ takes $x_{ij}$ to a Laurent polynomial in $\tilde\fy_{\alpha\beta}$. Finally, by c), $T^{-1}$ takes this Laurent polynomial to a Laurent polynomial in $\varphi_{\alpha\beta}$, and hence $P$ as above satisfies the required conditions.
The next goal is to implement a similar construction at all adjacent clusters. Fix an arbitrary mutable vertex $u\ne v$ in $Q$; as explained above, $u$ remains mutable in $\tilde Q$ as well.
Let $\mu_u(F)$ and $\mu_u(\tilde F)$ be the clusters obtained from $F$ and $\tilde F$, respectively, via the mutation in direction $u$, and let $f'_u(X)$ and $\tilde f'_u(Z)$ be cluster variables that replace $f_u(X)$ and $\tilde f_u(Z)$ in $\mu_u(F)$ and $\mu_u(\tilde F)$. Replace variables $\varphi_{u}$ and $\tilde\fy_{u}$ by new variables $\varphi'_{u}$ and $\tilde\fy'_{u}$ and define two additional fields of rational functions in $n^2$ variables: ${\mathcal F}'={\mathbb C}(\varphi_{11},\dots, \varphi'_{u}, \dots, \varphi_{nn})$ and $\tilde{\mathcal F}'={\mathbb C}(\tilde\fy_{11},\dots, \tilde\fy'_{u}, \dots, \tilde\fy_{nn})$. Similarly to the situation discussed above, there are polynomial isomorphisms $f':{\mathcal F}'\to{\mathcal X}$ and $\tilde f':\tilde {\mathcal F}'\to{\mathcal Z}$ and a Laurent map $\tilde P':{\mathcal Z}\to \tilde {\mathcal F}'$ such that $\tilde f'\circ\tilde P'={\operatorname {Id}}$ (the latter exists by the induction hypothesis).
We define a map $T': {\mathcal F}'\to\tilde {\mathcal F}'$ via $T'(\varphi_{ij})=T(\varphi_{ij})$ for $(i,j)\ne u$ and $T'(\varphi'_u)=\tilde\fy'_u\tilde\fy_v^{\lambda_u}$ for some integer $\lambda_u$ and prove that maps $U$ and $T'$ satisfy the analogs of conditions a)--c) above. Consequently, the map $P'=(T')^{-1}\circ\tilde P'\circ U$ takes each $x_{ij}$ to a Laurent polynomial in $\varphi_{11},\dots,\varphi'_u,\dots, \varphi_{nn}$ and satisfies condition $P'\circ f'={\operatorname {Id}}$.
Thus, we proved that every matrix entry can be written as a Laurent polynomial in the initial cluster $F$ of ${\mathcal C}_{\bfG^{{\rm r}},\bfG^{{\rm c}}}$ and in any cluster $\mu_u(F)$ adjacent to it, except for the cluster $\mu_v(F)$. To handle this remaining cluster, we pick a different $\alpha$: the rightmost root in another nontrivial row $X$-run (if there are other nontrivial row $X$-runs),
or the leftmost root of the same row $X$-run (if it differs from the rightmost root), or the rightmost root of an arbitrary nontrivial column $X$-run and an aperiodic pair $(\bfG^{{\rm r}},\tilde\bfG^{{\rm c}})$ (if $|\Gamma^{\rm c}_1|>0$), and proceed in the same way as above. Namely, we prove the existence of the analogs of the maps $U$ and $T$ satisfying conditions a)--c) above with a different distinguished vertex $v$. Consequently, $\mu_v(F)$ is now covered by the above reasoning about adjacent clusters.
Similarly, if the initial pair $(\bfG^{{\rm r}}, \bfG^{{\rm c}})$ satisfies $|\Gamma^{\rm c}_1|>0$, we apply the same strategy starting with column
$X$-runs. It follows from the above description that the only case that cannot be treated in this way is $|\Gamma^{\rm r}_1|+|\Gamma^{\rm c}_1|=1$. This case serves as the base of the induction and is treated via direct calculations.
We thus obtain an analog of Theorem \ref{genmainth}(ii) for the cluster structure ${\mathcal C}_{\bfG^{{\rm r}},\bfG^{{\rm c}}}$ on $\operatorname{Mat}_n$. The sought-for statement for the cluster structure on $SL_n$ follows from the fact that both $\overline{\A}_{\mathbb C}({\mathcal C}_{\bfG^{{\rm r}},\bfG^{{\rm c}}})$ and ${\mathcal O}(SL_n)$ are obtained from their $\operatorname{Mat}_n$ counterparts via the restriction to $\det X=1$.
\section{Initial basis}\label{sec:basis} The goal of this section is to prove Theorem \ref{logcanbasis}.
\subsection{The bracket}\label{sec:bra} In this paper, we only deal with $\mathfrak g = \mathfrak {sl}_n$, and hence $\mathfrak g_{\Gamma_{1}}$ and $\mathfrak g_{\Gamma_{2}}$ are subalgebras of block-diagonal matrices with nontrivial traceless blocks determined by nontrivial runs of $\Gamma_{1}$ and $\Gamma_{2}$, respectively, and zeros everywhere else. Each diagonal component is isomorphic to $\mathfrak {sl}_k$, where $k$ is the size of the corresponding run. Formula \eqref{sklyabragen}, where $R_+=R_+^{\rm c}$ and $R_+'=R_+^{\rm r}$ are given by \eqref{RplusSL} with $S$ skew-symmetric and subject to conditions \eqref{S-eq}, defines a Poisson bracket on ${\mathcal G}=SL_n$. It will be convenient to write down an extension of the bracket \eqref{sklyadoublegen} to the double $D(GL_n)$ such that its restriction to the diagonal $X=Y$ is an extension of \eqref{sklyabragen} to $GL_n$ (for brevity, in what follows we write ${\{\cdot,\cdot\}}^D$ instead of ${\{\cdot,\cdot\}}^D_{r,r'}$).
To provide an explicit expression for such an extension, we extend the maps $\gamma$ and $\gamma^*$ to the whole $\mathfrak g\mathfrak l_n$.
Namely, $\gamma$ is re-defined as the projection from $\mathfrak g\mathfrak l_n$ onto the union of diagonal blocks
specified by $\Gamma_1$, which are then moved by the Lie algebra isomorphism between $\mathfrak g_{\Gamma_{1}}$ and $\mathfrak g_{\Gamma_{2}}$ to corresponding diagonal blocks specified by $\Gamma_2$. Similarly, the adjoint map $\gamma^*$ acts as the projection to $\mathfrak g_{\Gamma_2}$ followed by the Lie algebra isomorphism that moves each diagonal block of $\mathfrak g_{\Gamma_{2}}$ back to the corresponding diagonal block of $\mathfrak g_{\Gamma_{1}}$. Consequently, \begin{equation} \label{gammaid} \begin{aligned} \gamma^*\gamma=\Pi_{\Gamma_1},\qquad \gamma\gamma^*=\Pi_{\Gamma_2},\\ \gamma\gamma^*\gamma=\gamma,\qquad \gamma^*\gamma\gamma^*=\gamma^*, \end{aligned} \end{equation} where $\Pi_{\Gamma_1}$ is the projection to $\mathfrak g_{\Gamma_{1}}$ and $\Pi_{\Gamma_2}$ is the projection to $\mathfrak g_{\Gamma_{2}}$. Note that the restriction of $\gamma$ to $\mathfrak g_{\Gamma_1}$ is nilpotent, and hence $1-\gamma$ is invertible on the whole $\mathfrak g\mathfrak l_n$.
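In more detail, since $\gamma=\gamma\Pi_{\Gamma_1}$, the nilpotency of the restriction of $\gamma$ to $\mathfrak g_{\Gamma_1}$ implies that $\gamma$ is nilpotent as an operator on the whole $\mathfrak g\mathfrak l_n$, say $\gamma^m=0$. Hence the inverse of $1-\gamma$ is given by the finite geometric series \[ \frac1{1-\gamma}=\sum_{k=0}^{m-1}\gamma^k, \] a polynomial in $\gamma$; the same applies to $1-\gamma^*$.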
We now view $\pi_>$, $\pi_<$ and $\pi_0$ as projections to the upper triangular, lower triangular and diagonal matrices, respectively. Additionally, define $\pi_{\geq}=\pi_>+\pi_0$, $\pi_\leq=\pi_<+\pi_0$ and for any square matrix $A$ write $A_>$, $A_<$, $A_0$, $A_\geq$, $A_\leq$ instead of $\pi_>A$, $\pi_<A$, $\pi_0A$, $\pi_\geq A$, $\pi_\leq A$, respectively. Finally, define operators $\nabla_X$ and $\nabla_Y$ via \[ \nabla_X f=\left(\frac{\partial f}{\partial x_{ji}}\right)_{i,j=1}^n,\qquad \nabla_Y f=\left(\frac{\partial f}{\partial y_{ji}}\right)_{i,j=1}^n, \] and operators \begin{align*} E_L=\nabla_X X+ \nabla_Y Y, \quad & \quad E_R=X \nabla_X + Y \nabla_Y,\\ \xi_L={\gamma^\ec}(\nabla_X X)+\nabla_Y Y,\quad & \quad \xi_R=X\nabla_X+{\gamma^\er}^*(Y\nabla_Y),\\ \eta_L=\nabla_X X+{\gamma^\ec}^*(\nabla_Y Y), \quad & \quad\eta_R={\gamma^\er}(X\nabla_X)+Y\nabla_Y \end{align*} via $E_L f=\nabla_X f \cdot X+ \nabla_Y f\cdot Y$, $E_R f=X \nabla_X f + Y \nabla_Y f$, and so on. The following simple relations will be used repeatedly in what follows: \begin{equation} \label{gammarel} \begin{aligned} \frac{1} {1 - {\gamma^\ec}} E_L = \nabla_X X + \frac{1} {1 - {\gamma^\ec}} \xi_L, \qquad&\frac{1} {1 - {\gamma^\er}} E_R = X \nabla_X + \frac{1} {1 - {\gamma^\er}} \eta_R,\\ \frac{1} {1 - {\gamma^\ec}^*} E_L = \nabla_Y Y + \frac{1} {1 - {\gamma^\ec}^*} \eta_L, \qquad&\frac{1} {1 - {\gamma^\er}^*} E_R = Y \nabla_Y + \frac{1} {1 - {\gamma^\er}^*} \xi_R,\\ \eta_L={\gamma^\ec}^*(\xi_L)+\Pi_{\hat\Gamma_1^{\rm c}}(\nabla_X X), \qquad& \eta_R={\gamma^\er}(\xi_R)+\Pi_{\hat\Gamma_2^{\rm r}}(Y\nabla_Y), \end{aligned} \end{equation} where $\Pi_{\hat\Gamma_{j}^{\rm l}}$ is the orthogonal projection complementary to $\Pi_{\Gamma_{j}^{\rm l}}$ for $j=1,2$, ${\rm l=r,c}$.
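For instance, the first relation in \eqref{gammarel} is verified as follows: by the definition of $\xi_L$, \[ E_L=\nabla_X X+\nabla_Y Y=(1-{\gamma^\ec})(\nabla_X X)+\xi_L, \] and applying $\frac1{1-{\gamma^\ec}}$ to both sides gives the result. The other relations in the first two lines of \eqref{gammarel} are checked in the same way, while the relations in the last line follow directly from the definitions of $\xi_L$, $\eta_L$, $\xi_R$, $\eta_R$ and \eqref{gammaid}.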
The statement below is a generalization of \cite[Lemma 4.1]{GSVMem}.
\begin{theorem}\label{doublebrack} The bracket \eqref{sklyadoublegen} on the double $D(GL_n)$ is given by \begin{multline} \label{bracket} \{f^1,f^2\}^D(X,Y) =\left\langle R^{{\rm c}}_+(E_L f^1),E_L f^2\right\rangle-\left\langle R^{{\rm r}}_+(E_R f^1),E_R f^2\right\rangle\\+ \left\langle X\nabla_{X} f^1, Y\nabla_{Y} f^2\right\rangle-\left\langle \nabla_X f^1\cdot X, \nabla_Y f^2\cdot Y\right\rangle, \end{multline} where \begin{multline} \label{rplusfin} R^{{\rm l}}_+(\zeta)=\frac1{1-\gamma^{\rm l}}\zeta_{\ge}-\frac{{\gamma^{\rm l}}^*}{1-{\gamma^{\rm l}}^*}\zeta_{<}\\-\frac12 \left (\frac{\gamma^{\rm l}}{1-\gamma^{\rm l}} + \frac{1}{1 - {\gamma^{\rm l}}^*}\right ) \zeta_0 - \frac 1 n \left(\operatorname{Tr}(\zeta){\mathbf S}^{\rm l} - \operatorname{Tr} \left(\zeta{\mathbf S}^{\rm l}\right)\mathbf 1\right) \end{multline} with \[ {\mathbf S}^{\rm l} = \frac 12 \left(\frac 1{1-\gamma^{\rm l}} - \frac 1{1 - {\gamma^{\rm l}}^*}\right )\mathbf 1 \] for $\rm l=r,c$. \end{theorem}
\begin{proof} We need to ``tweak'' $R_+$ to extend the bracket \eqref{sklyabragen} to $GL_n$ in such a way that the function $\det$ is a Casimir function. This is guaranteed by requiring that $R_+$ is extended to an operator on $\mathfrak g\mathfrak l_n$ which coincides with the one given by \eqref{RplusSL} on $\mathfrak {sl}_n$ and for which $\mathbf 1 \in \mathfrak g\mathfrak l_n$ is an eigenvector. The latter goal can be achieved by replacing \eqref{RplusSL} with \begin{equation} \label{RplusGL} R_+=\frac1{1-\gamma}\pi_{>}-\frac{\gamma^*}{1-\gamma^*}\pi_{<}+ \frac 1 2 \pi_0 + \pi^* S\pi\pi_0, \end{equation} where $\pi$ is the projection to the space of traceless diagonal matrices given by $\pi(\zeta)= \zeta - \frac 1 n \operatorname{Tr}(\zeta)\mathbf 1$, $\pi^*$ is the adjoint to $\pi$ with respect to the restriction of the trace form to the space of diagonal matrices in $\mathfrak g\mathfrak l_n$, and $S$ is an operator on this space which is skew-symmetric with respect to the restriction of the trace form and satisfies \eqref{S-eq}.
The operator $S$ in \eqref{RplusGL} can be selected as follows.
\begin{lemma} \label{tildeS} The operator \begin{equation} \label{Sanswer} S = \frac 1 2 \left ( \frac 1 {1-\gamma} - \frac 1 {1 - \gamma^*} \right ) \end{equation} with $\gamma, \gamma^*$ understood as acting on the space of diagonal matrices in $\mathfrak g\mathfrak l_n$ is skew-symmetric with respect to the restriction of the trace form to this space and satisfies \eqref{S-eq}. \end{lemma}
\begin{proof} Rewrite \eqref{Sanswer} as \[ S = \frac 1 2 \frac {1+\gamma} {1-\gamma} - \frac 1 2 \left ( \frac \gamma {1-\gamma} + \frac 1 {1 - \gamma^*} \right ). \] The first term above clearly satisfies \eqref{S-eq}. The second term, multiplied by $(1-\gamma)$ on the right, becomes \[
- \frac 1 2 \left (\gamma + \frac 1 {1 - \gamma^*} (1-\gamma) \right ) = - \frac 1 2 \frac 1 {1 - \gamma^*} \left (1 - \gamma^*\gamma \right ) \] and vanishes on $\mathfrak h_{\Gamma_1}\subset \mathfrak h$ spanned by $\mathfrak h_\alpha, \alpha \in \Gamma_1$. \end{proof}
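Note also that the skew-symmetry of $S$ in \eqref{Sanswer} is immediate: the adjoint of $\gamma$ with respect to the trace form is $\gamma^*$, hence the adjoint of $\frac1{1-\gamma}$ is $\frac1{1-\gamma^*}$, and \[ S^*=\frac12\left(\frac1{1-\gamma^*}-\frac1{1-\gamma}\right)=-S. \]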
We can now compute \begin{align*} \pi^* S\pi(\zeta_0) &= S(\zeta_0) - \frac 1 n \left (\operatorname{Tr} (\zeta) S(\mathbf 1) + \operatorname{Tr} (S(\zeta_0)) \mathbf 1\right)\\ &=S(\zeta_0)- \frac 1 n \left (\operatorname{Tr} (\zeta) S(\mathbf 1) - \operatorname{Tr} (\zeta S(\mathbf 1)) \mathbf 1\right) \end{align*} and plug into \eqref{RplusGL} taking into account \eqref{Sanswer}, which gives \eqref{rplusfin}. Expression \eqref{bracket} is obtained from \eqref{sklyadouble} in the same way as formula (4.2) in \cite{GSVMem}. \end{proof}
\subsection{Handling functions in ${\tt F}$} It will be convenient to carry out all computations in the double with functions in ${\tt F}_{\bfG^{{\rm r}},\bfG^{{\rm c}}}$, and to retrieve the statements for $F_{\bfG^{{\rm r}},\bfG^{{\rm c}}}$ via the restriction to the diagonal.
Recall that matrices ${\mathcal L}$ used for the definition of the collection ${\tt F}_{\bfG^{{\rm r}},\bfG^{{\rm c}}}$ are built from $X$- and $Y$-blocks, see Section \ref{thebasis}. We will frequently use the following comparison statement, which is an easy consequence of the definitions, see Fig.~\ref{fig:nesting}.
\begin{proposition} \label{compar} Let $X_I^J$, $X_{I'}^{J'}$ be two $X$-blocks and $Y_{\bar I}^{\bar J}$, $Y_{\bar I'}^{\bar J'}$ be two $Y$-blocks.
{\rm (i)} If $\beta'<\beta$ (respectively, $\alpha'>\alpha$) then $X_{I'}^{J'}$ fits completely inside $X_I^J$; in particular, $\alpha'\ge\alpha$ (respectively, $\beta'\le\beta$).
{\rm (ii)} If $\bar\beta'>\bar\beta$ (respectively, $\bar\alpha'<\bar\alpha$) then $Y_{\bar I'}^{\bar J'}$ fits completely inside $Y_{\bar I}^{\bar J}$; in particular, $\bar\alpha'\le\bar\alpha$ (respectively, $\bar\beta'\ge\bar\beta$). \end{proposition}
\begin{figure}
\caption{Fitting of $X$- and $Y$-blocks}
\label{fig:nesting}
\end{figure}
Consider a matrix ${\mathcal L}$ defined by a maximal alternating path in $G_{\bfG^{{\rm r}},\bfG^{{\rm c}}}$. Let us number the $X$-blocks along the path consecutively, so that the $t$-th $X$-block is denoted $X_{I_t}^{J_t}$. In a similar way we number the $Y$-blocks, so that the $t$-th $Y$-block is denoted $Y_{\bar I_t}^{\bar J_t}$. The glued blocks form a matrix ${\mathcal L}$ so that ${\mathcal L}_{K_t}^{L_t}=X_{I_t}^{J_t}$ and ${\mathcal L}_{\bar K_t}^{\bar L_t}=Y_{\bar I_t}^{\bar J_t}$, which we write as \begin{equation}\label{LLmat} {\mathcal L}=\sum_{t=1}^s X_{I_t\to K_t}^{J_t\to L_t}+ \sum_{t=1}^s Y_{\bar I_t\to\bar K_t}^{\bar J_t\to\bar L_t}. \end{equation}
According to the agreement above, if the $t$-th $X$-block is non-dummy, then the $t$-th $Y$-block lies immediately to the left of it, and if the $t$-th $Y$-block is non-dummy, then the $(t+1)$-th $X$-block lies immediately above it. In more detail, all $K_t$'s are disjoint, and the same holds for all $\bar K_t$'s; moreover, $K_t\cap \bar K_{t-1}=\varnothing$. If both $t$-th blocks are not dummy, put $\Phi_t=K_t\cap\bar K_t$. Then $\Phi_t\ne\varnothing$ corresponds to the nontrivial row runs $\Delta(\alpha_t)$ and $\bar\Delta(\bar\alpha_t)=\gamma^{{\rm r}}(\Delta(\alpha_t))$ along which the two blocks are glued. Consequently, $\Phi_t$ is the uppermost segment in $K_t$ and the lowermost segment in $\bar K_t$. If the first block is a dummy $X$-block and $\bar\Delta(\bar\alpha_1)$ is a nontrivial row $Y$-run, define $\Phi_1$ as the set of rows corresponding to $\bar\Delta(\bar\alpha_1)$; if this $Y$-run is trivial, put $\Phi_1=\varnothing$. Similarly, if the last block is a dummy $Y$-block and $\Delta(\alpha_s)$ is a nontrivial row $X$-run, define $\Phi_s$ as the set of rows corresponding to $\Delta(\alpha_s)$ and put $\bar I_s={\gamma^\er}(\Delta(\alpha_s))$; if this $X$-run is trivial, put $\Phi_s=\varnothing$. We put $K_1=\Phi_1$ for a dummy first $X$-block and $\bar K_s=\Phi_s$ for a dummy last $Y$-block to keep relation $\Phi_t=K_t\cap\bar K_t$ valid for dummy blocks as well.
\begin{figure}
\caption{The structure of ${\mathcal L}$}
\label{fig:ladder}
\end{figure}
Further, all $L_t$'s are disjoint, and the same holds for all $\bar L_t$'s; moreover, $L_t\cap\bar L_t=\varnothing$. For $2\le t\le s$, put $\Psi_t=L_{t}\cap\bar L_{t-1}$, then $\Psi_t\ne\varnothing$ corresponds to the nontrivial column runs $\bar\Delta(\bar\beta_{t-1})$ and $\Delta(\beta_t)=\gamma^{{\rm c}*}(\bar\Delta(\bar\beta_{t-1}))$. Consequently, $\Psi_t$ is the rightmost segment in $L_t$ and the leftmost segment in $\bar L_{t-1}$. If the first block is a non-dummy $X$-block and $\Delta(\beta_1)$ is a nontrivial column $X$-run, define $\Psi_1$ as the set of columns corresponding to $\Delta(\beta_1)$; if this $X$-run is trivial, or the block is dummy, define $\Psi_1=\varnothing$. Similarly, if the last block is a non-dummy $Y$-block and $\bar\Delta(\bar\beta_s)$ is a nontrivial column $Y$-run, define $\Psi_{s+1}$ as the set of columns corresponding to $\bar\Delta(\bar\beta_s)$ and put $J_{s+1}=\gamma^{{\rm c}*}(\bar\Delta(\bar\beta_{s}))$ (note that $J_{s+1}$ does not correspond to any $X$-block of ${\mathcal L}$); if this $Y$-run is trivial, or the block is dummy, define $\Psi_{s+1}=\varnothing$. We put $\bar L_0=\Psi_1$ and $L_{s+1}=\Psi_{s+1}$ to keep relation $\Psi_t=L_{t}\cap\bar L_{t-1}$ valid for $1\le t\le s+1$. The structure of the obtained matrix ${\mathcal L}$ is shown in Fig.~\ref{fig:ladder}.
It follows from \eqref{LLmat} that the gradients $\nabla_X g$ and $\nabla_Y g$ of a function $g=g({\mathcal L})$ can be written as \begin{equation}\label{naxnay} \nabla_X g=\sum_{t=1}^s (\nabla_{{\mathcal L}}g)^{K_t\to I_t}_{L_t\to J_t},\qquad \nabla_Y g=\sum_{t=1}^s (\nabla_{{\mathcal L}}g)^{\bar K_t\to \bar I_t}_{\bar L_t\to \bar J_t}. \end{equation} Note that unlike \eqref{LLmat}, the blocks in \eqref{naxnay} may overlap.
Direct computation shows that for $I=[\alpha,n]$, $J=[1,\beta]$, $\bar I=[1,\bar\alpha]$, $\bar J=[\bar\beta,n]$ one has \begin{equation}\label{xynaxy} X(\nabla_{{\mathcal L}}g)^{K\to I}_{L\to J}=\begin{bmatrix} 0 & \ast\\ 0 & X_I^J(\nabla_{{\mathcal L}}g)_L^K \end{bmatrix}, \qquad Y(\nabla_{{\mathcal L}} g)^{\bar K\to \bar I}_{\bar L\to \bar J} = \begin{bmatrix} Y_{\bar I}^{\bar J}(\nabla_{{\mathcal L}}g)_{\bar L}^{\bar K} & 0\\ \ast & 0 \end{bmatrix}. \end{equation} Here and in what follows we denote by an asterisk parts of matrices that are not relevant for further considerations. Note that the square block $X_I^J(\nabla_{{\mathcal L}}g)_L^K$ is the diagonal block defined by the index set $I$, whereas the square block $Y_{\bar I}^{\bar J}(\nabla_{{\mathcal L}}g)_{\bar L}^{\bar K}$ is the diagonal block defined by the index set $\bar I$.
Similarly, for $I$, $J$, $\bar I$, $\bar J$ as above, \begin{equation}\label{naxyxy} (\nabla_{{\mathcal L}}g)^{K\to I}_{L\to J}\cdot X=\begin{bmatrix} (\nabla_{{\mathcal L}}g)_L^K \cdot X_I^J & \ast \\ 0 & 0 \end{bmatrix}, \qquad (\nabla_{{\mathcal L}}g)^{\bar K\to \bar I}_{\bar L\to \bar J}\cdot Y= \begin{bmatrix} 0 & 0 \\ \ast & (\nabla_{{\mathcal L}}g)_{\bar L}^{\bar K}\cdot Y_{\bar I}^{\bar J} \end{bmatrix}, \end{equation} and the corresponding square blocks are diagonal blocks defined by the index sets $J$ and $\bar J$, respectively.
Let $N_+,N_-\in GL_n$ be arbitrary unipotent upper- and lower-triangular elements and $T_1,T_2\in H$ be arbitrary diagonal elements. It is easy to see that the structure of $X$- and $Y$-blocks as defined in Section~\ref{thebasis} and the way they are glued together, as shown in Fig.~\ref{fig:ladder}, imply that for any ${\tt f}\in {{\tt F}}_{\bfG^{{\rm r}},\bfG^{{\rm c}}}$ one has \begin{equation}\label{2.1} {\tt f}\left(N_+X,\exp({\gamma^\er})(N_+)Y\right)={\tt f}\left(X\exp({\gamma^\ec}^*)(N_-),YN_-\right)={\tt f}(X,Y) \end{equation} and \begin{equation}\label{2.2}
{\tt f}\left(T_1X\exp({\gamma^\er}^*)(T_2),\exp({\gamma^\ec})(T_1)YT_2\right)=a^{\rm c}(T_1)a^{\rm r}(T_2) {\tt f}(X,Y), \end{equation} where $a^{\rm c}(T_1)$ and $a^{\rm r}(T_2)$ are constants depending only on $T_1$ and $T_2$, respectively.
It will be more convenient to work with the logarithms of the functions ${\tt f}\in {{\tt F}}_{\bfG^{{\rm r}},\bfG^{{\rm c}}}$, instead of the functions ${\tt f}$ themselves. The corresponding infinitesimal form of the invariance properties \eqref{2.1} and~\eqref{2.2} reads: for any ${\tt f}\in {{\tt F}}_{\bfG^{{\rm r}},\bfG^{{\rm c}}}$, \begin{equation}\label{infinv1} \left\langle \xi_R{\tt g}, n_+\right\rangle= \left\langle \xi_L{\tt g}, n_-\right\rangle=0 \end{equation}
and \begin{equation}\label{infinv2} (\xi_L{\tt g} )_0=\text{const},\quad (\xi_R{\tt g})_0=\text{const} \end{equation} with ${\tt g}=\log{\tt f}$. Additional invariance properties of the functions in ${{\tt F}}_{\bfG^{{\rm r}},\bfG^{{\rm c}}}$ are given by the following statement.
\begin{lemma}\label{partrace} For any ${\tt f}\in {{\tt F}}_{\bfG^{{\rm r}},\bfG^{{\rm c}}}$, any $X$-run $\Delta$ and any $Y$-run $\bar\Delta$, \begin{align*} \operatorname{Tr}(\nabla_X{\tt g}\cdot X)_\Delta^\Delta&= \rm{const}, \qquad \operatorname{Tr}(X\nabla_X{\tt g})_\Delta^\Delta =\rm{const},\\ \operatorname{Tr}(\nabla_Y{\tt g}\cdot Y)_{\bar\Delta}^{\bar\Delta}&= \rm{const}, \qquad \operatorname{Tr}(Y\nabla_Y{\tt g})_{\bar\Delta}^{\bar\Delta}= \rm{const} \end{align*} with ${\tt g}=\log{\tt f}$. \end{lemma}
\begin{proof} Consider for example the second equality above. Let $\mathbf 1_\Delta$ denote the diagonal $n\times n$ matrix whose entry $(j,j)$ equals $1$ if $j\in\Delta$ and $0$ otherwise. Condition $\operatorname{Tr}(X\nabla_X{\tt g})_\Delta^\Delta=a_\Delta$ for an integer constant $a_\Delta$ is the infinitesimal version of the equality \begin{equation}\label{globalpt} {\tt f}((\mathbf 1_n+(z-1)\mathbf 1_\Delta)X,Y)=z^{a_\Delta}{\tt f}(X,Y). \end{equation}
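In more detail, the equivalence is seen by differentiating \eqref{globalpt} with respect to $z$ at $z=1$, which gives \[ \sum_{i\in\Delta}\sum_{j=1}^n x_{ij}\frac{\partial{\tt f}}{\partial x_{ij}}=a_\Delta\,{\tt f}(X,Y); \] the left-hand side equals $\operatorname{Tr}(X\nabla_X{\tt f})_\Delta^\Delta$, so dividing by ${\tt f}$ yields $\operatorname{Tr}(X\nabla_X{\tt g})_\Delta^\Delta=a_\Delta$.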
To establish the latter, recall that ${\tt f}(X,Y)$ is a principal minor of a matrix ${\mathcal L}\in{\mathbf L}$. Clearly, ${\tt f}((\mathbf 1_n+(z-1)\mathbf 1_\Delta)X,Y)$ represents the same principal minor in the matrix ${\mathcal L}(z)$ obtained from ${\mathcal L}$ via multiplying by $z$ every submatrix ${\mathcal L}^{L_t}_{R_t}$ such that the row set $R_t$ corresponds to the $X$-run $\Delta$. There are two types of such submatrices: those for which $R_t$ lies strictly below $\Phi_t$ and those for which $R_t$ coincides with $\Phi_t$ (the latter might happen only when the $X$-run $\Delta$ is nontrivial). To perform the above operation on each submatrix of the first type it suffices to multiply ${\mathcal L}$ on the left by the diagonal matrix having $z$ in all positions corresponding to $R_t$ and $1$ in all other positions. To handle a submatrix of the second type, we multiply by $z$ all rows of ${\mathcal L}$ starting from the first one and ending at the lowest row in $\bar K_t$, and divide by $z$ all columns starting from the first one and ending at the rightmost column in $\bar L_t$, see Fig.~\ref{fig:ladder}. Clearly, this is equivalent to the left multiplication of ${\mathcal L}$ by a diagonal matrix whose entries are either $z$ or $1$ and the right multiplication of ${\mathcal L}$ by a diagonal matrix whose entries are either $z^{-1}$ or $1$. Consequently, every principal minor of ${\mathcal L}(z)$ is an integer power of $z$ times the corresponding minor of ${\mathcal L}$, and \eqref{globalpt} follows.
A similar reasoning shows that the remaining three equalities in the statement of the lemma hold as well.
\end{proof}
Furthermore, the following statement holds true.
\begin{lemma}\label{twomoreinv} For any ${\tt f}\in {{\tt F}}_{\bfG^{{\rm r}},\bfG^{{\rm c}}}$, \begin{equation}\label{pigammac} \begin{aligned} \Pi_{\hat\Gamma_1^{\rm l}} (\nabla_X{\tt g}\cdot X)_0 &= \rm{const}, \qquad \Pi_{\hat\Gamma_1^{\rm l}} (X\nabla_X{\tt g})_0 =\rm{const},\\ \Pi_{\hat\Gamma_2^{\rm l}} (\nabla_Y{\tt g}\cdot Y)_0 &= \rm{const},\qquad \Pi_{\hat\Gamma_2^{\rm l}} (Y\nabla_Y{\tt g})_0 =\rm{const} \end{aligned} \end{equation} with ${\tt g}=\log{\tt f}$ and ${\rm l}={\rm c},{\rm r}$. \end{lemma}
\begin{proof} As in the proof of Lemma \ref{partrace}, we focus only on the second equality in \eqref{pigammac}; the other three can be treated in a similar way.
For any diagonal matrix $\zeta$ we have \begin{equation}\label{complproj}
\Pi_{\hat\Gamma_1^{\rm l}}(\zeta)=\sum_{\Delta}\frac1{|\Delta|}\operatorname{Tr} (\zeta_\Delta^\Delta)\mathbf 1_\Delta, \end{equation} where the sum is taken over all $X$-runs. Let $\zeta=(X\nabla_X{\tt g})_0$, then by Lemma~\ref{partrace} all terms in the sum above are constant. \end{proof}
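For illustration, suppose $n=3$ and the $X$-runs are $\Delta_1=\{1,2\}$ (nontrivial) and $\Delta_2=\{3\}$ (trivial); then for $\zeta=\operatorname{diag}(a,b,c)$ formula \eqref{complproj} gives \[ \Pi_{\hat\Gamma_1^{\rm l}}(\zeta)=\frac{a+b}2\,\mathbf 1_{\Delta_1}+c\,\mathbf 1_{\Delta_2}=\operatorname{diag}\left(\frac{a+b}2,\frac{a+b}2,c\right), \] which is indeed complementary to the projection onto the traceless part $\operatorname{diag}\left(\frac{a-b}2,\frac{b-a}2,0\right)$ of the block corresponding to $\Delta_1$.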
\begin{corollary} {\rm(i)} For any ${\tt f}\in {{\tt F}}_{\bfG^{{\rm r}},\bfG^{{\rm c}}}$, \begin{equation}\label{traces} \begin{aligned} \operatorname{Tr}(\nabla_X{\tt g}\cdot X)=\rm{const}, \qquad \operatorname{Tr}(X\nabla_X{\tt g})=\rm{const},\\ \operatorname{Tr}(\nabla_Y{\tt g}\cdot Y)=\rm{const}, \qquad \operatorname{Tr}(Y\nabla_Y{\tt g})=\rm{const} \end{aligned} \end{equation} with ${\tt g}=\log{\tt f}$.
{\rm(ii)} For any ${\tt f}\in {{\tt F}}_{\bfG^{{\rm r}},\bfG^{{\rm c}}}$, \begin{equation}\label{infinv3} (\eta_L{\tt g})_0=\rm{const},\qquad (\eta_R{\tt g})_0=\rm{const} \end{equation} with ${\tt g}=\log{\tt f}$. \end{corollary}
\begin{proof} (i) Follows immediately from Lemma \ref{twomoreinv} and the equality $\operatorname{Tr}\zeta=\operatorname{Tr}\Pi_{\hat\Gamma_1^{\rm l}}(\zeta)=\operatorname{Tr}\Pi_{\hat\Gamma_2^{\rm l}}(\zeta)$ for any $\zeta$ and ${\rm l}={\rm c},{\rm r}$.
(ii) Follows immediately from Lemma \ref{twomoreinv} and~\eqref{infinv2} via the last two relations in~\eqref{gammarel}. \end{proof}
\subsection{Proof of Theorem \ref{logcanbasis}: first steps}
Theorem \ref{logcanbasis} is an immediate corollary of the following result.
\begin{theorem} \label{logcandouble} For any ${\tt f}^1, {\tt f}^2\in {{\tt F}}_{\bfG^{{\rm r}},\bfG^{{\rm c}}}$, the bracket $\{\log{\tt f}^1,\log{\tt f}^2\}^D$ is constant. \end{theorem}
The proof of the theorem is given in this and the following sections. It comprises a number of explicit formulas for the objects involved.
\subsubsection{Explicit expression for the bracket} Let us derive an explicit expression for $\{\log{\tt f}^1,\log{\tt f}^2\}^D$. To indicate that an operator is applied to a function $\log{\tt f}^i$, $i=1,2$, we add $i$ as an upper index of the corresponding operator, so that $\nabla^1_X X=\nabla_X \log{\tt f}^1\cdot X$, $E_L^2=E_L \log{\tt f}^2$, etc.
Let \begin{equation} \label{R0} R_0(\zeta)=-\frac12 \left (\frac{\gamma}{1-\gamma} + \frac{1}{1 - \gamma^*}\right ) \zeta_0 - \frac 1 n \left(\operatorname{Tr}(\zeta){\mathbf S} - \operatorname{Tr} \left(\zeta{\mathbf S}\right)\mathbf 1\right), \end{equation} for $\zeta\in\mathfrak g\mathfrak l_n$, cf.~\eqref{rplusfin}; clearly, $R_0(\zeta)$ is a diagonal matrix.
\begin{proposition} For any ${\tt f}^1, {\tt f}^2\in {{\tt F}}_{\bfG^{{\rm r}},\bfG^{{\rm c}}}$, \begin{multline}\label{bra} \{\log{\tt f}^1,\log{\tt f}^2\}^D\\=\left\langle R_0^{\rm c}(E_L^1),E_L^2\right\rangle -\left\langle R_0^{\rm r}(E_R^1),E_R^2\right\rangle + \left\langle ( \xi_L^1)_0 , \frac{1}{1-{\gamma^\ec}^*} (\eta_L^2)_0 \right\rangle\\- \left\langle ( \eta_R^1)_0 , \frac{1}{1-{\gamma^\er}^*} (\xi_R^2)_0 \right\rangle+ \left\langle \Pi_{\hat\Gamma_2^{\rm c}}( \xi_L^1)_0 , \Pi_{\hat\Gamma_2^{\rm c}}(\nabla_Y^2 Y)_0 \right\rangle \\ -\left\langle(\eta_L^1)_{<},(\eta_L^2)_{>}\right\rangle- \left\langle(\eta_R^1)_{\ge},(\eta_R^2)_{\le}\right\rangle+ \left\langle{\gamma^\ec}^*(\xi_L^1)_{\le},{\gamma^\ec}^*(\nabla_Y^2 Y)\right\rangle+ \left\langle{\gamma^\er}(\xi_R^1)_{\ge},{\gamma^\er}(X\nabla_X^2)\right\rangle. \end{multline} \end{proposition}
\begin{proof}
First, it follows from Theorem \ref{doublebrack} that \begin{equation} \label{bracket1} \{\log{\tt f}^1,\log{\tt f}^2\}^D =\left\langle R^{\rm c}_+(E_L^1)-\nabla_X^1 X ,E_L^2\right\rangle-\left\langle R^{\rm r}_+(E_R^1) - X\nabla_{X}^1,E_R^2\right\rangle. \end{equation} By \eqref{gammarel} and \eqref{R0}, \begin{align*} R^{\rm c}_+(E_L^1) - \nabla_X^1 X &= R^{\rm c}_0(E_L^1) + \frac{1}{1-{\gamma^\ec}} ( \xi_L^1)_{\ge} - \frac{1}{1-{\gamma^\ec}^*} (\eta_L^1)_{<}\\ &= R^{\rm c}_0(E_L^1) + \frac{1}{1-{\gamma^\ec}} ( \xi_L^1)_0 - \frac{1}{1-{\gamma^\ec}^*}(\eta_L^1)_{<}; \end{align*} the second equality holds since $\xi_L^1 \in \mathfrak b_-$ by \eqref{infinv1}. Similarly, \begin{equation}\label{term1left} \begin{aligned} R^{\rm r}_+(E_R^1) - X \nabla_X^1 &= R^{\rm r}_0(E_R^1) + \frac{1}{1-{\gamma^\er}} ( \eta_R^1)_{\ge} - \frac{1}{1-{\gamma^\er}^*} (\xi_R^1)_{<}\\
&= R^{\rm r}_0(E_R^1) + \frac{1}{1-{\gamma^\er}} ( \eta_R^1)_{\ge}; \end{aligned} \end{equation} the second equality holds since $\xi_R^1 \in \mathfrak b_+$ by \eqref{infinv1}.
Consequently, the first term in \eqref{bracket1} is equal to \begin{equation} \label{term1} \left\langle R_0^{\rm c}(E_L^1), E_L^2 \right\rangle + \left\langle \frac{1}{1-{\gamma^\ec}} ( \xi_L^1)_0 , E_L^2 \right\rangle\\ - \left\langle\frac{1}{1-{\gamma^\ec}^*} (\eta_L^1)_{<}, E_L^2\right\rangle. \end{equation} The second term in \eqref{term1} can be re-written via \eqref{gammarel} as \begin{multline*}
\left\langle \frac{1}{1-{\gamma^\ec}} ( \xi_L^1)_0 , E_L^2 \right\rangle
= \left\langle ( \xi_L^1)_0 , \nabla_Y^2 Y + \frac{1}{1-{\gamma^\ec}^*} \eta_L^2 \right\rangle\\
= \left\langle ( \xi_L^1)_0 , \frac{1}{1-{\gamma^\ec}^*} \eta_L^2\right\rangle + \left\langle \Pi_{\hat\Gamma_2^{\rm c}}(\xi_L^1)_0, \Pi_{\hat\Gamma_2^{\rm c}}(\nabla_Y^2 Y) \right\rangle+ \left\langle \Pi_{\Gamma_2^{\rm c}}(\xi_L^1)_0, \Pi_{\Gamma_2^{\rm c}}(\nabla_Y^2 Y) \right\rangle\\
= \left\langle ( \xi_L^1)_0, \frac{1}{1-{\gamma^\ec}^*} (\eta_L^2)_0 \right\rangle + \left\langle \Pi_{\hat\Gamma_2^{\rm c}}(\xi_L^1)_0, \Pi_{\hat\Gamma_2^{\rm c}}(\nabla_Y^2 Y)_0 \right\rangle
+ \left\langle {\gamma^\ec}^*( \xi_L^1)_0 , {\gamma^\ec}^* (\nabla_Y^2 Y) \right\rangle, \end{multline*} where the last equality follows from \eqref{gammaid}.
We re-write the third term in \eqref{term1} as \begin{multline*}
\left\langle(\eta_L^1)_{<}, \frac{1}{1-{\gamma^\ec}} E_L^2\right\rangle = \left\langle (\eta_L^1)_{<}, \nabla_X^2 X + \frac{1}{1-{\gamma^\ec}} \xi_L^2\right\rangle = \left\langle(\eta_L^1)_{<}, \nabla_X^2 X \right\rangle\\ = \left\langle (\eta_L^1)_{<}, \eta_L^2 \right\rangle - \left\langle (\eta_L^1)_{<}, {\gamma^\ec}^* (\nabla_Y^2 Y )\right\rangle = \left\langle (\eta_L^1)_{<}, \eta_L^2 \right\rangle - \left\langle {\gamma^\ec}^* (\xi_L^1)_{<}, {\gamma^\ec}^* (\nabla_Y^2 Y )\right\rangle, \end{multline*} where the second equality follows from \eqref{infinv1}, and the last equality, from \eqref{gammarel} and $\left\langle \Pi_{\hat\Gamma_1^{\rm c}}(A),{\gamma^\ec}^*(B)\right\rangle=0$ for any $A, B$.
Similarly, the second term in \eqref{bracket1} is equal to \begin{multline}\label{term2} \left\langle R^{\rm r}_0(E_R^1), E_R^2\right\rangle + \left\langle\frac{1}{1-{\gamma^\er}}(\eta_R^1)_{\ge}, E_R^2\right\rangle \\ =\left\langle R_0^{\rm r}(E_R^1), E_R^2 \right\rangle + \left\langle ( \eta_R^1)_{\ge} , Y \nabla_Y^2 \right\rangle + \left\langle ( \eta_R^1)_0 , \frac{1}{1-{\gamma^\er}^*} (\xi_R^2)_0 \right\rangle\\ =\left\langle R_0^{\rm r}(E_R^1), E_R^2 \right\rangle + \left\langle ( \eta_R^1)_0 , \frac{1}{1-{\gamma^\er}^*} (\xi_R^2)_0 \right\rangle + \left\langle ( \eta_R^1)_{\ge} , \eta_R^2 \right\rangle - \left\langle {\gamma^\er} (\xi_R^1)_{\ge}, {\gamma^\er}(X\nabla_X^2 )\right\rangle. \end{multline}
Combining \eqref{term1}, \eqref{term2} and plugging the result into \eqref{bracket1}, we obtain \eqref{bra} as required. \end{proof}
\subsubsection{Diagonal contributions} Note that the third, the fourth and the fifth terms in \eqref{bra} are constant due to \eqref{infinv2} and \eqref{pigammac}. The first two terms are handled by the following statement.
\begin{lemma} \label{R0const} The quantities $\left\langle R_0(E_L^1), E_L^2 \right\rangle$ and $\left\langle R_0(E_R^1), E_R^2 \right\rangle$ are constant for any ${\tt f}^1, {\tt f}^2\in {{\tt F}}_{\bfG^{{\rm r}},\bfG^{{\rm c}}}$. \end{lemma}
\begin{proof} Let us start with \begin{multline} \label{R0L12} \left\langle R_0(E_L^1), E_L^2 \right\rangle = - \frac 1 2 \left\langle \left ( \frac {\gamma}{1-\gamma} + \frac{1} {1 - \gamma^*} \right ) (E_L^1)_0 , E_L^2 \right\rangle\\
- \frac 1 n \left ( \operatorname{Tr} ( E_L^1 ) \operatorname{Tr}(E_L^2{\mathbf S}) - \operatorname{Tr}(E_L^1{\mathbf S}) \operatorname{Tr} ( E_L^2 ) \right ), \end{multline} where $\gamma={\gamma^\ec}$. First, note that \begin{multline}\label{traceELS} \operatorname{Tr} ( E_L^i{\mathbf S}) = \left \langle E_L^i,\left (\frac {1}{1-\gamma} - \frac{1} {1 - \gamma^*}\right)\mathbf 1\right \rangle = \operatorname{Tr}\left ( \left ( \frac {1}{1-\gamma^*} - \frac{1} {1 - \gamma} \right ) E_L^i \right ) \\ = \operatorname{Tr}\left(\frac {1}{1-\gamma^*}\eta_L^i - \frac{1} {1 - \gamma} \xi_L^i + \nabla_Y^i Y - \nabla_X^i X \right ) =\text{const} \end{multline} for $i=1,2$ by \eqref{gammarel}, \eqref{infinv2}, \eqref{traces} and \eqref{infinv3}. Thus, the terms in the second line in \eqref{R0L12} are constant.
Next, by \eqref{gammarel}, \begin{equation}\label{for_frozen} \begin{aligned}
\left ( \frac {\gamma}{1-\gamma} + \frac{1} {1 - \gamma^*} \right ) E_L &= \frac {1}{1-\gamma} \xi_L + \frac{1} {1 - \gamma^*}\eta_L,\\ \left\langle\frac1{1-\gamma}\xi_L^1,E_L^2\right\rangle&= \left\langle \xi_L^1,\nabla_Y^2 Y+\frac1{1-\gamma^*}\eta_L^2\right\rangle,\\ \left\langle\frac1{1-\gamma^*}\eta_L^1,E_L^2\right\rangle&= \left\langle \eta_L^1,\nabla_X^2 X+\frac1{1-\gamma}\xi_L^2\right\rangle,\\ \end{aligned} \end{equation} and hence \begin{multline} \label{R0L} \left\langle \left ( \frac {\gamma}{1-\gamma} + \frac{1} {1 - \gamma^*} \right )(E_L^1)_0 , E_L^2 \right\rangle \\ = \left\langle (\xi_L^1)_0, \nabla_Y^2 Y+ \frac{1} {1 - \gamma^*}\eta_L^2 \right\rangle + \left\langle (\eta_L^1)_0, \nabla_X^2 X + \frac{1} {1 - \gamma}\xi_L^2 \right\rangle \\ = \left\langle (\xi_L^1)_0, \frac{1} {1 - \gamma^*}(\eta_L^2)_0 \right\rangle + \left\langle (\eta_L^1)_0, \frac{1} {1 - \gamma}(\xi_L^2)_0 \right\rangle + \left\langle (\xi_L^1)_0, (\xi_L^2 )_0 \right\rangle\\
+ \left\langle (\eta_L^1)_0, \nabla_X^2 X \right\rangle - \left\langle (\xi_L^1)_0, \gamma (\nabla_X^2 X) \right\rangle. \end{multline} Each of the first three terms in \eqref{R0L} is constant by \eqref{infinv2} and \eqref{infinv3}. Note that by \eqref{gammaid}, \[ \left\langle (\xi_L^1)_0, \gamma (\nabla_X^2 X)\right\rangle = \left\langle \gamma^*\gamma (\nabla_X^1 X)_0 + \gamma^*(\nabla_Y^1 Y)_0, \nabla_X^2 X \right\rangle =\left\langle \Pi_{\Gamma_1}(\eta_L^1)_0 , \nabla_X^2 X \right\rangle \] with $\Gamma_1=\Gamma_1^{\rm c}$, and so the last two terms in \eqref{R0L} combine into \[ \left\langle \Pi_{\hat\Gamma_1}(\eta_L^1)_0 , \Pi_{\hat\Gamma_1}(\nabla_X^2 X)_0\right\rangle, \] which is constant by \eqref{pigammac}.
Similarly,
\begin{multline} \label{R0R12} \left\langle R_0(E_R^1), E_R^2 \right\rangle = - \frac 1 2 \left\langle \left ( \frac {\gamma}{1-\gamma} + \frac{1} {1 - \gamma^*} \right ) (E_R^1)_0 , E_R^2 \right\rangle\\
- \frac 1 n \left ( \operatorname{Tr} ( E_R^1 ) \operatorname{Tr} (E_R^2{\mathbf S}) - \operatorname{Tr}(E_R^1 {\mathbf S}) \operatorname{Tr} ( E_R^2 ) \right ) \end{multline} with $\gamma={\gamma^\er}$. As before, \begin{multline*} \operatorname{Tr} (E_R^i{\mathbf S}) = \left \langle E_R^i , \left (\frac {1}{1-\gamma} - \frac{1} {1 - \gamma^*}\right )\mathbf 1 \right \rangle \\ = \operatorname{Tr}\left ( \frac {1}{1-\gamma^*}\xi_R^i - \frac{1} {1 - \gamma} \eta_R^i + Y \nabla_Y^i - X \nabla_X^i \right ) = \text{const} \end{multline*} for $i=1,2$, and \begin{multline*}
\left\langle \left(\frac {\gamma}{1-\gamma} + \frac{1} {1 - \gamma^*}\right)(E_R^1)_0 , E_R^2 \right\rangle \\ = \left\langle (\eta_R^1)_0, Y\nabla_Y^2 + \frac{1} {1 - \gamma^*}\xi_R^2 \right\rangle + \left\langle (\xi_R^1)_0, X\nabla_X^2 + \frac{1} {1 - \gamma}\eta_R^2 \right\rangle \\ = \left\langle (\eta_R^1)_0, \frac{1} {1 - \gamma^*}(\xi_R^2)_0 \right\rangle + \left\langle (\xi_R^1)_0, \frac{1} {1 - \gamma}(\eta_R^2)_0 \right\rangle + \left\langle (\xi_R^1)_0, (\xi_R^2 )_0 \right\rangle\\ + \left\langle (\eta_R^1)_0, Y\nabla_Y^2 \right\rangle - \left\langle (\xi_R^1)_0, \gamma^* (Y\nabla_Y^2) \right\rangle. \end{multline*} Each of the first three terms above is constant by \eqref{infinv2} and \eqref{infinv3}, while \[ \left\langle (\eta_R^1)_0, Y\nabla_Y^2 \right\rangle - \left\langle (\xi_R^1)_0, \gamma^* (Y\nabla_Y^2) \right\rangle= \left\langle \Pi_{\hat\Gamma_2}(\eta_R^1)_0 , \Pi_{\hat\Gamma_2}(Y\nabla_Y^2)_0 \right\rangle = \text{const} \] with $\Gamma_2=\Gamma_2^{\rm r}$. Thus, the right hand side of \eqref{R0R12} is constant as well, and we are done. \end{proof}
\subsubsection{Simplified version of the maps $\gamma$ and $\gamma^*$}\label{simplega} To proceed further, we define more ``accessible'' versions of the maps $\gamma$ and $\gamma^*$. Recall that $\mathfrak g_{\Gamma_{1}}$ and $\mathfrak g_{\Gamma_{2}}$ defined above are subalgebras of block-diagonal matrices with nontrivial traceless blocks determined by nontrivial runs of $\Gamma_{1}$ and $\Gamma_{2}$, respectively, and zeros everywhere else. Each diagonal component is isomorphic to $\mathfrak {sl}_k$, where $k$ is the size of the corresponding run. To modify the definition of $\gamma$, we first modify each nontrivial diagonal block in $\mathfrak g_{\Gamma_{1}}$ and $\mathfrak g_{\Gamma_{2}}$ from $\mathfrak {sl}_k$ to $\operatorname{Mat}_k$ by dropping the tracelessness condition. Next, ${\mathring{\gamma}}$ is defined as the projection from $\operatorname{Mat}_n$ onto the union of diagonal blocks specified by $\Gamma_1$, which are then moved to corresponding diagonal blocks specified by $\Gamma_2$. Similarly, the adjoint map ${\mathring{\gamma}}^*$ acts as the projection to $\operatorname{Mat}_{\Gamma_2}$ followed by a map that moves each diagonal block of $\operatorname{Mat}_{\Gamma_{2}}$ back to the corresponding diagonal block of $\operatorname{Mat}_{\Gamma_{1}}$. Consequently, ringed analogs of relations \eqref{gammaid} remain valid with $\mathring{\Pi}_{\Gamma_1}$ understood as the orthogonal projection to $\operatorname{Mat}_{\Gamma_{1}}$ and $\mathring{\Pi}_{\Gamma_2}$ as the orthogonal projection to $\operatorname{Mat}_{\Gamma_{2}}$. 
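The block-moving description of ${\mathring{\gamma}}$ and ${\mathring{\gamma}}^*$ can be illustrated by a small numerical sanity check. The sketch below is ours and not part of the construction: it assumes a hypothetical configuration with a single nontrivial run for $\Gamma_1$ (rows and columns $1$--$2$ of $\operatorname{Mat}_5$) moved to a single run for $\Gamma_2$ (rows and columns $3$--$4$), together with the trace pairing $\left\langle A,B\right\rangle=\operatorname{Tr}(AB)$, and verifies that the two block-moving maps are indeed mutually adjoint.

```python
import numpy as np

# Hypothetical data: one nontrivial run for Gamma_1 (rows/columns 1-2) and the
# corresponding run for Gamma_2 (rows/columns 3-4) inside Mat_5.
n = 5
r1, r2 = slice(0, 2), slice(2, 4)

def gamma_ring(a):
    """Project onto the Gamma_1 diagonal block and move it to the Gamma_2 position."""
    out = np.zeros_like(a)
    out[r2, r2] = a[r1, r1]
    return out

def gamma_ring_star(a):
    """Project onto the Gamma_2 diagonal block and move it back to Gamma_1."""
    out = np.zeros_like(a)
    out[r1, r1] = a[r2, r2]
    return out

rng = np.random.default_rng(0)
A, B = rng.standard_normal((n, n)), rng.standard_normal((n, n))
lhs = np.trace(gamma_ring(A) @ B)       # <gamma(A), B> under <X, Y> = Tr(XY)
rhs = np.trace(A @ gamma_ring_star(B))  # <A, gamma*(B)>
print(abs(lhs - rhs) < 1e-12)           # True: the two maps are mutually adjoint
```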
Further, we define $\mathring{\xi}_L$, $\mathring{\xi}_R$, $\mathring{\eta}_L$ and $\mathring{\eta}_R$ with ${\cgamma^{\rm r}}$ and ${{\cgamma^{\rm c}}}$ replacing ${\gamma^\er}$ and ${\gamma^\ec}$ and note that the ringed versions of the last two relations in \eqref{gammarel} remain valid with $\mathring{\Pi}_{\hat\Gamma_1}$ and $\mathring{\Pi}_{\hat\Gamma_2}$ being orthogonal projections complementary to $\mathring{\Pi}_{\Gamma_1}$ and $\mathring{\Pi}_{\Gamma_2}$, respectively. Observe that the ringed versions of the other four relations in \eqref{gammarel} are no longer true, since $1-{\mathring{\gamma}}$ and $1-{\mathring{\gamma}}^*$ might be non-invertible.
It is easy to see that ${\mathring{\gamma}}$ and ${\mathring{\gamma}}^*$ differ from $\gamma$ and $\gamma^*$, respectively, only on the diagonal. Consequently, invariance properties \eqref{2.1} and \eqref{infinv1} remain valid in ringed versions. Further, the ringed version of the invariance property \eqref{2.2} remains valid as well, albeit with different constants $a^{\rm c}(T_1)$ and $a^{\rm r}(T_2)$, which yields the ringed version of \eqref{infinv2}. Ringed relations \eqref{pigammac} also hold true: indeed, the sum in \eqref{complproj} is now taken only over trivial $X$-runs. As a corollary, we restore ringed versions of relations \eqref{infinv3}.
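The first claim above can also be checked on a toy model. The sketch below is ours, not part of the argument: it models $\gamma$ on a single hypothetical run as the same block move as ${\mathring{\gamma}}$ preceded by the projection of the block onto its traceless part, and confirms that the difference of the two maps is then diagonal.

```python
import numpy as np

# Hypothetical single run of size k for Gamma_1 (rows/columns 1-2), moved to the
# Gamma_2 position (rows/columns 3-4). gamma is modeled as the same block move
# preceded by projection onto the traceless part of the block.
n, k = 5, 2
r1, r2 = slice(0, k), slice(2, 2 + k)

def move(a):
    """The ringed map: move the Gamma_1 diagonal block to the Gamma_2 position."""
    out = np.zeros_like(a)
    out[r2, r2] = a[r1, r1]
    return out

def gamma_model(a):
    """Toy model of the original gamma: make the moved block traceless first."""
    out = move(a)
    out[r2, r2] -= np.trace(a[r1, r1]) / k * np.eye(k)
    return out

A = np.random.default_rng(1).standard_normal((n, n))
D = move(A) - gamma_model(A)  # difference between the ringed and the original map
print(np.allclose(D, np.diag(np.diag(D))))  # True: they differ only on the diagonal
```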
Recall that to complete the proof of Theorem \ref{logcandouble}, it remains to consider the four last terms in \eqref{bra}. The following observation plays a crucial role in handling these terms.
\begin{lemma}\label{difference} For each one of the last four terms in \eqref{bra}, the difference between the initial and the ringed version is constant. \end{lemma}
\begin{proof} The equality $\left\langle (\eta^1_L)_{<},(\eta_L^2)_{>}\right\rangle=\left\langle (\mathring{\eta}^1_L)_{<},(\mathring{\eta}_L^2)_{>}\right\rangle$ is trivial, since $\gamma^*$ and ${\mathring{\gamma}}^*$ coincide on $\mathfrak n_+$ and $\mathfrak n_-$.
For the second of the four terms, we have to consider the difference \begin{multline*} \left\langle (\mathring{\eta}^1_R)_{0},(\mathring{\eta}_R^2)_{0}\right\rangle- \left\langle (\eta^1_R)_{0},(\eta_R^2)_{0}\right\rangle= \left\langle {\cgamma^{\rm r}}(X\nabla_X^1)_{0}-{\gamma^\er}(X\nabla_X^1)_{0},(Y\nabla_Y^2)_{0}\right\rangle\\+ \left\langle (Y\nabla_Y^1)_{0},{\cgamma^{\rm r}}(X\nabla_X^2)_{0}-{\gamma^\er}(X\nabla_X^2)_{0}\right\rangle\\+ \left\langle ({\cgamma^{\rm r}}-{\gamma^\er})(X\nabla_X^1)_{0},{\cgamma^{\rm r}}(X\nabla_X^2)_{0}\right\rangle+ \left\langle {\gamma^\er}(X\nabla_X^1)_{0},({\cgamma^{\rm r}}-{\gamma^\er})(X\nabla_X^2)_{0}\right\rangle. \end{multline*} The first summand in the right hand side above equals \[
\sum_\Delta\frac1{|\Delta|}\operatorname{Tr}(X\nabla_X^1)_\Delta^\Delta\operatorname{Tr}(Y\nabla_Y^2)_{{\gamma^\er}(\Delta)}^{{\gamma^\er}(\Delta)}, \] where the sum is taken over all nontrivial row $X$-runs. By Lemma \ref{partrace}, each factor in this expression is constant, and hence the same holds true for the whole sum. The remaining three summands can be treated in a similar way.
The remaining two terms in \eqref{bra} are treated in the same way as the second term. \end{proof}
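The trace identity appearing in the proof can likewise be tested numerically. The sketch below is ours, with a single hypothetical nontrivial run $\Delta$; it models ${\mathring{\gamma}}-\gamma$ as the map that places the trace part of the $\Delta$ block at the $\gamma(\Delta)$ position, and confirms that the pairing of $({\mathring{\gamma}}-\gamma)(M)$ with $N$ reduces to a product of partial traces divided by $|\Delta|$, as in the summand formula above.

```python
import numpy as np

# Toy check: for a single hypothetical run Delta of size k, pairing
# (ring-gamma - gamma)(M) with N under <X, Y> = Tr(XY) should equal
# (1/|Delta|) * Tr(M restricted to Delta) * Tr(N restricted to gamma(Delta)).
n, k = 5, 2
r1, r2 = slice(0, k), slice(2, 2 + k)  # Delta and gamma(Delta)

def ring_minus_gamma(m):
    """Place the trace part of the Delta block at the gamma(Delta) position."""
    out = np.zeros_like(m)
    out[r2, r2] = np.trace(m[r1, r1]) / k * np.eye(k)
    return out

rng = np.random.default_rng(2)
M, N = rng.standard_normal((n, n)), rng.standard_normal((n, n))
lhs = np.trace(ring_minus_gamma(M) @ N)              # <(ring-gamma - gamma)(M), N>
rhs = np.trace(M[r1, r1]) * np.trace(N[r2, r2]) / k  # product of partial traces / |Delta|
print(abs(lhs - rhs) < 1e-12)                        # True
```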
Based on Lemma \ref{difference}, from now on we proceed with the ringed versions of the last four terms in \eqref{bra}.
\subsubsection{Explicit expression for $\left\langle (\mathring{\eta}^1_L)_{<},(\mathring{\eta}_L^2)_{>}\right\rangle$}\label{etaletasec} Let ${\tt f}^i$ be the $l^i\times l^i$ trailing minor of ${\mathcal L}^i$; then \begin{equation}\label{lnal} {\mathcal L}^i\nabla_{{\mathcal L}}^i=\begin{bmatrix} 0 & \ast \\ 0 & \mathbf 1_{l^i} \end{bmatrix}, \qquad \nabla_{{\mathcal L}}^i{\mathcal L}^i =\begin{bmatrix} 0 & 0 \\ \ast & \mathbf 1_{l^i}\end{bmatrix}. \end{equation}
Denote $\hat l^i=N({\mathcal L}^i)-l^i+1$. From now on we assume without loss of generality that \begin{equation} \label{blockp} \hat l^1\in L^1_p\cup\bar L^1_{p-1}. \end{equation}
Consider the fixed block $X_{I^1_p}^{J^1_p}$ in ${\mathcal L}^1$ and an arbitrary block $X_{I^2_t}^{J^2_t}$ in ${\mathcal L}^2$. If $\beta_p^1>\beta_t^2$ then, by Proposition \ref{compar}(i), the second block fits completely inside the first one. This defines an injection $\rho$ of the subsets $K^2_t$ and $L^2_t$ of rows and columns of the matrix ${\mathcal L}^2$ into the subsets $K^1_p$ and $L^1_p$ of rows and columns of the matrix ${\mathcal L}^1$. Put \begin{align} \label{bad1} B^{\mbox{\tiny\rm I}}_t&=-\left\langle {\left(\L^1\nabla_{\L}^1\right)}_{\rho(\Phi_t^2)}^{\rho(\Phi_t^2)} {(\L^2)}_{\Phi_t^2}^{{\bar L}_t^2}{\left(\nabla_{\L}^2\right)}_{\bar L_t^2}^{\Phi_t^2}\right\rangle,\\ \label{bad2} B^{\mbox{\tiny\rm II}}_t&=\left\langle {\left(\nabla_{\L}^1\L^1\right)}_{\rho(\Psi_t^2)}^{\rho(\Psi_t^2)} {\left(\nabla_{\L}^2\right)}_{\Psi_t^2}^{{\bar K}_{t-1}^2}{(\L^2)}_{{\bar K}^2_{t-1}}^{\Psi_t^2} \right\rangle,\\ \label{bad1eq} B^{\mbox{\tiny\rm III}}_t&= \left\langle{\left(\nabla_{\L}^1\L^1\right)}_{\Psi_p^1}^{L_p^1\setminus\Psi_p^1} {\left(\nabla_{\L}^2\right)}_{L_t^2\setminus\Psi_t^2}^{K_t^2}{(\L^2)}_{K_t^2}^{\Psi_t^2}\right\rangle. \end{align}
\begin{lemma}\label{etaletalemma} {\rm (i)} The expression $\left\langle (\mathring{\eta}^1_L)_{<},(\mathring{\eta}_L^2)_{>}\right\rangle$ is given by
\begin{multline}\label{etaleta} \left\langle (\mathring{\eta}^1_L)_{<},(\mathring{\eta}_L^2)_{>}\right\rangle=\sum_{\beta_t^2< \beta_p^1}\left(B^{\mbox{\tiny\rm I}}_t+B^{\mbox{\tiny\rm II}}_t\right)+ \sum_{\beta_t^2= \beta_p^1}B^{\mbox{\tiny\rm III}}_t\\ +\sum_{\beta_t^2< \beta_p^1}\left( \left\langle {\left(\L^1\nabla_{\L}^1\right)}_{\rho(K_t^2)}^{\rho(K_t^2)}{\left(\L^2\nabla_{\L}^2\right)}_{K_t^2}^{K_t^2}\right\rangle- \left\langle {\left(\nabla_{\L}^1\L^1\right)}_{\rho(L_t^2)}^{\rho(L_t^2)} {\left(\nabla_{\L}^2\L^2\right)}_{L_t^2}^{L_t^2}\right\rangle\right) \end{multline} if $\hat l^1\in L_p^1$, and vanishes otherwise.
{\rm (ii)} Both summands in the last sum in \eqref{etaleta} are constant. \end{lemma}
\begin{remark} Since $\left\langle A_1A_2\dots,A^1A^2\dots\right\rangle=\operatorname{Tr}(A_1A_2\dots A^1A^2\dots)$, here and in what follows we omit the comma and write just $\left\langle A_1A_2\dots A^1A^2\dots\right\rangle$ whenever $A_1, A_2, \dots$ and $A^1, A^2, \dots$ are matrices given by explicit expressions. \end{remark}
\begin{proof}
First of all, write \begin{equation}\label{firstterm} \left\langle (\mathring{\eta}^1_L)_{<},(\mathring{\eta}_L^2)_{>}\right\rangle= \left\langle \mathring{\Pi}_{\Gamma_1}\left((\mathring{\eta}^1_L)_{<}\right),\mathring{\Pi}_{\Gamma_1}\left((\mathring{\eta}_L^2)_{>}\right)\right\rangle+ \left\langle \mathring{\Pi}_{\hat\Gamma_1}\left((\mathring{\eta}^1_L)_{<}\right),\mathring{\Pi}_{\hat\Gamma_1}\left((\mathring{\eta}_L^2)_{>}\right)\right\rangle \end{equation} with $\Gamma_1=\Gamma_1^{\rm c}$.
It follows from the ringed version of \eqref{gammaid} that for $i=1,2$, \begin{equation}\label{pigamma} \mathring{\Pi}_{\Gamma_1}(\mathring{\eta}_L^i)={\mathring{\gamma}}^*(\mathring{\xi}_L^i) \end{equation} with ${\mathring{\gamma}}={{\cgamma^{\rm c}}}$. Consequently, \[ \left\langle \mathring{\Pi}_{\Gamma_1}\left((\mathring{\eta}^1_L)_{<}\right),\mathring{\Pi}_{\Gamma_1}\left((\mathring{\eta}_L^2)_{>}\right)\right\rangle= \left\langle \mathring{\Pi}_{\Gamma_1}\left((\mathring{\eta}^1_L)_{<}\right),{\mathring{\gamma}}^*\left((\mathring{\xi}_L^2)_{>}\right)\right\rangle=0 \] via the ringed version of \eqref{infinv1}.
Note that $\mathring{\Pi}_{\hat\Gamma_1}\left({\mathring{\gamma}}^*(\nabla^i_Y Y)\right)=0$ by the definition of ${\mathring{\gamma}}^*$, therefore $\mathring{\Pi}_{\hat\Gamma_1}(\mathring{\eta}_L^i)= \mathring{\Pi}_{\hat\Gamma_1}(\nabla_X^i X)$.
Let us compute $\nabla_X^i X$. Taking into account \eqref{naxnay} and \eqref{naxyxy}, we get \begin{multline*} \nabla_X^i X=\sum_{t=1}^{s^i}\begin{bmatrix} (\nabla_{{\mathcal L}}^i)_{L_t^i}^{K_t^i}X_{I_t^i}^{J_t^i} & (\nabla_{{\mathcal L}}^i)_{L_t^i}^{K_t^i}X_{I_t^i}^{\hat J_t^i}\\0 & 0\end{bmatrix}\\= \sum_{t=1}^{s^i}\begin{bmatrix} (\nabla_{{\mathcal L}}^i{\mathcal L}^i)_{L_t^i}^{L_t^i\setminus\Psi_t^i} & (\nabla_{{\mathcal L}}^i)_{L_t^i}^{K_t^i}{\mathcal L}_{K_t^i}^{\Psi_t^i} & (\nabla_{{\mathcal L}}^i)_{L_t^i}^{K_t^i}X_{I_t^i}^{\hat J_t^i}\\ 0 & 0 & 0\end{bmatrix}, \end{multline*} where $\hat J_t^i=[1,n]\setminus J_t^i$. The latter equality follows from the fact that in columns $L_t^i\setminus\Psi_t^i$ all nonzero entries of ${\mathcal L}^i$ belong to the block $({\mathcal L}^i)_{K_t^i}^{L_t^i}=X_{I_t^i}^{J_t^i}$, whereas in columns $\Psi_t^i$ nonzero entries of ${\mathcal L}^i$ belong also to the block $({\mathcal L}^i)_{\bar K_{t-1}^i}^{\bar L_{t-1}^i}=Y_{\bar I_{t-1}^i}^{\bar J_{t-1}^i}$, see Fig.~\ref{fig:ladder}. In more detail, \begin{equation}\label{naxx} \nabla_X^i X=\sum_{t=1}^{s^i}\begin{bmatrix} (\nabla_{{\mathcal L}}^i{\mathcal L}^i)_{L_t^i\setminus\Psi_t^i}^{L_t^i\setminus\Psi_t^i} & (\nabla_{{\mathcal L}}^i)_{L_t^i\setminus\Psi_t^i}^{K_t^i}({\mathcal L}^i)_{K_t^i}^{\Psi_t^i} & (\nabla_{{\mathcal L}}^i)_{L_t^i\setminus \Psi_t^i}^{K_t^i}X_{I_t^i}^{\hat J_t^i}\\ (\nabla_{{\mathcal L}}^i{\mathcal L}^i)_{\Psi_t^i}^{L_t^i\setminus\Psi_t^i} & (\nabla_{{\mathcal L}}^i)_{\Psi_t^i}^{K_t^i}({\mathcal L}^i)_{K_t^i}^{\Psi_t^i} & (\nabla_{{\mathcal L}}^i)_{\Psi_t^i}^{K_t^i}X_{I_t^i}^{\hat J_t^i}\\ 0 & 0 & 0 \end{bmatrix}. \end{equation}
Note that the upper left block in \eqref{naxx} is lower triangular by \eqref{lnal}. Moreover, the projection of the middle block onto $\hat\Gamma_1$ vanishes, since it corresponds to the diagonal block defined by the nontrivial $X$-run $\Delta(\beta_t^i)$ (or is void if $t=1$ and $\Psi_1^i=\varnothing$).
It follows from the explanations above and \eqref{lnal} that the contribution of the $t$-th summand in \eqref{naxx} to $\mathring{\Pi}_{\hat\Gamma_1}\left((\mathring{\eta}^1_L)_{<}\right)$ vanishes, unless $t=p$. Moreover, if $\hat l^1\in \bar L_{p-1}^1\setminus\Psi_p^1$, it vanishes for $t=p$ as well. So, in what follows we assume that $\hat l^1\in L_p^1$. In this case \eqref{naxx} yields \begin{equation}\label{pigalo} \mathring{\Pi}_{\hat\Gamma_1}\left((\mathring{\eta}_L^1)_{<}\right)=\mathring{\Pi}_{\hat\Gamma_1}\begin{bmatrix} \left((\nabla_{{\mathcal L}}^1{\mathcal L}^1)_{L_p^1}^{L_p^1}\right)_{<} & 0 \\ 0 & 0\end{bmatrix}. \end{equation}
On the other hand, \begin{equation}\label{pigaup} \mathring{\Pi}_{\hat\Gamma_1}\left((\mathring{\eta}_L^2)_{>}\right)=\sum_{t=1}^{s^2}\begin{bmatrix} 0 & (\nabla_{{\mathcal L}}^2)_{L_t^2\setminus\Psi_t^2}^{K_t^2}({\mathcal L}^2)_{K_t^2}^{\Psi_t^2} & (\nabla_{{\mathcal L}}^2)_{L_t^2\setminus \Psi_t^2}^{K_t^2}X_{I_t^2}^{\hat J^2_t}\\ 0 & 0 & (\nabla_{{\mathcal L}}^2)_{\Psi_t^2}^{K_t^2}X_{I_t^2}^{\hat J^2_t}\\ 0 & 0 & 0 \end{bmatrix}, \end{equation} where the $t$-th summand corresponds to the $t$-th $X$-block of ${\mathcal L}^2$.
If $\beta_p^1<\beta_t^2$, then the contribution of the $t$-th summand in \eqref{pigaup} to the second term in \eqref{firstterm} vanishes by \eqref{pigalo}, since in this case $J^1_p\subseteq J^2_t\setminus\Delta(\beta_t^2)$, which means that the upper left block in \eqref{pigalo} fits completely within the zero upper left block in \eqref{pigaup}.
Assume that $\beta_p^1>\beta_t^2$. Then, to the contrary, $J^2_t\subseteq J^1_p\setminus \Delta(\beta_p^1)$, and hence $\rho(L_t^2)\subseteq L_p^1\setminus\Psi_p^1$. Note that by \eqref{pigalo}, to compute the second term in \eqref{firstterm} one can replace $\hat J_t^2$ in \eqref{pigaup} by $J_p^1\setminus J_t^2$. So, using the above injection $\rho$, one can rewrite the two upper blocks at the $t$-th summand of $\mathring{\Pi}_{\hat\Gamma_1}\left((\mathring{\eta}^2_L)_{>}\right)$ in \eqref{pigaup} as one block \[ {\left(\nabla_{\L}^2\right)}_{L_t^2\setminus \Psi_t^2}^{K_t^2}{(\L^1)}_{\rho(K_t^2)}^{L_p^1\setminus \rho(L_t^2\setminus \Psi_t^2)}, \] and the remaining nonzero block in the same summand as \[ {\left(\nabla_{\L}^2\right)}_{\Psi_t^2}^{K_t^2}({\mathcal L}^1)_{\rho(K_t^2)}^{L_p^1\setminus \rho(L_t^2)}. \] The corresponding blocks of $\mathring{\Pi}_{\hat\Gamma_1}\left((\mathring{\eta}^1_L)_{<}\right)$ in \eqref{pigalo} are \[ {\left(\nabla_{\L}^1\L^1\right)}_{L_p^1\setminus \rho(L_t^2\setminus \Psi_t^2)}^{\rho(L_t^2\setminus\Psi_t^2)}= {\left(\nabla_{\L}^1\right)}_{L_p^1\setminus \rho(L_t^2\setminus \Psi_t^2)}^{K_p^1}{(\L^1)}_{K_p^1}^{\rho(L_t^2\setminus\Psi_t^2)} \] and \[ {\left(\nabla_{\L}^1\L^1\right)}_{L_p^1\setminus \rho(L_t^2)}^{\rho(\Psi_t^2)}= {\left(\nabla_{\L}^1\right)}_{L_p^1\setminus \rho(L_t^2)}^{K_p^1}{(\L^1)}_{K_p^1}^{\rho(\Psi_t^2)}. \] The equalities follow from the fact that all nonzero entries in the columns $\rho(L_t^2)$ of ${\mathcal L}^1$ belong to the $X$-block, see Fig.~\ref{fig:ladder}.
The contribution of the first blocks in each pair can be rewritten as \begin{equation}\label{cont1} \left\langle {(\L^1)}_{\rho(K_t^2)}^{L_p^1\setminus \rho(L_t^2\setminus \Psi_t^2)} {\left(\nabla_{\L}^1\right)}_{L_p^1\setminus \rho(L_t^2\setminus \Psi_t^2)}^{K_p^1} {(\L^1)}_{K_p^1}^{\rho(L_t^2\setminus\Psi_t^2)} {\left(\nabla_{\L}^2\right)}_{L_t^2\setminus \Psi_t^2}^{K_t^2}\right\rangle. \end{equation} Recall that $\rho(K_t^2)\subseteq K_p^1$. If the inclusion is strict, then immediately \begin{multline}\label{trick1} {(\L^1)}_{\rho(K_t^2)}^{L_p^1\setminus \rho(L_t^2\setminus \Psi_t^2)} {\left(\nabla_{\L}^1\right)}_{L_p^1\setminus \rho(L_t^2\setminus \Psi_t^2)}^{K_p^1}\\ ={\left(\L^1\nabla_{\L}^1\right)}_{\rho(K_t^2)}^{K_p^1}- {(\L^1)}_{\rho(K_t^2)}^{\rho\left(L_t^2\setminus \Psi_t^2\right)} {\left(\nabla_{\L}^1\right)}_{\rho\left(L_t^2\setminus \Psi_t^2\right)}^{K_p^1}\\ ={\left(\L^1\nabla_{\L}^1\right)}_{\rho(K_t^2)}^{K_p^1}- {(\L^2)}_{K_t^2}^{L_t^2\setminus \Psi_t^2} {\left(\nabla_{\L}^1\right)}_{\rho\left(L_t^2\setminus \Psi_t^2\right)}^{K_p^1}. \end{multline} Otherwise there is an additional term \[ -{(\L^1)}_{K_p^1}^{{\bar L}_p^1}{\left(\nabla_{\L}^1\right)}_{{\bar L}_p^1}^{K_p^1} \] in the right hand side of \eqref{trick1}. However, for the same reason as above, \[ {\left(\nabla_{\L}^1\right)}_{{\bar L}_p^1}^{K_p^1}{(\L^1)}_{K_p^1}^{\rho(L_t^2\setminus\Psi_t^2)}= {\left(\nabla_{\L}^1\L^1\right)}_{{\bar L}_p^1}^{\rho(L_t^2\setminus\Psi_t^2)}. \] Note that $\rho(L_t^2\setminus\Psi_t^2)\subset L_p^1$, and ${\bar L}_p^1$ lies strictly to the left of $L_p^1$, see Fig.~\ref{fig:ladder}. Consequently, by \eqref{lnal}, the latter submatrix vanishes. Therefore, the additional term does not contribute to \eqref{cont1}.
To find the contribution of the second term in \eqref{trick1} to \eqref{cont1}, note that \begin{equation} \label{tfor1} {\left(\nabla_{\L}^1\right)}_{\rho\left(L_t^2\setminus \Psi_t^2\right)}^{K_p^1}{(\L^1)}_{K_p^1}^{\rho(L_t^2\setminus\Psi_t^2)}= {\left(\nabla_{\L}^1\L^1\right)}_{\rho(L_t^2\setminus\Psi_t^2)}^{\rho(L_t^2\setminus\Psi_t^2)} \end{equation} and \[ {\left(\nabla_{\L}^2\right)}_{L_t^2\setminus \Psi_t^2}^{K_t^2}{(\L^2)}_{K_t^2}^{L_t^2\setminus \Psi_t^2}= {\left(\nabla_{\L}^2\L^2\right)}_{L_t^2\setminus \Psi_t^2}^{L_t^2\setminus \Psi_t^2} \] for the same reason as above, and hence the contribution in question equals \[ -\left\langle {\left(\nabla_{\L}^2\L^2\right)}_{L_t^2\setminus \Psi_t^2}^{L_t^2\setminus \Psi_t^2} {\left(\nabla_{\L}^1\L^1\right)}_{\rho(L_t^2\setminus\Psi_t^2)}^{\rho(L_t^2\setminus\Psi_t^2)}\right\rangle= \text{const} \] by \eqref{lnal}.
Similarly to \eqref{cont1}, \eqref{trick1}, the contribution of the second blocks in each pair can be rewritten as \begin{equation}\label{cont2} \left\langle {\left(\L^1\nabla_{\L}^1\right)}_{\rho(K_t^2)}^{K_p^1}- {(\L^1)}_{\rho(K_t^2)}^{\rho(L_t^2)}{\left(\nabla_{\L}^1\right)}_{\rho(L_t^2)}^{K_p^1}, {(\L^1)}_{K_p^1}^{\rho(\Psi_t^2)}{\left(\nabla_{\L}^2\right)}_{\Psi_t^2}^{K_t^2}\right\rangle. \end{equation} As in the previous case, an additional term arises if $\rho(K_t^2)= K_p^1$, and its contribution to \eqref{cont2} vanishes.
Note that by \eqref{lnal}, one has \[ {\left(\L^1\nabla_{\L}^1\right)}_{\rho(K_t^2)}^{K_p^1}{(\L^1)}_{K_p^1}^{\rho(L_t^2\setminus\Psi_t^2)}= {\left(\L^1\nabla_{\L}^1\right)}_{\rho(K_t^2)}^{\rho(K_t^2)}{(\L^1)}_{\rho(K_t^2)}^{\rho(L_t^2\setminus\Psi_t^2)} \] and \[ {\left(\L^1\nabla_{\L}^1\right)}_{\rho(K_t^2)}^{K_p^1}{(\L^1)}_{K_p^1}^{\rho(\Psi_t^2)}= {\left(\L^1\nabla_{\L}^1\right)}_{\rho(K_t^2)}^{\rho(K_t^2)}{(\L^1)}_{\rho(K_t^2)}^{\rho(\Psi_t^2)}, \] hence the total contribution of the first terms in \eqref{trick1} and \eqref{cont2} equals \begin{multline}\label{tfor2} \left\langle {\left(\L^1\nabla_{\L}^1\right)}_{\rho(K_t^2)}^{\rho(K_t^2)}, {(\L^1)}_{\rho(K_t^2)}^{\rho(L_t^2\setminus\Psi_t^2)}{\left(\nabla_{\L}^2\right)}_{L_t^2\setminus \Psi_t^2}^{K_t^2}+ {(\L^1)}_{\rho(K_t^2)}^{\rho(\Psi_t^2)}{\left(\nabla_{\L}^2\right)}_{\Psi_t^2}^{K_t^2}\right\rangle\\ =\left\langle {\left(\L^1\nabla_{\L}^1\right)}_{\rho(K_t^2)}^{\rho(K_t^2)}, {(\L^2)}_{K_t^2}^{L_t^2\setminus\Psi_t^2}{\left(\nabla_{\L}^2\right)}_{L_t^2\setminus \Psi_t^2}^{K_t^2}+ {(\L^2)}_{K_t^2}^{\Psi_t^2}{\left(\nabla_{\L}^2\right)}_{\Psi_t^2}^{K_t^2}\right\rangle\\ =\left\langle {\left(\L^1\nabla_{\L}^1\right)}_{\rho(K_t^2)}^{\rho(K_t^2)}, {\left(\L^2\nabla_{\L}^2\right)}_{K_t^2}^{K_t^2}-U_t{\left(\nabla_{\L}^2\right)}_{\bar L_t^2}^{K_t^2} \right\rangle, \end{multline} where \[ U_t=\begin{bmatrix} {(\L^2)}_{\Phi_t^2}^{{\bar L}_t^2} \\ 0\end{bmatrix}. \] Note that \[ \left\langle {\left(\L^1\nabla_{\L}^1\right)}_{\rho(K_t^2)}^{\rho(K_t^2)} {\left(\L^2\nabla_{\L}^2\right)}_{K_t^2}^{K_t^2}\right\rangle=\text{const} \] by \eqref{lnal}, which gives the first summand in the last sum in \eqref{etaleta}. 
The remaining term equals \begin{multline*} -\left\langle {\left(\L^1\nabla_{\L}^1\right)}_{\rho(K_t^2)}^{\rho(K_t^2)}U_t{\left(\nabla_{\L}^2\right)}_{\bar L_t^2}^{K_t^2}\right\rangle= -\left\langle {\left(\L^1\nabla_{\L}^1\right)}_{\rho(K_t^2)}^{\rho(\Phi_t^2)} {(\L^2)}_{\Phi_t^2}^{{\bar L}_t^2}{\left(\nabla_{\L}^2\right)}_{\bar L_t^2}^{K_t^2}\right\rangle\\ =-\left\langle {\left(\L^1\nabla_{\L}^1\right)}_{\rho(\Phi_t^2)}^{\rho(\Phi_t^2)} {(\L^2)}_{\Phi_t^2}^{{\bar L}_t^2}{\left(\nabla_{\L}^2\right)}_{\bar L_t^2}^{\Phi_t^2}\right\rangle, \end{multline*} which coincides with the expression for $B^{\mbox{\tiny\rm I}}_t$ in \eqref{bad1}; the last equality above follows from \eqref{lnal}.
It remains to compute the contribution of the second term in \eqref{cont2}. Similarly to \eqref{tfor1}, we have \[ {\left(\nabla_{\L}^1\right)}_{\rho\left(L_t^2\right)}^{K_p^1}{(\L^1)}_{K_p^1}^{\rho(\Psi_t^2)}= {\left(\nabla_{\L}^1\L^1\right)}_{\rho(L_t^2)}^{\rho(\Psi_t^2)}. \] On the other hand, similarly to \eqref{tfor2}, we have \[ {\left(\nabla_{\L}^2\right)}_{\Psi_t^2}^{K_t^2}{(\L^2)}_{K_t^2}^{L_t^2}= {\left(\nabla_{\L}^2\L^2\right)}_{\Psi_t^2}^{L_t^2}-{\left(\nabla_{\L}^2\right)}_{\Psi_t^2}^{{\bar K}_{t-1}^2}V_t, \] where \[ V_t=\begin{bmatrix} 0 & {(\L^2)}_{{\bar K}^2_{t-1}}^{\Psi_t^2}\end{bmatrix}. \] As before, we use \eqref{lnal} to get \[ -\left\langle {\left(\nabla_{\L}^1\L^1\right)}_{\rho(L_t^2)}^{\rho(\Psi_t^2)} {\left(\nabla_{\L}^2\L^2\right)}_{\Psi_t^2}^{L_t^2}\right\rangle= -\left\langle{\left(\nabla_{\L}^1\L^1\right)}_{\rho(\Psi_t^2)}^{\rho(\Psi_t^2)} {\left(\nabla_{\L}^2\L^2\right)}_{\Psi_t^2}^{\Psi_t^2}\right\rangle=\text{const}, \] which together with the contribution of the second term in \eqref{trick1} computed above yields the second summand in the last sum in \eqref{etaleta}. The remaining term is given by \begin{equation*} \left\langle {\left(\nabla_{\L}^1\L^1\right)}_{\rho(L_t^2)}^{\rho(\Psi_t^2)} {\left(\nabla_{\L}^2\right)}_{\Psi_t^2}^{{\bar K}_{t-1}^2}V_t \right\rangle= \left\langle {\left(\nabla_{\L}^1\L^1\right)}_{\rho(\Psi_t^2)}^{\rho(\Psi_t^2)} {\left(\nabla_{\L}^2\right)}_{\Psi_t^2}^{{\bar K}_{t-1}^2}{(\L^2)}_{{\bar K}^2_{t-1}}^{\Psi_t^2} \right\rangle, \end{equation*} which coincides with the expression for $B^{\mbox{\tiny\rm II}}_t$ in \eqref{bad2}.
Assume now that $\beta_p^1=\beta_t^2$ and hence $J_p^1=J_t^2$. In this case the blocks $X_{I^2_t}^{J^2_t}$ and $X_{I^1_p}^{J^1_p}$ have the same width, and one of them lies inside the other, but the direction of the inclusion may vary, and hence $\rho$ is not defined.
Note that by \eqref{pigalo}, to compute the second term in \eqref{firstterm} in this case, one can omit the columns $\hat J_t^2$ in \eqref{pigaup}, and hence the contribution in question equals \begin{equation*} \left\langle{\left(\nabla_{\L}^1\L^1\right)}_{\Psi_p^1}^{L_p^1\setminus\Psi_p^1} {\left(\nabla_{\L}^2\right)}_{L_t^2\setminus\Psi_t^2}^{K_t^2}{(\L^2)}_{K_t^2}^{\Psi_t^2}\right\rangle, \end{equation*} which coincides with the expression for $B^{\mbox{\tiny\rm III}}_t$ in \eqref{bad1eq}. \end{proof}
\subsubsection{Explicit expression for $\left\langle (\mathring{\eta}_R^1)_{\ge},(\mathring{\eta}_R^2)_{\le}\right\rangle$}\label{etaretasec} Recall that $\hat l^1\in L_p^1\cup \bar L_{p-1}^1$ by \eqref{blockp}. Consequently, $\hat l^1\in K_p^1\cup\bar K_{p-1}^1$; more exactly, either $\hat l^1\in K_p^1\setminus \Phi_p^1$, or \begin{equation}\label{blockq} \hat l^1\in \bar K_q^1\quad\text{with $q=p$ or $q=p-1$}, \end{equation} see Fig.~\ref{fig:ladder}. Consider a fixed block $Y_{\bar I^1_q}^{\bar J^1_q}$ in ${\mathcal L}^1$ and an arbitrary block $Y_{\bar I^2_t}^{\bar J^2_t}$ in ${\mathcal L}^2$. If $\bar\alpha_q^1> \bar\alpha_t^2$ then, by Proposition \ref{compar}(ii), the second block fits completely inside the first one. This defines an injection $\sigma$ of the subsets $\bar K^2_t$ and $\bar L^2_t$ of rows and columns of the matrix ${\mathcal L}^2$ into the subsets $\bar K^1_{q}$ and $\bar L^1_{q}$ of rows and columns of the matrix ${\mathcal L}^1$. Put \begin{align}\label{bad3} \bar B^{\mbox{\tiny\rm I}}_t&=-\left\langle {\left(\nabla_{\L}^1\L^1\right)}_{\sigma(\Psi_{t+1}^2)}^{\sigma(\Psi_{t+1}^2)} {\left(\nabla_{\L}^2\right)}^{ K_{t+1}^2}_{\Psi_{t+1}^2}{(\L^2)}_{ K_{t+1}^2}^{\Psi_{t+1}^2}\right\rangle,\\ \label{bad4} \bar B^{\mbox{\tiny\rm II}}_t&=\left\langle {\left(\L^1\nabla_{\L}^1\right)}^{\sigma(\Phi_t^2)}_{\sigma(\Phi_t^2)} {(\L^2)}_{\Phi_t^2}^{L_t^2}{\left(\nabla_{\L}^2\right)}^{\Phi_t^2}_{L_{t}^2} \right\rangle,\\ \label{bad3eq} \bar B^{\mbox{\tiny\rm III}}_t&=\left\langle{\left(\L^1\nabla_{\L}^1\right)}^{\Phi_q^1}_{\bar K_q^1\setminus\Phi_q^1} {(\L^2)}^{\bar L_t^2}_{\Phi_t^2}{\left(\nabla_{\L}^2\right)}^{\bar K_t^2\setminus\Phi_t^2}_{\bar L_t^2}\right\rangle. \end{align}
\begin{lemma}\label{etaretalemma} {\rm (i)} The expression $\left\langle (\mathring{\eta}_R^1)_{\ge},(\mathring{\eta}_R^2)_{\le}\right\rangle$ is given by
\begin{multline}\label{etareta} \left\langle (\mathring{\eta}_R^1)_{\ge},(\mathring{\eta}_R^2)_{\le}\right\rangle=\left\langle (\mathring{\eta}_R^1)_{0},(\mathring{\eta}_R^2)_{0}\right\rangle +\sum_{\bar\alpha_t^2 < \bar\alpha_q^1}\left(\bar B^{\mbox{\tiny\rm I}}_t+\bar B^{\mbox{\tiny\rm II}}_t\right)+ \sum_{ \bar\alpha_t^2=\bar\alpha_q^1}\bar B^{\mbox{\tiny\rm III}}_t\\ +\sum_{\bar\alpha_t^2 < \bar\alpha_q^1}\left( \left\langle {\left(\nabla_{\L}^1\L^1\right)}^{\sigma(\bar L_t^2)}_{\sigma(\bar L_t^2)} {\left(\nabla_{\L}^2\L^2\right)}^{\bar L_t^2}_{\bar L_t^2}\right\rangle -\left\langle {\left(\L^1\nabla_{\L}^1\right)}^{\sigma(\bar K_t^2)}_{\sigma(\bar K_t^2)} {\left(\L^2\nabla_{\L}^2\right)}^{\bar K_t^2}_{\bar K_t^2}\right\rangle\right) \end{multline} if $\hat l^1\in\bar K_q^1$, and equals $\left\langle (\mathring{\eta}_R^1)_{0},(\mathring{\eta}_R^2)_{0}\right\rangle$ otherwise.
{\rm (ii)} The first term and both summands in the last sum in the right hand side of \eqref{etareta} are constant. \end{lemma}
\begin{proof}
Clearly, $\left\langle (\mathring{\eta}_R^1)_{\ge},(\mathring{\eta}_R^2)_{\le}\right\rangle= \left\langle (\mathring{\eta}_R^1)_{0},(\mathring{\eta}_R^2)_{0}\right\rangle+ \left\langle (\mathring{\eta}_R^1)_{>},(\mathring{\eta}_R^2)_{<}\right\rangle$. The first term on the right is constant by the ringed version of \eqref{infinv3}, so in what follows we only look at the second term. Similarly to \eqref{firstterm}, we have \begin{equation}\label{secondterm} \left\langle (\mathring{\eta}^1_R)_{>},(\mathring{\eta}_R^2)_{<}\right\rangle= \left\langle \mathring{\Pi}_{\Gamma_2}\left((\mathring{\eta}^1_R)_{>}\right),\mathring{\Pi}_{\Gamma_2}\left((\mathring{\eta}_R^2)_{<}\right)\right\rangle+ \left\langle \mathring{\Pi}_{\hat\Gamma_2}\left((\mathring{\eta}^1_R)_{>}\right),\mathring{\Pi}_{\hat\Gamma_2}\left((\mathring{\eta}_R^2)_{<}\right)\right\rangle \end{equation} with $\Gamma_2=\Gamma_2^{\rm r}$.
It follows from the ringed version of \eqref{gammaid} that for $i=1,2$, \begin{equation}\label{pigamma2} \mathring{\Pi}_{\Gamma_2}(\mathring{\eta}_R^i)={\mathring{\gamma}}(\mathring{\xi}_R^i) \end{equation} with ${\mathring{\gamma}}={\cgamma^{\rm r}}$. Consequently, \[ \left\langle \mathring{\Pi}_{\Gamma_2}\left((\mathring{\eta}^1_R)_{>}\right),\mathring{\Pi}_{\Gamma_2}\left((\mathring{\eta}_R^2)_{<}\right)\right\rangle= \left\langle \mathring{\Pi}_{\Gamma_2}\left((\mathring{\eta}^1_R)_{>}\right),{\mathring{\gamma}}\left((\mathring{\xi}_R^2)_{<}\right)\right\rangle=0 \] via the ringed version of \eqref{infinv1}.
Note that $\mathring{\Pi}_{\hat\Gamma_2}\left({\mathring{\gamma}}(X\nabla^i_X)\right)=0$ by the definition of ${\mathring{\gamma}}$, therefore $\mathring{\Pi}_{\hat\Gamma_2}(\mathring{\eta}_R^i)= \mathring{\Pi}_{\hat\Gamma_2}(Y\nabla_Y^i)$.
Let us compute $Y\nabla_Y^i$. Taking into account \eqref{naxnay} and \eqref{xynaxy}, we get \begin{equation*} Y\nabla_Y^i =\sum_{t=1}^{s^i}\begin{bmatrix} Y_{\bar I_t^i}^{\bar J_t^i}(\nabla_{{\mathcal L}}^i)_{\bar L_t^i}^{\bar K_t^i} & 0 \\ Y_{\hat{\bar I}_t^i}^{\bar J_t^i}(\nabla_{{\mathcal L}}^i)_{\bar L_t^i}^{\bar K_t^i} & 0\end{bmatrix}= \sum_{t=1}^{s^i}\begin{bmatrix} ({\mathcal L}^i\nabla_{{\mathcal L}}^i)^{\bar K_t^i}_{\bar K_t^i\setminus\Phi_t^i} & 0\\ ({\mathcal L}^i)^{\bar L_t^i}_{\Phi_t^i}(\nabla_{{\mathcal L}}^i)^{\bar K_t^i}_{\bar L_t^i} & 0\\ Y_{\hat{\bar I}_t^i}^{\bar J_t^i}(\nabla_{{\mathcal L}}^i)_{\bar L_t^i}^{\bar K_t^i} & 0\end{bmatrix}, \end{equation*} where $\hat{\bar I}_t^i=[1,n]\setminus \bar I_t^i$; the latter equality follows from the fact that in rows $\bar K_t^i\setminus\Phi_t^i$ all nonzero entries of ${\mathcal L}^i$ belong to the block $({\mathcal L}^i)_{\bar K_t^i}^{\bar L_t^i}=Y_{\bar I_t^i}^{\bar J_t^i}$, whereas in rows $\Phi_t^i$ nonzero entries of ${\mathcal L}^i$ belong also to the block $({\mathcal L}^i)_{K_{t}^i}^{L_{t}^i}=X_{I_{t}^i}^{J_{t}^i}$, see Fig.~\ref{fig:ladder}. In more detail, \begin{equation}\label{ynay} Y\nabla_Y^i=\sum_{t=1}^{s^i}\begin{bmatrix} ({\mathcal L}^i\nabla_{{\mathcal L}}^i)^{\bar K_t^i\setminus\Phi_t^i}_{\bar K_t^i\setminus\Phi_t^i} & ({\mathcal L}^i\nabla_{{\mathcal L}}^i)^{\Phi_t^i}_{\bar K_t^i\setminus\Phi_t^i} & 0\\ ({\mathcal L}^i)^{\bar L_t^i}_{\Phi_t^i}(\nabla_{{\mathcal L}}^i)^{\bar K_t^i\setminus\Phi_t^i}_{\bar L_t^i} & ({\mathcal L}^i)^{\bar L_t^i}_{\Phi_t^i}(\nabla_{{\mathcal L}}^i)^{\Phi_t^i}_{\bar L_t^i} & 0 \\ Y^{\bar J_t^i}_{\hat{\bar I}^i_t} (\nabla_{{\mathcal L}}^i)^{\bar K_t^i\setminus \Phi_t^i}_{\bar L_t^i} & Y^{\bar J_t^i}_{\hat{\bar I}^i_t}(\nabla_{{\mathcal L}}^i)^{\Phi_t^i}_{\bar L_t^i} & 0\end{bmatrix}. \end{equation}
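The second equality above is an instance of a general block-extraction fact, which we record here for later reference (recall that $A_I^J$ denotes the submatrix of $A$ with rows $I$ and columns $J$): \[ (AB)^{C}_{K}=A^{D}_{K}B^{C}_{D} \] whenever all nonzero entries of $A$ in the rows $K$ lie in the columns $D$. Similar equalities below are obtained in the same way.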
Note that the upper left block in \eqref{ynay} is upper triangular by \eqref{lnal}. Besides, the projection of the middle block onto $\hat\Gamma_2$ vanishes, since for $\Phi_t^i\ne\varnothing$, the middle block corresponds to the diagonal block defined by the nontrivial $Y$-run $\bar\Delta(\bar\alpha_t^i)$.
Recall that $\hat l^1\in K_p^1\cup\bar K_{p-1}^1$, therefore by \eqref{lnal}, the contribution of the $t$-th summand in \eqref{ynay} to $\mathring{\Pi}_{\hat\Gamma_2}\left((\mathring{\eta}_R^1)_{>}\right)$ vanishes unless $t=q$, where $q$ is either $p$ or $p-1$. Moreover, if $\hat l^1\in K_p^1\setminus\Phi_p^1$, this contribution vanishes for $t=q$ as well, see Fig.~\ref{fig:ladder}. So, in what follows we assume $\hat l^1\in \bar K_q^1$, in which case \begin{equation}\label{pigalo2} \mathring{\Pi}_{\hat\Gamma_2}\left((\mathring{\eta}_R^1)_{>}\right)=\mathring{\Pi}_{\hat\Gamma_2} \begin{bmatrix} \left(({\mathcal L}^1\nabla_{{\mathcal L}}^1)^{\bar K_q^1}_{\bar K_q^1}\right)_{>} & 0\\0 & 0 \end{bmatrix}. \end{equation} On the other hand, \begin{equation}\label{pigaup2} \mathring{\Pi}_{\hat\Gamma_2}\left((\mathring{\eta}_R^2)_{<}\right)=\sum_{t=1}^{s^2} \begin{bmatrix} 0 & 0 & 0 \\ ({\mathcal L}^2)^{\bar L_t^2}_{\Phi_t^2}(\nabla_{{\mathcal L}}^2)^{\bar K_t^2\setminus\Phi_t^2}_{\bar L_t^2} & 0 & 0 \\ Y^{\bar J_t^2}_{\hat{\bar I}^2_t} (\nabla_{{\mathcal L}}^2)^{\bar K_t^2\setminus \Phi_t^2}_{\bar L_t^2} & Y^{\bar J_t^2}_{\hat{\bar I}^2_t}(\nabla_{{\mathcal L}}^2)^{\Phi_t^2}_{\bar L_t^2} & 0\end{bmatrix}, \end{equation} where the $t$-th summand corresponds to the $t$-th $Y$-block in ${\mathcal L}^2$.
If $\bar\alpha_q^1< \bar\alpha_t^2$, then the contribution of the $t$-th summand in \eqref{pigaup2} to the second term in \eqref{secondterm} vanishes by \eqref{pigalo2}, since in this case $\bar I^1_{q}\subseteq \bar I^2_t\setminus\bar\Delta(\bar\alpha^2_t)$.
Assume that $\bar\alpha_q^1> \bar\alpha_t^2$. Then, to the contrary, $\bar I^2_t\subseteq \bar I^1_{q}\setminus\bar\Delta(\bar\alpha_q^1)$, and hence $\sigma(\bar K_t^2)\subseteq \bar K_q^1\setminus \Phi_q^1$. Note that by \eqref{pigalo2}, to compute the second term in \eqref{secondterm}, one can replace $\hat{\bar I}_t^2$ in \eqref{pigaup2} by $\bar I_{q}^1\setminus \bar I_t^2$. So, using the above injection $\sigma$, one can rewrite the two upper blocks at the $t$-th summand of $\mathring{\Pi}_{\hat\Gamma_2}\left((\mathring{\eta}^2_R)_{<}\right)$ in \eqref{pigaup2} as one block \[ {(\L^1)}^{\sigma(\bar L_t^2)}_{\bar K_{q}^1\setminus \sigma(\bar K_t^2\setminus\Phi_t^2)} {\left(\nabla_{\L}^2\right)}^{\bar K_t^2\setminus \Phi_t^2}_{\bar L_t^2}, \] and the remaining nonzero block in the same summand as \[ {(\L^1)}^{\sigma(\bar L_t^2)}_{\bar K_{q}^1\setminus \sigma(\bar K_t^2)} {\left(\nabla_{\L}^2\right)}^{\Phi_t^2}_{\bar L_t^2}. \] The corresponding blocks of $\mathring{\Pi}_{\hat\Gamma_2}\left((\mathring{\eta}^1_R)_{>}\right)$ in \eqref{pigalo2} are \[ {\left(\L^1\nabla_{\L}^1\right)}^{\bar K_{q}^1\setminus \sigma(\bar K_t^2\setminus\Phi_t^2)} _{\sigma(\bar K_t^2\setminus\Phi_t^2)}= {(\L^1)}^{\bar L_{q}^1}_{\sigma(\bar K_t^2\setminus\Phi_t^2)} {\left(\nabla_{\L}^1\right)}^{\bar K_{q}^1\setminus \sigma(\bar K_t^2\setminus\Phi_t^2)}_{\bar L_{q}^1} \] and \[ {\left(\L^1\nabla_{\L}^1\right)}^{\bar K_{q}^1\setminus \sigma(\bar K_t^2)}_{\sigma(\Phi_t^2)}= {(\L^1)}^{\bar L_{q}^1}_{\sigma(\Phi_t^2)} {\left(\nabla_{\L}^1\right)}^{\bar K_{q}^1\setminus \sigma(\bar K_t^2)}_{\bar L_{q}^1}. \] The equalities follow from the fact that all nonzero entries in the rows $\sigma(\bar K_t^2)$ of ${\mathcal L}^1$ belong to the $Y$-block, see Fig.~\ref{fig:ladder}.
The contribution of the first blocks in each pair can be rewritten as \begin{equation}\label{cont12} \left\langle {\left(\nabla_{\L}^1\right)}^{\bar K_{q}^1\setminus \sigma(\bar K_t^2\setminus\Phi_t^2)}_{\bar L_{q}^1} {(\L^1)}^{\sigma(\bar L_t^2)}_{\bar K_{q}^1\setminus \sigma(\bar K_t^2\setminus\Phi_t^2)} {\left(\nabla_{\L}^2\right)}^{\bar K_t^2\setminus \Phi_t^2}_{\bar L_t^2} {(\L^1)}^{\bar L_{q}^1}_{\sigma(\bar K_t^2\setminus\Phi_t^2)} \right\rangle. \end{equation} Recall that $\sigma(\bar L_t^2)\subseteq \bar L_{q}^1$. If the inclusion is strict, then immediately \begin{multline}\label{trick12} {\left(\nabla_{\L}^1\right)}^{\bar K_{q}^1\setminus \sigma(\bar K_t^2\setminus\Phi_t^2)}_{\bar L_{q}^1} {(\L^1)}^{\sigma(\bar L_t^2)}_{\bar K_{q}^1\setminus \sigma(\bar K_t^2\setminus\Phi_t^2)}\\ ={\left(\nabla_{\L}^1\L^1\right)}^{\sigma(\bar L_t^2)}_{\bar L_{q}^1}- {\left(\nabla_{\L}^1\right)}^{\sigma\left(\bar K_t^2\setminus \Phi_t^2\right)}_{\bar L_{q}^1} {(\L^1)}^{\sigma(\bar L_t^2)}_{\sigma\left(\bar K_t^2\setminus \Phi_t^2\right)}\\ ={\left(\nabla_{\L}^1\L^1\right)}^{\sigma(\bar L_t^2)}_{\bar L_{q}^1}- {\left(\nabla_{\L}^1\right)}^{\sigma\left(\bar K_t^2\setminus \Phi_t^2\right)}_{\bar L_{q}^1} {(\L^2)}^{\bar L_t^2}_{\bar K_t^2\setminus \Phi_t^2}. \end{multline} Otherwise there is an additional term \[ -{\left(\nabla_{\L}^1\right)}^{K_{q}^1}_{\bar L_{q}^1}{(\L^1)}^{\bar L_{q}^1}_{K_{q}^1} \] in the right hand side of \eqref{trick12}. However, for the same reasons as those discussed during the treatment of \eqref{cont1}, \[ {(\L^1)}^{\bar L_{q}^1}_{\sigma(\bar K_t^2\setminus\Phi_t^2)} {\left(\nabla_{\L}^1\right)}^{K_{q}^1}_{\bar L_{q}^1}= {\left(\L^1\nabla_{\L}^1\right)}^{K_{q}^1}_{\sigma(\bar K_t^2\setminus\Phi_t^2)}. \] Note that $\sigma(\bar K_t^2\setminus\Phi_t^2)\subseteq \bar K_{q}^1\setminus\Phi_q^1$ and $K_{q}^1$ lies strictly below $\bar K_{q}^1\setminus\Phi_q^1$, see Fig.~\ref{fig:ladder}. 
Hence by \eqref{lnal} the above submatrix vanishes, and the additional term does not contribute to \eqref{cont12}.
To find the contribution of the second term in \eqref{trick12} to \eqref{cont12}, note that \begin{equation}\label{tfor3} {(\L^1)}^{\bar L_{q}^1}_{\sigma(\bar K_t^2\setminus\Phi_t^2)} {\left(\nabla_{\L}^1\right)}^{\sigma\left(\bar K_t^2\setminus \Phi_t^2\right)}_{\bar L_{q}^1}= {\left(\L^1\nabla_{\L}^1\right)}^{\sigma(\bar K_t^2\setminus\Phi_t^2)}_{\sigma(\bar K_t^2\setminus\Phi_t^2)} \end{equation} and \[ {(\L^2)}^{\bar L_t^2}_{\bar K_t^2\setminus \Phi_t^2} {\left(\nabla_{\L}^2\right)}^{\bar K_t^2\setminus \Phi_t^2}_{\bar L_t^2}= {\left(\L^2\nabla_{\L}^2\right)}^{\bar K_t^2\setminus \Phi_t^2}_{\bar K_t^2\setminus \Phi_t^2}, \] and hence the contribution in question equals \[ -\left\langle {\left(\L^2\nabla_{\L}^2\right)}^{\bar K_t^2\setminus \Phi_t^2}_{\bar K_t^2\setminus \Phi_t^2} {\left(\L^1\nabla_{\L}^1\right)}^{\sigma(\bar K_t^2\setminus\Phi_t^2)}_{\sigma(\bar K_t^2\setminus\Phi_t^2)}\right\rangle= \text{const} \] by \eqref{lnal}.
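(Here and below we repeatedly use the fact that the functional $\left\langle\,\cdot\,\right\rangle$ is invariant under cyclic permutations of the factors, \[ \left\langle A_1A_2\cdots A_k\right\rangle=\left\langle A_2\cdots A_kA_1\right\rangle; \] this is what justifies rewritings such as \eqref{cont12} and the constancy claims obtained via \eqref{lnal}.)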
Similarly to \eqref{cont2}, the contribution of the second blocks in each pair above can be rewritten as \begin{equation}\label{cont22} \left\langle {\left(\nabla_{\L}^1\L^1\right)}^{\sigma(\bar L_t^2)}_{\bar L_{q}^1}- {\left(\nabla_{\L}^1\right)}^{\sigma(\bar K_t^2)}_{\bar L_{q}^1} {(\L^1)}^{\sigma(\bar L_t^2)}_{\sigma(\bar K_t^2)}, {\left(\nabla_{\L}^2\right)}^{\Phi_t^2}_{\bar L_t^2}{(\L^1)}^{\bar L_{q}^1}_{\sigma(\Phi_t^2)}\right\rangle. \end{equation} As in the previous case, an additional term arises if $\sigma(\bar L_t^2)= \bar L_{q}^1$, and its contribution to \eqref{cont22} vanishes.
To find the total contribution of the first terms in \eqref{trick12} and \eqref{cont22}, note that by \eqref{lnal}, in this computation one can replace the row set $\bar L_q^1$ of $\nabla_{{\mathcal L}}^1{\mathcal L}^1$ with $\sigma(\bar L_t^2)$. Therefore, the contribution in question equals \begin{multline}\label{tfor4} \left\langle {\left(\nabla_{\L}^1\L^1\right)}^{\sigma(\bar L_t^2)}_{\sigma(\bar L_t^2)}, {\left(\nabla_{\L}^2\right)}^{\bar K_t^2\setminus \Phi_t^2}_{\bar L_t^2} {(\L^1)}^{\sigma(\bar L_t^2)}_{\sigma(\bar K_t^2\setminus\Phi_t^2)} +{\left(\nabla_{\L}^2\right)}^{\Phi_t^2}_{\bar L_t^2}{(\L^1)}^{\sigma(\bar L_t^2)}_{\sigma(\Phi_t^2)} \right\rangle\\ =\left\langle {\left(\nabla_{\L}^1\L^1\right)}^{\sigma(\bar L_t^2)}_{\sigma(\bar L_t^2)}, {\left(\nabla_{\L}^2\right)}^{\bar K_t^2\setminus \Phi_t^2}_{\bar L_t^2} {(\L^2)}^{\bar L_t^2}_{\bar K_t^2\setminus\Phi_t^2}+ {\left(\nabla_{\L}^2\right)}^{\Phi_t^2}_{\bar L_t^2}{(\L^2)}^{\bar L_t^2}_{\Phi_t^2} \right\rangle\\ =\left\langle {\left(\nabla_{\L}^1\L^1\right)}^{\sigma(\bar L_t^2)}_{\sigma(\bar L_t^2)}, {\left(\nabla_{\L}^2\L^2\right)}^{\bar L_t^2}_{\bar L_t^2}- {\left(\nabla_{\L}^2\right)}^{K_{t+1}^2}_{\bar L_t^2}W_t \right\rangle, \end{multline} where \[ W_t=\begin{bmatrix} {(\L^2)}_{K_{t+1}^2}^{\Psi_{t+1}^2} & 0\end{bmatrix}. \] Note that \[ \left\langle {\left(\nabla_{\L}^1\L^1\right)}^{\sigma(\bar L_t^2)}_{\sigma(\bar L_t^2)} {\left(\nabla_{\L}^2\L^2\right)}^{\bar L_t^2}_{\bar L_t^2}\right\rangle=\text{const} \] by \eqref{lnal}, which gives the first summand in the last sum in \eqref{etareta}. 
The remaining term is given by \begin{equation*} -\left\langle {\left(\nabla_{\L}^1\L^1\right)}_{\sigma(\bar L_t^2)}^{\sigma(\bar L_t^2)} {\left(\nabla_{\L}^2\right)}^{ K_{t+1}^2}_{\bar L_t^2}W_t\right\rangle= -\left\langle {\left(\nabla_{\L}^1\L^1\right)}_{\sigma(\Psi_{t+1}^2)}^{\sigma(\Psi_{t+1}^2)} {\left(\nabla_{\L}^2\right)}^{ K_{t+1}^2}_{\Psi_{t+1}^2}{(\L^2)}_{ K_{t+1}^2}^{\Psi_{t+1}^2}\right\rangle, \end{equation*} which coincides with the expression for $\bar B^{\mbox{\tiny\rm I}}_t$ in \eqref{bad3}.
It remains to compute the contribution of the second term in \eqref{cont22}. Similarly to \eqref{tfor3}, we have \[ {(\L^1)}^{\bar L_{q}^1}_{\sigma(\Phi_t^2)} {\left(\nabla_{\L}^1\right)}^{\sigma\left(\bar K_t^2\right)}_{\bar L_{q}^1}= {\left(\L^1\nabla_{\L}^1\right)}^{\sigma(\bar K_t^2)}_{\sigma(\Phi_t^2)}. \] On the other hand, similarly to \eqref{tfor4}, we have \[ {(\L^2)}^{\bar L_t^2}_{\bar K_t^2}{\left(\nabla_{\L}^2\right)}^{\Phi_t^2}_{\bar L_t^2}= {\left(\L^2\nabla_{\L}^2\right)}^{\Phi_t^2}_{\bar K_t^2}- Z_t{\left(\nabla_{\L}^2\right)}^{\Phi_t^2}_{{ L}_{t}^2}, \] where \[ Z_t=\begin{bmatrix} 0 \\ {(\L^2)}^{{L}^2_{t}}_{\Phi_t^2}\end{bmatrix}. \] Using \eqref{lnal} once again, we get \[ -\left\langle {\left(\L^1\nabla_{\L}^1\right)}^{\sigma(\bar K_t^2)}_{\sigma(\Phi_t^2)} {\left(\L^2\nabla_{\L}^2\right)}^{\Phi_t^2}_{\bar K_t^2}\right\rangle= -\left\langle{\left(\L^1\nabla_{\L}^1\right)}^{\sigma(\Phi_t^2)}_{\sigma(\Phi_t^2)} {\left(\L^2\nabla_{\L}^2\right)}^{\Phi_t^2}_{\Phi_t^2}\right\rangle=\text{const}, \] which together with the contribution of the second term in \eqref{trick12} computed above yields the second summand in the last sum in \eqref{etareta}. The remaining term is given by \begin{equation*} \left\langle {\left(\L^1\nabla_{\L}^1\right)}^{\sigma(\bar K_t^2)}_{\sigma(\Phi_t^2)} Z_t{\left(\nabla_{\L}^2\right)}^{\Phi_t^2}_{{L}_{t}^2}\right\rangle= \left\langle {\left(\L^1\nabla_{\L}^1\right)}^{\sigma(\Phi_t^2)}_{\sigma(\Phi_t^2)} {(\L^2)}_{\Phi_t^2}^{L_t^2}{\left(\nabla_{\L}^2\right)}^{\Phi_t^2}_{L_{t}^2} \right\rangle, \end{equation*} which coincides with the expression for $\bar B^{\mbox{\tiny\rm II}}_t$ in \eqref{bad4}.
Assume now that $\bar\alpha_t^2=\bar\alpha_q^1$ and hence $\bar I^2_t= \bar I^1_{q}$. In this case the blocks $Y^{\bar J^2_t}_{\bar I^2_t}$ and $Y^{\bar J^1_{q}}_{\bar I^1_{q}}$ have the same height, and one of them lies inside the other, but the direction of the inclusion may vary, and hence $\sigma$ is not defined.
Note that by \eqref{pigalo2}, to compute the second term in \eqref{secondterm} in this case, one can omit the rows $\hat{\bar I}_t^2$ in \eqref{pigaup2}, and hence the contribution in question equals \begin{equation*} \left\langle{\left(\L^1\nabla_{\L}^1\right)}^{\Phi_q^1}_{\bar K_q^1\setminus\Phi_q^1} {(\L^2)}^{\bar L_t^2}_{\Phi_t^2}{\left(\nabla_{\L}^2\right)}^{\bar K_t^2\setminus\Phi_t^2}_{\bar L_t^2}\right\rangle, \end{equation*} which coincides with the expression for $\bar B^{\mbox{\tiny\rm III}}_t$ in \eqref{bad3eq}. \end{proof}
\subsubsection{Explicit expression for $\left\langle{\mathring{\gamma}}^{{\rm c}*}(\mathring{\xi}_L^1)_{\le},{\mathring{\gamma}}^{{\rm c}*}(\nabla_Y^2 Y)\right\rangle$} \label{xinaysection} Assume that $p$ and $q$ are defined by \eqref{blockp} and \eqref{blockq}, respectively, and let $\sigma$ be the injection of $\bar K_t^2$ and $\bar L_t^2$ into $\bar K_q^1$ and $\bar L_q^1$, respectively, defined at the beginning of Section \ref{etaretasec}. Put \begin{equation}\label{bad5} \bar B^{\mbox{\tiny\rm IV}}_{t}=\left\langle {\left(\nabla_{\L}^1\L^1\right)}_{\sigma(\Psi_{t+1}^2)}^{\sigma(\Psi_{t+1}^2)} {\left(\nabla_{\L}^2\right)}^{\bar K_{t}^2}_{\Psi_{t+1}^2}{(\L^2)}_{\bar K_{t}^2}^{\Psi_{t+1}^2}\right\rangle. \end{equation}
\begin{lemma}\label{xinaylemma} {\rm (i)} The expression $\left\langle{\mathring{\gamma}}^{{\rm c}*}(\mathring{\xi}_L^1)_{\le},{\mathring{\gamma}}^{{\rm c}*}(\nabla_Y^2 Y)\right\rangle$ is given by
\begin{multline}\label{xinay} \left\langle{\mathring{\gamma}}^{{\rm c}*}(\mathring{\xi}_L^1)_{\le},{\mathring{\gamma}}^{{\rm c}*}(\nabla_Y^2 Y)\right\rangle= \sum_{\beta_t^2\le \beta_p^1 }B^{\mbox{\tiny\rm II}}_t+\sum_{ \bar\beta_{t}^2>\bar\beta_{p-1}^1 }\bar B^{\mbox{\tiny\rm IV}}_{t}\\ +\sum_{u=1}^{p} \sum_{t=1}^{s^2} \left\langle(\nabla_{{\mathcal L}}^1{\mathcal L}^1)_{L_u^1\to J_u^1}^{L_u^1\to J_u^1}, {\mathring{\gamma}}^{{\rm c}*}(\nabla_{{\mathcal L}}^2{\mathcal L}^2)_{\bar L_{t}^2\setminus\Psi_{t+1}^2\to \bar J_{t}^2\setminus\bar\Delta(\bar\beta_{t}^2)} ^{\bar L_{t}^2\setminus\Psi_{t+1}^2\to \bar J_{t}^2\setminus\bar\Delta(\bar\beta_{t}^2)}\right\rangle\\ +\sum_{u=1}^{p-1} \sum_{t=1}^{s^2} \left\langle (\nabla_{{\mathcal L}}^1{\mathcal L}^1)_{\bar L_{u}^1\setminus\Psi_{u+1}^1\to \bar J_{u}^1\setminus\bar\Delta(\bar\beta_{u}^1)} ^{\bar L_{u}^1\setminus\Psi_{u+1}^1\to \bar J_{u}^1\setminus\bar\Delta(\bar\beta_{u}^1)}, \mathring{\Pi}_{\Gamma_2^{\rm c}}(\nabla_{{\mathcal L}}^2{\mathcal L}^2)_{\bar L_{t}^2\setminus\Psi_{t+1}^2\to \bar J_{t}^2\setminus\bar\Delta(\bar\beta_{t}^2)} ^{\bar L_{t}^2\setminus\Psi_{t+1}^2\to \bar J_{t}^2\setminus\bar\Delta(\bar\beta_{t}^2)}\right\rangle\\
+\sum_{t=1}^{s^2}\left(|\{u<p: \beta_u^1\ge \beta_{t+1}^2\}|+ |\{u< p: \bar\beta_{u-1}^1< \bar\beta_t^2\}| \right) \left\langle{\left(\nabla_{\L}^2\right)}_{\Psi_{t+1}^2}^{\bar K_{t}^2}{(\L^2)}_{\bar K_{t}^2}^{\Psi_{t+1}^2}\right\rangle, \end{multline} where $B^{\mbox{\tiny\rm II}}_t$ is given by \eqref{bad2} with $\rho(\Phi_t^2)$ replaced by $\Phi_p^1$ for $\beta_p^1= \beta_t^2$, and $\bar B^{\mbox{\tiny\rm IV}}_{t}$ is given by \eqref{bad5}.
{\rm (ii)} Each summand in the last three sums in \eqref{xinay} is constant. \end{lemma}
\begin{proof}
Recall that by \eqref{pigamma}, this term can be rewritten as $\left\langle\mathring{\Pi}_{\Gamma_1}(\mathring{\eta}^1_L)_\le, {\mathring{\gamma}}^*(\nabla_Y^2 Y)\right\rangle$ with $\Gamma_1=\Gamma_1^{\rm c}$ and ${\mathring{\gamma}}={{\cgamma^{\rm c}}}$.
Note that $\nabla_X^i X$ has already been computed in \eqref{naxx}. Let us compute ${\mathring{\gamma}}^*(\nabla_Y^i Y)$. Taking into account \eqref{naxnay} and \eqref{naxyxy}, we get \begin{equation*} {\mathring{\gamma}}^*(\nabla_Y^i Y)=\sum_{t=2}^{s^i+1}{\mathring{\gamma}}^*\begin{bmatrix} 0 & 0\\ \ast & (\nabla_{{\mathcal L}}^i)_{\bar L_{t-1}^i}^{\bar K_{t-1}^i}Y_{\bar I_{t-1}^i}^{\bar J_{t-1}^i} \end{bmatrix}= \sum_{t=2}^{s^i+1}{\mathring{\gamma}}^*\begin{bmatrix} 0 & 0\\ \ast & (\nabla_{{\mathcal L}}^i)_{\Psi_t^i}^{\bar K_{t-1}^i}({\mathcal L}^i)_{\bar K_{t-1}^i}^{\bar L_{t-1}^i}\\ \ast & (\nabla_{{\mathcal L}}^i{\mathcal L}^i)_{\bar L_{t-1}^i\setminus\Psi_t^i}^{\bar L_{t-1}^i}\end{bmatrix}; \end{equation*} the latter equality is similar to the one used in the derivation of the expression for $\nabla_X^i X$ in the proof of Lemma \ref{etaletalemma}. In more detail, \begin{multline}\label{ganayy} {\mathring{\gamma}}^*(\nabla_Y^i Y)=\\ \sum_{t=2}^{s^i+1}{\mathring{\gamma}}^*\begin{bmatrix} 0 & 0 & 0 \\ 0 & (\nabla_{{\mathcal L}}^i)_{\Psi_t^i}^{\bar K_{t-1}^i}({\mathcal L}^i)_{\bar K_{t-1}^i}^{\Psi_t^i} & 0 \\ 0 & 0 & 0\end{bmatrix}+ \sum_{t=2}^{s^i+1}{\mathring{\gamma}}^*\begin{bmatrix} 0 & 0 \\ 0 &(\nabla_{{\mathcal L}}^i{\mathcal L}^i)_{\bar L_{t-1}^i\setminus\Psi_t^i}^{\bar L_{t-1}^i\setminus\Psi_t^i}\end{bmatrix}. \end{multline}
Note that the diagonal block in the first term in \eqref{ganayy} corresponds to the nontrivial column $Y$-run $\bar\Delta(\bar\beta_{t-1}^i)$, unless $t=s^i+1$ and $\Psi_{s^i+1}^i=\varnothing$. Therefore, ${\mathring{\gamma}}^*$ moves it to the diagonal block corresponding to the nontrivial column $X$-run $\Delta(\beta_t^i)$ occupied by $(\nabla_{{\mathcal L}}^i)_{\Psi_t^i}^{K_t^i}({\mathcal L}^i)_{K_t^i}^{\Psi_t^i}$ in \eqref{naxx}.
Consequently, the resulting diagonal block in $\mathring{\eta}_L^i$ is equal to \begin{equation}\label{compli} (\nabla_{{\mathcal L}}^i)_{\Psi_t^i}^{K_t^i}({\mathcal L}^i)_{K_t^i}^{\Psi_t^i}+(\nabla_{{\mathcal L}}^i)_{\Psi_t^i}^{\bar K_{t-1}^i} ({\mathcal L}^i)_{\bar K_{t-1}^i}^{\Psi_t^i}=(\nabla_{{\mathcal L}}^i{\mathcal L}^i)_{\Psi_t^i}^{\Psi_t^i} \end{equation} for $1\le t\le s^i+1$; note that the first term in the left hand side of \eqref{compli} vanishes for $t=s^i+1$, and the second term vanishes for $t=1$.
Further, the projection $\mathring{\Pi}_{\Gamma_1}$ of the second block in the first row of \eqref{naxx} vanishes. Summing up and applying \eqref{lnal}, we get \begin{equation}\label{leftt} \mathring{\Pi}_{\Gamma_1}(\mathring{\eta}_L^1)_\le= \sum_{u=1}^{s^1+1} \mathring{\Pi}_{\Gamma_1} \begin{bmatrix} (\nabla_{{\mathcal L}}^1{\mathcal L}^1)_{L_u^1}^{L_u^1} & 0\\ 0 & 0\end{bmatrix}+ \sum_{u=2}^{s^1+1} {\mathring{\gamma}}^*\begin{bmatrix} 0 & 0\\ 0 & (\nabla_{{\mathcal L}}^1{\mathcal L}^1)_{\bar L_{u-1}^1\setminus\Psi_u^1}^{\bar L_{u-1}^1\setminus\Psi_u^1}\end{bmatrix}. \end{equation}
Recall that $\hat l^1\in L_p^1\cup \bar L_{p-1}^1$ by \eqref{blockp}. Therefore, for any $u>p$ both terms in \eqref{leftt} vanish. Consequently, by the ringed version of \eqref{gammaid}, the contribution of the second term in expression \eqref{ganayy} for the second function to the final result equals \begin{multline*} \sum_{u=1}^{p} \sum_{t=1}^{s^2} \left\langle (\nabla_{{\mathcal L}}^1{\mathcal L}^1)_{L_u^1\to J_u^1}^{L_u^1\to J_u^1}, {\mathring{\gamma}}^*(\nabla_{{\mathcal L}}^2{\mathcal L}^2)_{\bar L_{t}^2\setminus\Psi_{t+1}^2\to \bar J_{t}^2\setminus\bar\Delta(\bar\beta_{t}^2)} ^{\bar L_{t}^2\setminus\Psi_{t+1}^2\to \bar J_{t}^2\setminus\bar\Delta(\bar\beta_{t}^2)}\right\rangle\\ +\sum_{u=1}^{p-1} \sum_{t=1}^{s^2} \left\langle (\nabla_{{\mathcal L}}^1{\mathcal L}^1)_{\bar L_{u}^1\setminus\Psi_{u+1}^1\to \bar J_{u}^1\setminus\bar\Delta(\bar\beta_{u}^1)} ^{\bar L_{u}^1\setminus\Psi_{u+1}^1\to \bar J_{u}^1\setminus\bar\Delta(\bar\beta_{u}^1)}, \mathring{\Pi}_{\Gamma_2} (\nabla_{{\mathcal L}}^2{\mathcal L}^2)_{\bar L_{t}^2\setminus\Psi_{t+1}^2\to \bar J_{t}^2\setminus\bar\Delta(\bar\beta_{t}^2)} ^{\bar L_{t}^2\setminus\Psi_{t+1}^2\to \bar J_{t}^2\setminus\bar\Delta(\bar\beta_{t}^2)}\right\rangle, \end{multline*} which yields the third and the fourth sums in \eqref{xinay}. Note that each summand in both sums is constant by \eqref{lnal}.
Further, for any $u<p$, the nonzero blocks in both terms in \eqref{leftt} are just identity matrices by \eqref{lnal}. Hence, the corresponding contribution of the first term in expression \eqref{ganayy} for the second function to the final result equals \begin{equation}\label{trace}
\sum_{t=1}^{s^2}\left(|\{u<p: \beta_u^1\ge \beta_{t+1}^2\}|+ |\{u<p: \bar\beta_{u-1}^1< \bar\beta_t^2\}| \right) \left\langle{\left(\nabla_{\L}^2\right)}_{\Psi_{t+1}^2}^{\bar K_{t}^2}{(\L^2)}_{\bar K_{t}^2}^{\Psi_{t+1}^2}\right\rangle, \end{equation} which yields the fifth sum in \eqref{xinay}. It follows immediately from the proof of Lemma \ref{partrace} that the trace $\left\langle(\nabla_{{\mathcal L}})_{\Psi_{t+1}}^{\bar K_{t}}{\mathcal L}_{\bar K_{t}}^{\Psi_{t+1}}\right\rangle$ is a constant.
Finally, let $u=p$. Let us find the contribution of the first term in \eqref{leftt}. From now on we are looking at the $t$-th summand in the first term of \eqref{ganayy} for the second function. If $\beta_p^1<\beta_t^2$ then the contribution of this summand vanishes by the same size considerations as in the proof of Lemma \ref{etaletalemma}.
If $\beta_p^1>\beta_t^2$ then the contribution in question equals \begin{equation*} \left\langle{\left(\nabla_{\L}^1\L^1\right)}_{\rho(\Psi_t^2)}^{\rho(\Psi_t^2)} {\left(\nabla_{\L}^2\right)}_{\Psi_t^2}^{\bar K_{t-1}^2}{(\L^2)}^{\Psi_t^2}_{\bar K_{t-1}^2}\right\rangle, \end{equation*} which coincides with $B^{\mbox{\tiny\rm II}}_t$ given by \eqref{bad2} and yields the first sum in \eqref{xinay}.
If $\beta_p^1=\beta_t^2$ then the contribution in question remains the same as in the previous case with $\rho(\Phi_t^2)$ replaced by $\Phi_p^1$.
Let us find the contribution of the second term in \eqref{leftt}. Note that ${\mathring{\gamma}}^*$ enters both the second term in \eqref{leftt} and the first term in \eqref{ganayy}, consequently, we can drop it in the former and replace it by $\mathring{\Pi}_{\Gamma_2}$ in the latter, which effectively means that ${\mathring{\gamma}}^*$ is simultaneously dropped in both terms.
From now on we are looking at the $t$-th summand in the first term of \eqref{ganayy}. However, since we have dropped ${\mathring{\gamma}}^*$, this means that we are comparing the $(t-1)$-st $Y$-block in ${\mathcal L}^2$ with the $(p-1)$-st $Y$-block in ${\mathcal L}^1$. If $\bar\beta_{p-1}^1\geq \bar\beta_{t-1}^2$ then the contribution of this summand vanishes by the same size considerations as before.
If $\bar\beta_{p-1}^1< \bar\beta_{t-1}^2$, then the contribution in question equals \begin{equation*} \left\langle{\left(\nabla_{\L}^1\L^1\right)}_{\sigma(\Psi_{t}^2)}^{\sigma(\Psi_{t}^2)} {\left(\nabla_{\L}^2\right)}_{\Psi_{t}^2}^{\bar K_{t-1}^2}{(\L^2)}^{\Psi_{t}^2}_{\bar K_{t-1}^2}\right\rangle, \end{equation*} which coincides with $\bar B^{\mbox{\tiny\rm IV}}_{t-1}$ given by \eqref{bad5}, and hence yields the second sum in \eqref{xinay}.
\end{proof}
\subsubsection{Explicit expression for $\left\langle{\mathring{\gamma}}^{{\rm r}}(\mathring{\xi}_R^1)_{\ge},{\mathring{\gamma}}^{{\rm r}}(X\nabla_X^2 )\right\rangle$} \label{xinaxsection} Assume that $p$, $q$, and $\sigma$ are the same as in Section \ref{xinaysection} and $\rho$ be the injection of $K_t^2$ and $L_t^2$ into $K_p^1$ and $L_p^1$, respectively, defined at the beginning of Section \ref{etaletasec}. Put \begin{equation}\label{bad6} B^{\mbox{\tiny\rm IV}}_t=\left\langle{\left(\L^1\nabla_{\L}^1\right)}_{\rho(\Phi_t^2)}^{\rho(\Phi_t^2)} {(\L^2)}_{\Phi_t^2}^{L_{t}^2}{\left(\nabla_{\L}^2\right)}^{\Phi_t^2}_{L_{t}^2}\right\rangle. \end{equation}
\begin{lemma}\label{xinaxlemma} {\rm (i)} The expression $\left\langle{\mathring{\gamma}}^{{\rm r}}(\mathring{\xi}_R^1)_{\ge},{\mathring{\gamma}}^{{\rm r}}(X\nabla_X^2 )\right\rangle$ is given by
\begin{multline}\label{xinax} \left\langle{\mathring{\gamma}}^{{\rm r}}(\mathring{\xi}_R^1)_{\ge},{\mathring{\gamma}}^{{\rm r}}(X\nabla_X^2 )\right\rangle= \sum_{\bar\alpha_t^2\le \bar\alpha_{p-1}^1}\bar B^{\mbox{\tiny\rm II}}_t +\sum_{\bar\alpha_t^2\le \bar\alpha_p^1}\bar B^{\mbox{\tiny\rm II}}_t +\sum_{\alpha_t^2>\alpha_p^1}B^{\mbox{\tiny\rm IV}}_{t}\\ +\sum_{u=1}^{p} \sum_{t=1}^{s^2} \left\langle({\mathcal L}^1\nabla_{{\mathcal L}}^1)_{\bar K_u^1\to \bar I_u^1}^{\bar K_u^1\to \bar I_u^1}, {\cgamma^{\rm r}}({\mathcal L}^2\nabla_{{\mathcal L}}^2)_{K_{t}^2\setminus\Phi_{t}^2\to I_{t}^2\setminus\Delta(\alpha_{t}^2)} ^{K_{t}^2\setminus\Phi_{t}^2\to I_{t}^2\setminus\Delta(\alpha_{t}^2)} \right\rangle\\ +\sum_{u=1}^{p} \sum_{t=1}^{s^2} \left\langle ({\mathcal L}^1\nabla_{{\mathcal L}}^1)_{K_{u}^1\setminus\Phi_{u}^1\to I_{u}^1\setminus\Delta(\alpha_{u}^1)} ^{K_{u}^1\setminus\Phi_{u}^1\to I_{u}^1\setminus\Delta(\alpha_{u}^1)}, \mathring{\Pi}_{\Gamma_1^{\rm r}}({\mathcal L}^2\nabla_{{\mathcal L}}^2)_{K_{t}^2\setminus\Phi_{t}^2\to I_{t}^2\setminus\Delta(\alpha_{t}^2)} ^{K_{t}^2\setminus\Phi_{t}^2\to I_{t}^2\setminus\Delta(\alpha_{t}^2)} \right\rangle\\
+\sum_{t=1}^{s^2}\left(|\{u<p-1: \bar\alpha_u^1\ge \bar\alpha_t^2\}|+ |\{u< p: \alpha_u^1< \alpha_t^2\}| \right) \left\langle{(\L^2)}^{L_{t}^2}_{\Phi_t^2}{\left(\nabla_{\L}^2\right)}^{\Phi_t^2}_{L_{t}^2}\right\rangle, \end{multline} where $\bar B^{\mbox{\tiny\rm II}}_t$ is given by \eqref{bad4} with $\sigma(\Phi_t^2)$ replaced by $\Phi_q^1$ for $\bar\alpha_q^1=\bar\alpha_t^2$, and $B^{\mbox{\tiny\rm IV}}_t$ is given by \eqref{bad6}.
{\rm (ii)} Each summand in the last three sums in \eqref{xinax} is constant. \end{lemma}
\begin{proof} Recall that by \eqref{pigamma2}, this term can be rewritten as $\left\langle\mathring{\Pi}_{\Gamma_2}(\mathring{\eta}^1_R)_\ge, {\mathring{\gamma}}(X\nabla_X^2)\right\rangle$ with $\Gamma_2=\Gamma_2^{\rm r}$ and ${\mathring{\gamma}}={\cgamma^{\rm r}}$.
Note that $Y\nabla_Y^i$ has already been computed in \eqref{ynay}. Let us compute ${\mathring{\gamma}}(X\nabla_X^i)$. Taking into account \eqref{naxnay} and \eqref{xynaxy}, we get \begin{equation}\label{gaxnax} {\mathring{\gamma}}(X\nabla_X^i)= \sum_{t=1}^{s^i}{\mathring{\gamma}}\begin{bmatrix} 0 & 0 & 0 \\ 0 & ({\mathcal L}^i)^{ L_{t}^i}_{\Phi_t^i}(\nabla_{{\mathcal L}}^i)^{\Phi_t^i}_{L_{t}^i} & 0 \\ 0 & 0 & 0\end{bmatrix}+ \sum_{t=1}^{s^i}{\mathring{\gamma}}\begin{bmatrix} 0 & 0 \\ 0 &({\mathcal L}^i\nabla_{{\mathcal L}}^i)_{K_{t}^i\setminus\Phi_t^i}^{K_{t}^i\setminus\Phi_t^i}\end{bmatrix}, \end{equation} similarly to \eqref{ganayy}.
Note first that the diagonal block in the first term in \eqref{gaxnax} corresponds to the nontrivial row $X$-run $\Delta(\alpha_t^i)$, unless $t=1$ and the first $X$-block is dummy, or $t=s^i$ and $\Phi_{s^i}^i=\varnothing$. Hence, ${\mathring{\gamma}}$ moves it to the diagonal block corresponding to the nontrivial row $Y$-run $\bar\Delta(\bar\alpha_t^i)$ occupied by $({\mathcal L}^i)^{\bar L_t^i}_{\Phi_t^i}(\nabla_{{\mathcal L}}^i)^{\Phi_t^i}_{\bar L_t^i}$ in \eqref{ynay}. Consequently, the resulting diagonal block in $\mathring{\eta}_R^i$ is equal to \begin{equation}\label{compli2} ({\mathcal L}^i)^{\bar L_t^i}_{\Phi_t^i}(\nabla_{{\mathcal L}}^i)^{\Phi_t^i}_{\bar L_t^i}+ ({\mathcal L}^i)^{L_{t}^i}_{\Phi_t^i}(\nabla_{{\mathcal L}}^i)^{\Phi_t^i}_{L_{t}^i}= ({\mathcal L}^i\nabla_{{\mathcal L}}^i)_{\Phi_t^i}^{\Phi_t^i} \end{equation} (if the first $X$-block is dummy and $\Phi_1^i\ne\varnothing$, the second term in the left hand side vanishes; for $\Phi_{t}^i=\varnothing$ relation \eqref{compli2} holds trivially with all three terms void).
Moreover, the projection $\mathring{\Pi}_{\Gamma_2}$ of the second block in the first column of \eqref{ynay} vanishes. Summing up and applying \eqref{lnal}, we get \begin{equation}\label{leftt2} \mathring{\Pi}_{\Gamma_2}(\mathring{\eta}_R^1)_\ge= \sum_{u=1}^{s^1} \mathring{\Pi}_{\Gamma_2}\begin{bmatrix} ({\mathcal L}^1\nabla_{{\mathcal L}}^1)^{\bar K_u^1}_{\bar K_u^1} & 0\\ 0 & 0 \end{bmatrix}+ \sum_{u=1}^{s^1} {\mathring{\gamma}}\begin{bmatrix} 0 & 0\\ 0 & ({\mathcal L}^1\nabla_{{\mathcal L}}^1)_{K_{u}^1\setminus\Phi_u^1}^{K_{u}^1\setminus\Phi_u^1}\end{bmatrix}. \end{equation}
Recall that $\hat l^1\in K_p^1\cup \bar K_{p-1}^1$, see Section \ref{etaretasec}. Therefore, for any $u>p$ both terms in \eqref{leftt2} vanish. Hence, the contribution of the second term in \eqref{gaxnax} to the final result equals \begin{multline*} \sum_{u=1}^{p} \sum_{t=1}^{s^2} \left\langle({\mathcal L}^1\nabla_{{\mathcal L}}^1)_{\bar K_u^1\to \bar I_u^1}^{\bar K_u^1\to \bar I_u^1}, {\mathring{\gamma}}({\mathcal L}^2\nabla_{{\mathcal L}}^2)_{K_{t}^2\setminus\Phi_{t}^2\to I_{t}^2\setminus\Delta(\alpha_{t}^2)} ^{K_{t}^2\setminus\Phi_{t}^2\to I_{t}^2\setminus\Delta(\alpha_{t}^2)} \right\rangle\\ +\sum_{u=1}^{p} \sum_{t=1}^{s^2} \left\langle ({\mathcal L}^1\nabla_{{\mathcal L}}^1)_{K_{u}^1\setminus\Phi_{u}^1\to I_{u}^1\setminus\Delta(\alpha_{u}^1)} ^{K_{u}^1\setminus\Phi_{u}^1\to I_{u}^1\setminus\Delta(\alpha_{u}^1)}, \mathring{\Pi}_{\Gamma_1}({\mathcal L}^2\nabla_{{\mathcal L}}^2)_{K_{t}^2\setminus\Phi_{t}^2\to I_{t}^2\setminus\Delta(\alpha_{t}^2)} ^{K_{t}^2\setminus\Phi_{t}^2\to I_{t}^2\setminus\Delta(\alpha_{t}^2)} \right\rangle, \end{multline*} which yields the fourth and the fifth sums in \eqref{xinax}. Note that each summand in both sums is constant by \eqref{lnal}.
For any $u<p-1$, the nonzero blocks in both terms in \eqref{leftt2} are just identity matrices by \eqref{lnal}. Therefore, the corresponding contribution of the first term of \eqref{gaxnax} for the second function to the final result equals \begin{equation*}
\sum_{t=1}^{s^2}\left(|\{u<p-1: \bar\alpha_u^1\ge \bar\alpha_t^2\}|+ |\{u< p-1: \alpha_u^1< \alpha_t^2\}| \right) \left\langle{(\L^2)}^{L_{t}^2}_{\Phi_t^2}{\left(\nabla_{\L}^2\right)}^{\Phi_t^2}_{L_{t}^2}\right\rangle, \end{equation*} which is similar to \eqref{trace} and is constant for the same reason.
Further, let $u=p-1$. Then the nonzero block in the second term in \eqref{leftt2} is again an identity matrix, and hence the inequality $u<p-1$ in the second term above is replaced by $u<p$, which yields the last sum in \eqref{xinax}.
Let us find the contribution of the first term in \eqref{leftt2}. From now on we are looking at the summation index $t$ in \eqref{gaxnax} for the second function; recall that it corresponds to the $t$-th $Y$-block. If $\bar\alpha_{p-1}^1<\bar\alpha_t^2$ then the contribution of this summand vanishes by size considerations, similarly to the proof of Lemma \ref{xinaylemma}.
If $\bar\alpha_{p-1}^1>\bar\alpha_t^2$, then the contribution in question equals \begin{equation*}\label{qcontr} \left\langle{\left(\L^1\nabla_{\L}^1\right)}_{\sigma(\Phi_t^2)}^{\sigma(\Phi_t^2)} {(\L^2)}_{\Phi_t^2}^{L_{t}^2}{\left(\nabla_{\L}^2\right)}^{\Phi_t^2}_{L_{t}^2}\right\rangle, \end{equation*} which coincides with $\bar B^{\mbox{\tiny\rm II}}_t$ given by \eqref{bad4}. If $\bar\alpha_{p-1}^1=\bar\alpha_t^2$ then the contribution in question remains the same as in the previous case with $\sigma(\Phi_t^2)$ replaced by $\Phi_{p-1}^1$. Consequently, we get the first sum in \eqref{xinax}.
Finally, let $u=p$. Then the first term in \eqref{leftt2} is treated exactly as in the case $u=p-1$, which gives the second sum in \eqref{xinax}.
Let us find the contribution of the second term in \eqref{leftt2}. Note that ${\mathring{\gamma}}$ enters both the second term in \eqref{leftt2} and the first term in \eqref{gaxnax}; consequently, we can drop it in the former and replace it by $\mathring{\Pi}_{\Gamma_1}$ in the latter, which effectively means that ${\mathring{\gamma}}$ is simultaneously dropped in both terms.
From now on we are looking at the summation index $t$ in \eqref{gaxnax} for the second function. However, since we have dropped ${\mathring{\gamma}}$, this means that we are comparing the $t$-th $X$-block in ${\mathcal L}^2$ with the $p$-th $X$-block in ${\mathcal L}^1$. If $\alpha_p^1\geq \alpha_t^2$ then the contribution of the $t$-th term in \eqref{gaxnax} vanishes by size considerations.
If $\alpha_p^1<\alpha_t^2$ then the contribution in question equals \begin{equation*} \left\langle{\left(\L^1\nabla_{\L}^1\right)}_{\rho(\Phi_t^2)}^{\rho(\Phi_t^2)} {(\L^2)}_{\Phi_t^2}^{L_{t}^2}{\left(\nabla_{\L}^2\right)}^{\Phi_t^2}_{L_{t}^2}\right\rangle, \end{equation*} which coincides with the expression \eqref{bad6} for $B^{\mbox{\tiny\rm IV}}_t$ and yields the third sum in \eqref{xinax}. \end{proof}
\subsection{Proof of Theorem \ref{logcanbasis}: final steps}
Let us find the total contribution of all $B$-terms in the right hand side of \eqref{etaleta}, \eqref{etareta}, \eqref{xinay} and \eqref{xinax}. Recall that $\hat l^1$ lies in rows $K^1_p\cup \bar K^1_{p-1}$ and columns $L^1_p\cup \bar L^1_{p-1}$. We consider the following two cases.
\subsubsection{Case 1: $\hat l^1$ lies in rows $K^1_p$ and columns $L^1_p$}\label{case1} Note that under these conditions, the matrix ${\left(\nabla_{\L}^1\L^1\right)}_{\sigma(\Psi_{t+1}^2)}^{\sigma(\Psi_{t+1}^2)}$ in the expression \eqref{bad3} for $\bar B^{\mbox{\tiny\rm I}}_t$ in \eqref{etareta} vanishes, since rows and columns $\sigma(\Psi_{t+1}^2)$ lie strictly above and to the left of $\hat l^1$. Besides, the matrix ${\left(\L^1\nabla_{\L}^1\right)}_{\bar K_p^1\setminus \Phi_p^1}^{\Phi_p^1}$ in the expression \eqref{bad3eq} for $\bar B^{\mbox{\tiny\rm III}}_t$ in \eqref{etareta} vanishes as well. Indeed, the column ${(\L^1)}_{\bar K_p^1\setminus \Phi_p^1}^j$ vanishes if $j$ lies to the right of $\bar L_p^1$. On the other hand, the $i$-th row of $\nabla_{\mathcal L}^1$ vanishes if $i$ lies above the intersection of the main diagonal with the vertical line corresponding to the right endpoint of $\bar L_p^1$.
Finally, for any $t$ such that $\beta_p^1>\beta_t^2$, the contributions of the term $B^{\mbox{\tiny\rm II}}_t$ given by \eqref{bad2} in \eqref{etaleta} and \eqref{xinay} cancel each other. Similarly, for any $t$ such that $\bar\alpha_p^1>\bar\alpha_t^2$, the contributions of the term $\bar B^{\mbox{\tiny\rm II}}_t$ given by \eqref{bad4} in \eqref{etareta} and \eqref{xinax} cancel each other as well. Taking into account that $\bar\alpha_p^1=\bar\alpha_t^2$ is equivalent to $\alpha_p^1=\alpha_t^2$, we can rewrite the remaining terms as \begin{multline}\label{zone1} \sum \{B^{\mbox{\tiny\rm IV}}_t-B^{\mbox{\tiny\rm I}}_t :\ \beta_p^1>\beta_t^2, \alpha_p^1<\alpha_t^2\} +\sum \{\bar B^{\mbox{\tiny\rm II}}_t-B^{\mbox{\tiny\rm I}}_t :\ \beta_p^1>\beta_t^2, \alpha_p^1=\alpha_t^2\}\\ +\sum \{\bar B^{\mbox{\tiny\rm II}}_t :\ \beta_p^1<\beta_t^2, \alpha_p^1=\alpha_t^2\} +\sum \{B^{\mbox{\tiny\rm II}}_t-B^{\mbox{\tiny\rm III}}_t :\ \beta_p^1=\beta_t^2, \alpha_p^1>\alpha_t^2\}\\ +\sum \{B^{\mbox{\tiny\rm II}}_t-B^{\mbox{\tiny\rm III}}_t+B^{\mbox{\tiny\rm IV}}_t :\ \beta_p^1=\beta_t^2, \alpha_p^1<\alpha_t^2\} +\sum \{B^{\mbox{\tiny\rm II}}_t-B^{\mbox{\tiny\rm III}}_t+\bar B^{\mbox{\tiny\rm II}}_t :\ \beta_p^1=\beta_t^2, \alpha_p^1=\alpha_t^2\}\\ +\sum \{\bar B^{\mbox{\tiny\rm IV}}_t :\ \bar\beta_{p-1}^1<\bar\beta_t^2\} +\sum \{\bar B^{\mbox{\tiny\rm II}}_t :\ \bar\alpha_{p-1}^1\ge\bar\alpha_t^2\}, \end{multline} where $B^{\mbox{\tiny\rm I}}_t$, $B^{\mbox{\tiny\rm III}}_t$, $B^{\mbox{\tiny\rm IV}}_t$, and $\bar B^{\mbox{\tiny\rm IV}}_t$ are given by \eqref{bad1}, \eqref{bad1eq}, \eqref{bad6}, and \eqref{bad5}, respectively.
\begin{lemma}\label{zone1lemma} {\rm (i)} Expression \eqref{zone1} is given by \begin{multline*} \sum_{{{\beta_t^2<\beta_p^1}\atop{\alpha_t^2>\alpha_p^1}}} \left\langle{\left(\L^1\nabla_{\L}^1\right)}_{\rho(\Phi_t^2)}^{\rho(\Phi_t^2)}{\left(\L^2\nabla_{\L}^2\right)}_{\Phi_t^2}^{\Phi_t^2}\right\rangle +\sum_{{{\beta_t^2\ne\beta_p^1}\atop{\alpha_t^2=\alpha_p^1}}} \left\langle{\left(\L^1\nabla_{\L}^1\right)}_{\Phi_p^1}^{\Phi_p^1}{\left(\L^2\nabla_{\L}^2\right)}_{\Phi_t^2}^{\Phi_t^2}\right\rangle\\ +\sum_{{\beta_t^2=\beta_p^1}\atop{\alpha_t^2<\alpha_p^1}} \left\langle {(\L^2)}_{\bar K_{t-1}^2}^{L_t^2}{\left(\nabla_{\L}^2\right)}_{L_t^2}^{\bar K_{t-1}^2}\right\rangle +\sum_{{\beta_t^2=\beta_p^1}\atop{\alpha_t^2\ge\alpha_p^1}} \left\langle{\left(\nabla_{\L}^1\L^1\right)}_{L_p^1}^{L_p^1}{\left(\nabla_{\L}^2\L^2\right)}_{L_t^2}^{L_t^2}\right\rangle\\ -\sum_{{\beta_t^2=\beta_p^1}\atop{\alpha_t^2\ge \alpha_p^1}} \left\langle{\left(\L^1\nabla_{\L}^1\right)}_{\rho(K_t^2\setminus\Phi_t^2)}^{\rho(K_t^2\setminus\Phi_t^2)} {\left(\L^2\nabla_{\L}^2\right)}_{K_t^2\setminus\Phi_t^2}^{K_t^2\setminus\Phi_t^2}\right\rangle +{\sum_{{\beta_t^2=\beta_p^1}\atop{\alpha_t^2=\alpha_p^1}}}^{\rm a} \left\langle {(\L^2)}_{\bar K_{t-1}^2}^{\Psi_t^2}{\left(\nabla_{\L}^2\right)}_{\Psi_t^2}^{\bar K_{t-1}^2}\right\rangle\\ +{\sum_{{\beta_t^2=\beta_p^1}\atop{\alpha_t^2=\alpha_p^1}}}^{\rm a} \left\langle{\left(\L^1\nabla_{\L}^1\right)}_{K_p^1}^{K_p^1}{\left(\L^2\nabla_{\L}^2\right)}_{K_t^2}^{K_t^2}\right\rangle -{\sum_{{{\beta_t^2=\beta_p^1}\atop{\alpha_t^2=\alpha_p^1}}}}^{\rm a} \left\langle{\left(\nabla_{\L}^1\L^1\right)}_{L_p^1}^{L_p^1}{\left(\nabla_{\L}^2\L^2\right)}_{L_t^2}^{L_t^2}\right\rangle\\ +\sum_{\bar\beta_t^2>\bar\beta_{p-1}^1}\left\langle{(\L^2)}_{\bar K_t^2}^{\Psi_{t+1}^2}{\left(\nabla_{\L}^2\right)}_{\Psi_{t+1}^2}^{\bar K_t^2}\right\rangle +\sum_{\bar\alpha_t^2\le \bar\alpha_{p-1}^1}\left\langle{(\L^2)}_{\Phi_t^2}^{L_t^2}{\left(\nabla_{\L}^2\right)}_{L_t^2}^{\Phi_t^2}\right\rangle, 
\end{multline*} where $\sum^{\rm a}$ is taken over the cases when the exit point of $X_{I_t^2}^{J_t^2}$ lies above the exit point of $X_{I_p^1}^{J_p^1}$.
{\rm (ii)} Each summand in the expression above is a constant. \end{lemma}
\begin{proof} To find the first term in \eqref{zone1} note that for any fixed $t$ satisfying the corresponding conditions one has \begin{multline}\label{motive} B^{\mbox{\tiny\rm IV}}_t-B^{\mbox{\tiny\rm I}}_t= \left\langle{\left(\L^1\nabla_{\L}^1\right)}_{\rho(\Phi_t^2)}^{\rho(\Phi_t^2)} {(\L^2)}_{\Phi_t^2}^{L_{t}^2}{\left(\nabla_{\L}^2\right)}_{L_t^2}^{\Phi_t^2}\right\rangle+ \left\langle{\left(\L^1\nabla_{\L}^1\right)}_{\rho(\Phi_t^2)}^{\rho(\Phi_t^2)} {(\L^2)}_{\Phi_t^2}^{\bar L_{t}^2}{\left(\nabla_{\L}^2\right)}_{\bar L_{t}^2}^{\Phi_t^2}\right\rangle\\ =\left\langle{\left(\L^1\nabla_{\L}^1\right)}_{\rho(\Phi_t^2)}^{\rho(\Phi_t^2)} {\left(\L^2\nabla_{\L}^2\right)}_{\Phi_t^2}^{\Phi_t^2}\right\rangle=\text{const} \end{multline} via \eqref{compli2} and \eqref{lnal}, which yields the first term in the statement of the lemma.
Similarly, to treat the second term in \eqref{zone1} we note that under the corresponding conditions \begin{multline}\label{termtwo} \bar B^{\mbox{\tiny\rm II}}_t-B^{\mbox{\tiny\rm I}}_t= \left\langle{\left(\L^1\nabla_{\L}^1\right)}_{\Phi_p^1}^{\Phi_p^1} {(\L^2)}_{\Phi_t^2}^{L_{t}^2}{\left(\nabla_{\L}^2\right)}_{L_{t}^2}^{\Phi_t^2}\right\rangle+ \left\langle{\left(\L^1\nabla_{\L}^1\right)}_{\Phi_p^1}^{\Phi_p^1} {(\L^2)}_{\Phi_t^2}^{\bar L_{t}^2}{\left(\nabla_{\L}^2\right)}_{\bar L_t^2}^{\Phi_t^2}\right\rangle\\ =\left\langle{\left(\L^1\nabla_{\L}^1\right)}_{\Phi_p^1}^{\Phi_p^1} {\left(\L^2\nabla_{\L}^2\right)}_{\Phi_t^2}^{\Phi_t^2}\right\rangle=\text{const} \end{multline} via \eqref{compli2} and \eqref{lnal}.
To find the contribution of the third term in \eqref{zone1}, rewrite it as \begin{equation*} \left\langle {\left(\L^1\nabla_{\L}^1\right)}_{\Phi_p^1}^{\Phi_p^1} {\left(\L^2\nabla_{\L}^2\right)}_{\Phi_t^2}^{\Phi_t^2} \right\rangle- \left\langle {\left(\L^1\nabla_{\L}^1\right)}_{\Phi_p^1}^{\Phi_p^1}{(\L^2)}_{\Phi_t^2}^{\bar L_t^2} {\left(\nabla_{\L}^2\right)}_{\bar L_t^2}^{\Phi_t^2} \right\rangle \end{equation*} and note that the second term equals \begin{equation}\label{3inone} -\left\langle {(\L^1)}_{\Phi_p^1}^{L_p^1} {\left(\nabla_{\L}^1\right)}_{L_p^1}^{\Phi_p^1} {(\L^2)}_{\Phi_t^2}^{\bar L_t^2} {\left(\nabla_{\L}^2\right)}_{\bar L_t^2}^{\Phi_t^2} \right\rangle, \end{equation} since ${\left(\nabla_{\L}^1\right)}_{\bar L_p^1}^{\Phi_p^1}$ vanishes. Further, the block $X_{I_p^1}^{J_p^1}$ is contained completely inside the block $X_{I_t^2}^{J_t^2}$. We denote by $\rho$ the corresponding injection, so ${(\L^1)}_{\Phi_p^1}^{L_p^1}={(\L^2)}_{\Phi_t^2}^{\rho(L_p^1)}$. Therefore, \eqref{3inone} can be written as \[ \left\langle {\left(\nabla_{\L}^1\right)}_{L_p^1}^{\Phi_p^1} {(\L^2)}_{\Phi_t^2}^{\bar L_t^2} {\left(\nabla_{\L}^2\right)}_{\bar L_t^2}^{K_t^2\setminus\Phi_t^2} {(\L^2)}_{K_t^2\setminus\Phi_t^2}^{\rho(L_p^1)} \right\rangle, \] where we used the fact that \[ {\left(\nabla_{\L}^2\right)}_{\bar L_t^2}^{\Phi_t^2} {(\L^2)}_{\Phi_t^2}^{\rho(L_p^1)}+ {\left(\nabla_{\L}^2\right)}_{\bar L_t^2}^{K_t^2\setminus\Phi_t^2} {(\L^2)}_{K_t^2\setminus\Phi_t^2}^{\rho(L_p^1)}= {\left(\nabla_{\L}^2\L^2\right)}_{\bar L_t^2}^{\rho(L_p^1)}=0. 
\] Finally, ${(\L^2)}_{K_t^2\setminus\Phi_t^2}^{\rho(L_p^1)}={(\L^1)}_{K_p^1\setminus\Phi_p^1}^{L_p^1}$, and \[ {(\L^1)}_{K_p^1\setminus\Phi_p^1}^{L_p^1} {\left(\nabla_{\L}^1\right)}_{L_p^1}^{\Phi_p^1}= {\left(\L^1\nabla_{\L}^1\right)}_{K_p^1\setminus\Phi_p^1}^{\Phi_p^1}=0, \] hence \eqref{3inone} vanishes; the contribution in question is therefore given by the same expression as in \eqref{termtwo} and yields the second term in the statement of the lemma.
To find the fourth term in \eqref{zone1} note that for any fixed $t$ satisfying the corresponding conditions we get \begin{multline}\label{bmicinit} B^{\mbox{\tiny\rm II}}_t-B^{\mbox{\tiny\rm III}}_t\\=\left\langle{\left(\nabla_{\L}^1\L^1\right)}_{\Psi_p^1}^{\Psi_p^1} {\left(\nabla_{\L}^2\right)}^{\bar K_{t-1}^2}_{\Psi_t^2}{(\L^2)}^{\Psi_t^2}_{\bar K_{t-1}^2}\right\rangle- \left\langle{\left(\nabla_{\L}^1\L^1\right)}_{\Psi_p^1}^{L_p^1\setminus\Psi_p^1} {\left(\nabla_{\L}^2\right)}^{K_t^2}_{L_{t}^2\setminus\Psi_t^2}{(\L^2)}^{\Psi_t^2}_{K_{t}^2}\right\rangle. \end{multline} Applying \eqref{compli} to the first expression and using the equality \[ {\left(\nabla_{\L}^1\L^1\right)}_{\Psi_p^1}^{L_p^1\setminus\Psi_p^1} {\left(\nabla_{\L}^2\right)}^{K_t^2}_{L_{t}^2\setminus\Psi_t^2}+ {\left(\nabla_{\L}^1\L^1\right)}_{\Psi_p^1}^{\Psi_p^1} {\left(\nabla_{\L}^2\right)}^{K_{t}^2}_{\Psi_t^2}= {\left(\nabla_{\L}^1\L^1\right)}_{\Psi_p^1}^{L_p^1}{\left(\nabla_{\L}^2\right)}^{K_{t}^2}_{L_t^2} \] we get \begin{equation}\label{bmic} B^{\mbox{\tiny\rm II}}_t-B^{\mbox{\tiny\rm III}}_t=\left\langle{\left(\nabla_{\L}^1\L^1\right)}_{\Psi_p^1}^{\Psi_p^1} {\left(\nabla_{\L}^2\L^2\right)}_{\Psi_t^2}^{\Psi_t^2}\right\rangle- \left\langle{\left(\nabla_{\L}^1\L^1\right)}_{\Psi_p^1}^{L_p^1} {\left(\nabla_{\L}^2\right)}^{K_t^2}_{L_{t}^2}{(\L^2)}^{\Psi_t^2}_{K_{t}^2}\right\rangle. \end{equation} Clearly, the first term above is a constant.
Note that $\alpha_p^1>\alpha_t^2$, and hence the block $X_{I_p^1}^{J_p^1}$ is contained completely inside the block $X_{I_t^2}^{J_t^2}$, which means, in particular, that $p>1$. Consider two sequences of blocks \begin{equation}\label{blocks} \{Y_{\bar I_{p-1}^1}^{\bar J_{p-1}^1}, X_{I_{p-1}^1}^{J_{p-1}^1}, Y_{\bar I_{p-2}^1}^{\bar J_{p-2}^1}, \dots\}\quad\text{and} \quad\{Y_{\bar I_{t-1}^2}^{\bar J_{t-1}^2}, X_{I_{t-1}^2}^{J_{t-1}^2}, Y_{\bar I_{t-2}^2}^{\bar J_{t-2}^2}, \dots\}. \end{equation}
There are four possibilities:
(i) there exists a pair of blocks $Y_{\bar I_{p-m}^1}^{\bar J_{p-m}^1}$ and $Y_{\bar I_{t-m}^2}^{\bar J_{t-m}^2}$ such that $\bar J_{p-m}^1=\bar J_{t-m}^2$, $\bar I_{p-m}^1\ne \bar I_{t-m}^2$, and the subsequences of blocks to the left of $Y_{\bar I_{p-m}^1}^{\bar J_{p-m}^1}$ and $Y_{\bar I_{t-m}^2}^{\bar J_{t-m}^2}$ coincide;
(ii) there exists a pair of blocks $X_{I_{p-m}^1}^{J_{p-m}^1}$ and $X_{I_{t-m}^2}^{J_{t-m}^2}$ such that $I_{p-m}^1=I_{t-m}^2$, $J_{p-m}^1\ne J_{t-m}^2$, and the subsequences of blocks to the left of $X_{I_{p-m}^1}^{J_{p-m}^1}$ and $X_{I_{t-m}^2}^{J_{t-m}^2}$ coincide;
(iii) the first sequence is a proper subsequence of the second one;
(iv) the second sequence is a proper subsequence of the first one, or is empty.
{\it Case\/} (i): Clearly, this is possible only if $\bar I_{t-m}^2\subset \bar I_{p-m}^1$, see Fig.~\ref{fig:chainy} where blocks $X_{I_{k}^i}^{J_{k}^i}$ and $Y_{\bar I_{k}^i}^{\bar J_{k}^i}$ are for brevity denoted $X_k^i$ and $Y_k^i$, respectively.
\begin{figure}
\caption{Case (i)}
\label{fig:chainy}
\end{figure}
Denote \begin{equation}\label{thetaxi} \Theta_{r}^i=\bigcup_{j=1}^{m-1}(\bar K_{r-j}^i\cup K_{r-j}^i)\cup \bar K_{r-m}^i, \qquad \Xi_{r}^i=\bigcup_{j=1}^{m-1} (\bar L_{r-j}^i\cup L_{r-j}^i)\cup \bar L_{r-m}^i. \end{equation} Note that the matrix ${(\L^2)}_{\Theta_{t}^2}^{\Xi_{t}^2}$ coincides with a proper submatrix of ${(\L^1)}_{\Theta_{p}^1}^{\Xi_{p}^1}$; we denote the corresponding injection $\sigma$ (it can be considered as an analog of the injection $\sigma$ defined in Section \ref{etaretasec}). Clearly, \begin{equation}\label{interim} {\left(\nabla_{\L}^2\right)}_{L_t^2}^{K_t^2}{(\L^2)}_{K_t^2}^{\Psi_t^2}= {\left(\nabla_{\L}^2\L^2\right)}_{L_t^2}^{\Psi_t^2}- {\left(\nabla_{\L}^2\right)}_{L_t^2}^{\Theta_{t}^2}{(\L^2)}_{\Theta_{t}^2}^{\Psi_t^2}. \end{equation} The contribution of the first term in \eqref{interim} to the second term in \eqref{bmic} equals \[ -\left\langle{\left(\nabla_{\L}^1\L^1\right)}_{\Psi_p^1}^{L_p^1}{\left(\nabla_{\L}^2\L^2\right)}_{L_t^2}^{\Psi_t^2}\right\rangle= -\left\langle{\left(\nabla_{\L}^1\L^1\right)}_{\Psi_p^1}^{\Psi_p^1}{\left(\nabla_{\L}^2\L^2\right)}_{\Psi_t^2}^{\Psi_t^2}\right\rangle \] and cancels the contribution of the first term in \eqref{bmic} computed above.
To find the contribution of the second term in \eqref{interim} to the second term in \eqref{bmic} note that \begin{equation}\label{superdec} {\left(\nabla_{\L}^1\L^1\right)}_{\Psi_p^1}^{L_p^1}= {\left(\nabla_{\L}^1\right)}_{\Psi_p^1}^{K_p^1\cup\Theta_{p}^1} {(\L^1)}_{K_p^1\cup\Theta_{p}^1}^{L_p^1}, \end{equation} so the contribution in question equals \begin{equation}\label{crosscases} \left\langle{\left(\nabla_{\L}^2\right)}_{L_t^2}^{\Theta_t^2}{(\L^2)}_{\Theta_t^2}^{\Psi_t^2} {\left(\nabla_{\L}^1\right)}_{\Psi_p^1}^{K_p^1\cup\Theta_{p}^1}{(\L^1)}_{K_p^1\cup\Theta_{p}^1}^{L_p^1}\right\rangle. \end{equation} Taking into account that ${(\L^2)}_{\Theta_t^2}^{\Psi_t^2}={(\L^1)}_{\sigma(\Theta_t^2)}^{\Psi_p^1}$, ${(\L^2)}_{\Theta_t^2}^{\Xi_t^2\setminus\Psi_t^2}={(\L^1)}_{\sigma(\Theta_t^2)}^{\Xi_p^1\setminus\Psi_p^1}$ and that \begin{equation}\label{twoterm} {(\L^1)}_{\sigma(\Theta_t^2)}^{\Psi_p^1}{\left(\nabla_{\L}^1\right)}_{\Psi_p^1}^{K_p^1\cup\Theta_{p}^1}=
{\left(\L^1\nabla_{\L}^1\right)}_{\sigma(\Theta_t^2)}^{K_p^1\cup\Theta_{p}^1}- {(\L^1)}_{\sigma(\Theta_t^2)}^{\Xi_p^1\setminus\Psi_p^1} {\left(\nabla_{\L}^1\right)}_{\Xi_p^1\setminus\Psi_p^1}^{K_p^1\cup\Theta_p^1}, \end{equation} this contribution can be rewritten as \begin{multline*} \left\langle {\left(\nabla_{\L}^2\right)}_{L_t^2}^{\Theta_t^2} {\left(\L^1\nabla_{\L}^1\right)}_{\sigma(\Theta_t^2)}^{K_p^1\cup\Theta_{p}^1} {(\L^1)}_{K_p^1\cup\Theta_{p}^1}^{L_p^1}\right\rangle\\ -\left\langle {\left(\nabla_{\L}^2\right)}_{L_t^2}^{\Theta_t^2} {(\L^2)}_{\Theta_t^2}^{\Xi_t^2\setminus\Psi_t^2} {\left(\nabla_{\L}^1\right)}_{\Xi_p^1\setminus\Psi_p^1}^{K_p^1\cup\Theta_p^1} {(\L^1)}_{K_p^1\cup\Theta_{p}^1}^{L_p^1}\right\rangle. \end{multline*} Next, by \eqref{lnal}, \[ {\left(\nabla_{\L}^2\right)}_{L_t^2}^{\Theta_t^2} {(\L^2)}_{\Theta_t^2}^{\Xi_t^2\setminus\Psi_t^2}= {\left(\nabla_{\L}^2\L^2\right)}_{L_t^2}^{\Xi_t^2\setminus\Psi_t^2}=0, \] since the columns $L_t^2$ lie to the left of $\Xi_t^2\setminus\Psi_t^2$.
Finally, by \eqref{lnal}, \[ {\left(\L^1\nabla_{\L}^1\right)}_{\sigma(\Theta_t^2)}^{K_p^1\cup\Theta_{p}^1}= \begin{bmatrix} 0& \mathbf 1 & 0\end{bmatrix}, \] where the unit block occupies the rows and the columns $\sigma(\Theta_t^2)$. Therefore, the remaining contribution equals \[ \left\langle {\left(\nabla_{\L}^2\right)}_{L_t^2}^{\Theta_t^2} {(\L^1)}_{\sigma(\Theta_t^2)}^{L_p^1}\right\rangle= \left\langle {(\L^2)}_{\Theta_t^2}^{L_t^2}{\left(\nabla_{\L}^2\right)}_{L_t^2}^{\Theta_t^2}\right\rangle= \left\langle {(\L^2)}_{\bar K_{t-1}^2}^{L_t^2}{\left(\nabla_{\L}^2\right)}_{L_t^2}^{\bar K_{t-1}^2} \right\rangle, \] which is a constant via Lemma \ref{partrace} and yields the third term in the statement of the lemma.
{\it Case\/} (ii): Clearly, this is possible only if $J_{p-m}^1\subset J_{t-m}^2$, see Fig.~\ref{fig:chainx} where we use the same convention as in Fig.~\ref{fig:chainy}.
\begin{figure}
\caption{Case (ii)}
\label{fig:chainx}
\end{figure}
Let $\Theta_r^i$ and $\Xi_r^i$ be defined by \eqref{thetaxi}. Note that the matrix ${(\L^1)}_{\Theta_{p}^1\cup K_{p-m}^1}^{L_{p-m}^1}$ coincides with a proper submatrix of ${(\L^2)}_{\Theta_{t}^2\cup K_{t-m}^2}^{L_{t-m}^2}$; we denote the corresponding injection $\rho$ (in a sense, it can be considered as an analog of the injection $\rho$ defined in Section \ref{etaletasec}; however, it acts in the opposite direction). Clearly, $\rho(\Theta_p^1\cup K_{p-m}^1)=\Theta_t^2\cup K_{t-m}^2$. Similarly to \eqref{twoterm}, we have \begin{multline*} {(\L^1)}_{\Theta_p^1}^{\Psi_p^1}{\left(\nabla_{\L}^1\right)}_{\Psi_p^1}^{K_p^1\cup\Theta_{p}^1}\\=
{\left(\L^1\nabla_{\L}^1\right)}_{\Theta_p^1}^{K_p^1\cup\Theta_{p}^1}- {(\L^1)}_{\Theta_p^1}^{\Xi_p^1\setminus\Psi_p^1} {\left(\nabla_{\L}^1\right)}_{\Xi_p^1\setminus\Psi_p^1}^{K_p^1\cup\Theta_p^1} -{(\L^1)}_{\Theta_p^1}^{L_{p-m}^1} {\left(\nabla_{\L}^1\right)}_{L_{p-m}^1}^{K_p^1\cup\Theta_p^1}. \end{multline*} The first two terms in the right hand side of this equation are treated exactly as in Case (i) and yield the same contribution. The third term yields \begin{equation*}
-\left\langle {\left(\nabla_{\L}^2\right)}_{L_t^2}^{\Theta_t^2} {(\L^2)}_{\Theta_t^2}^{\rho(L_{p-m}^1)} {\left(\nabla_{\L}^1\right)}_{L_{p-m}^1}^{K_p^1\cup\Theta_p^1} {(\L^1)}_{K_p^1\cup\Theta_{p}^1}^{L_p^1}\right\rangle \end{equation*} since ${(\L^1)}_{\Theta_p^1}^{L_{p-m}^1}={(\L^2)}_{\Theta_t^2}^{\rho(L_{p-m}^1)}$. To proceed further, note that \[ {\left(\nabla_{\L}^2\right)}_{L_t^2}^{\Theta_t^2} {(\L^2)}_{\Theta_t^2}^{\rho(L_{p-m}^1)}={\left(\nabla_{\L}^2\L^2\right)}_{L_t^2}^{\rho(L_{p-m}^1)}- {\left(\nabla_{\L}^2\right)}_{L_t^2}^{K_{t-m}^2\setminus\Phi_{t-m}^2} {(\L^2)}_{K_{t-m}^2\setminus\Phi_{t-m}^2}^{\rho(L_{p-m}^1)}. \] The first term on the right hand side vanishes, since $\nabla_{{\mathcal L}}{\mathcal L}$ is lower triangular, and columns $L_t^2$ lie to the left of $\rho(L_{p-m}^1)$. The second yields \begin{multline*} \left\langle {\left(\nabla_{\L}^2\right)}_{L_t^2}^{K_{t-m}^2\setminus\Phi_{t-m}^2} {(\L^1)}_{K_{p-m}^1\setminus\Phi_{p-m}^1}^{L_{p-m}^1} {\left(\nabla_{\L}^1\right)}_{L_{p-m}^1}^{K_p^1\cup\Theta_p^1} {(\L^1)}_{K_p^1\cup\Theta_{p}^1}^{L_p^1}\right\rangle\\ =\left\langle {\left(\nabla_{\L}^2\right)}_{L_t^2}^{K_{t-m}^2\setminus\Phi_{t-m}^2} {\left(\L^1\nabla_{\L}^1\right)}_{K_{p-m}^1\setminus\Phi_{p-m}^1}^{K_p^1\cup\Theta_p^1} {(\L^1)}_{K_p^1\cup\Theta_{p}^1}^{L_p^1}\right\rangle \end{multline*} via ${(\L^2)}_{K_{t-m}^2\setminus\Phi_{t-m}^2}^{\rho(L_{p-m}^1)}={(\L^1)}_{K_p^1\cup\Theta_{p}^1}^{L_{p-m}^1}$. Finally, ${\left(\L^1\nabla_{\L}^1\right)}_{K_{p-m}^1\setminus\Phi_{p-m}^1}^{K_p^1\cup\Theta_p^1}$ vanishes, since ${\mathcal L}\nabla_{{\mathcal L}}$ is upper triangular, and rows $K_{p-m}^1\setminus\Phi_{p-m}^1$ lie below $K_p^1\cup\Theta_p^1$.
{\it Case\/} (iii): This case is only possible if the last block in the first sequence is of type $Y$, see Fig.~\ref{fig:chains} on the left. Assuming that this block is $Y_{\bar I_{p-m}^1}^{\bar J_{p-m}^1}$, we proceed exactly as in Case (ii) with $L_{p-m}^1=\varnothing$ and get the same contribution.
\begin{figure}
\caption{Cases (iii) and (iv)}
\label{fig:chains}
\end{figure}
{\it Case\/} (iv): This case is only possible if the last block in the second sequence is of type $X$, see Fig.~\ref{fig:chains} on the right. Assuming that this block is $X_{I_{t-m+1}^2}^{J_{t-m+1}^2}$, we proceed exactly as in Case (i) with $\bar K_{t-m}^2=\varnothing$ and get the same contribution.
To treat the fifth sum in \eqref{zone1}, note that $\alpha_p^1<\alpha_t^2$ implies that the block $X_{I_t^2}^{J_t^2}$ is contained completely inside the block $X_{I_p^1}^{J_p^1}$. Therefore, the injection $\rho$ can be defined as in Section \ref{etaletasec}; moreover, $\rho(\Psi_t^2)=\Psi_p^1$ and $\rho(L_t^2)=L_p^1$, since $\beta_p^1=\beta_t^2$. Consequently, the block $Y_{\bar I_{p-1}^1}^{\bar J_{p-1}^1}$ is contained completely inside the block $Y_{\bar I_{t-1}^2}^{\bar J_{t-1}^2}$, and the injection $\sigma$ can be defined as in Section \ref{etaretasec}.
We proceed similarly to the previous case and arrive at \begin{multline}\label{bmic0} B^{\mbox{\tiny\rm II}}_t-B^{\mbox{\tiny\rm III}}_t+\bar B^{\mbox{\tiny\rm IV}}_t=\left\langle{\left(\nabla_{\L}^1\L^1\right)}_{\Psi_p^1}^{\Psi_p^1} {\left(\nabla_{\L}^2\L^2\right)}_{\Psi_t^2}^{\Psi_t^2}\right\rangle\\ -\left\langle{\left(\nabla_{\L}^1\L^1\right)}_{\Psi_p^1}^{L_p^1} {\left(\nabla_{\L}^2\right)}^{K_t^2}_{L_{t}^2}{(\L^2)}^{\Psi_t^2}_{K_{t}^2}\right\rangle+ \left\langle{\left(\L^1\nabla_{\L}^1\right)}_{\rho(\Phi_t^2)}^{\rho(\Phi_t^2)} {(\L^2)}_{\Phi_t^2}^{L_{t}^2}{\left(\nabla_{\L}^2\right)}_{L_t^2}^{\Phi_t^2}\right\rangle. \end{multline}
Clearly, ${\left(\nabla_{\L}^1\L^1\right)}_{\Psi_p^1}^{L_p^1}={\left(\nabla_{\L}^1\right)}_{\Psi_p^1}^{K_p^1\cup\bar K_{p-1}^1} {(\L^1)}_{K_p^1\cup\bar K_{p-1}^1}^{L_p^1}$, so the second term in \eqref{bmic0} equals \begin{multline}\label{bmic1} -\left\langle{(\L^1)}_{K_p^1\cup\bar K_{p-1}^1}^{L_p^1} {\left(\nabla_{\L}^2\right)}^{K_t^2}_{L_t^2}{(\L^1)}^{\Psi_p^1}_{\rho(K_{t}^2)}{\left(\nabla_{\L}^1\right)}_{\Psi_p^1}^{K_p^1\cup\bar K_{p-1}^1} \right\rangle\\ =\left\langle{(\L^1)}_{K_p^1\cup\bar K_{p-1}^1}^{L_p^1} {\left(\nabla_{\L}^2\right)}^{K_t^2}_{L_{t}^2}{(\L^1)}_{\rho(K_t^2)}^{L_p^1\setminus\Psi_p^1} {\left(\nabla_{\L}^1\right)}_{L_p^1\setminus\Psi_p^1}^{K_p^1\cup\bar K_{p-1}^1} \right\rangle\\ -\left\langle{(\L^1)}_{K_p^1\cup\bar K_{p-1}^1}^{L_p^1} {\left(\nabla_{\L}^2\right)}^{K_t^2}_{L_{t}^2}{\left(\L^1\nabla_{\L}^1\right)}_{\rho(K_t^2)}^{K_p^1\cup\bar K_{p-1}^1} \right\rangle. \end{multline}
The first term in \eqref{bmic1} equals \begin{multline*} \left\langle{(\L^1)}_{K_p^1\cup\bar K_{p-1}^1}^{L_p^1} {\left(\nabla_{\L}^2\right)}^{K_t^2}_{L_{t}^2}{(\L^2)}_{K_t^2}^{L_t^2\setminus\Psi_t^2} {\left(\nabla_{\L}^1\right)}_{L_p^1\setminus\Psi_p^1}^{K_p^1\cup\bar K_{p-1}^1} \right\rangle\\ =\left\langle{\left(\nabla_{\L}^2\L^2\right)}^{L_t^2\setminus\Psi_t^2}_{L_{t}^2} {\left(\nabla_{\L}^1\L^1\right)}_{L_p^1\setminus\Psi_p^1}^{L_p^1}\right\rangle =\left\langle{\left(\nabla_{\L}^2\L^2\right)}^{L_t^2\setminus\Psi_t^2}_{L_{t}^2\setminus\Psi_t^2} {\left(\nabla_{\L}^1\L^1\right)}_{L_p^1\setminus\Psi_p^1}^{L_p^1\setminus\Psi_p^1}\right\rangle =\text{const}, \end{multline*} which together with the contribution of the first term in \eqref{bmic0} yields the fourth term in the statement of the lemma for $\alpha_t^2>\alpha_p^1$.
By~\eqref{lnal}, the matrix ${\left(\L^1\nabla_{\L}^1\right)}_{\rho(K_t^2)}^{K_p^1\setminus\rho(K_t^2)}$ vanishes. Next, we use the injection $\sigma$ mentioned above to write ${(\L^1)}_{\rho(K_t^2)\cup\bar K_{p-1}^1}^{L_p^1}= {(\L^2)}_{K_t^2\cup \sigma(\bar K_{p-1}^1)}^{L_t^2}$, and hence the second term in \eqref{bmic1} can be written as \begin{multline}\label{bmic2} -\left\langle{\left(\L^1\nabla_{\L}^1\right)}_{\rho(K_t^2)}^{\rho(K_t^2)\cup\bar K_{p-1}^1} {(\L^2)}_{K_t^2\cup \sigma(\bar K_{p-1}^1)}^{L_t^2} {\left(\nabla_{\L}^2\right)}^{K_t^2}_{L_{t}^2}\right\rangle\\ =-\left\langle{\left(\L^1\nabla_{\L}^1\right)}_{\rho(K_t^2)}^{\rho(K_t^2)\cup\bar K_{p-1}^1} {\left(\L^2\nabla_{\L}^2\right)}_{K_t^2\cup\sigma(\bar K_{p-1}^1)}^{K_t^2}\right\rangle\\ +\left\langle{\left(\L^1\nabla_{\L}^1\right)}_{\rho(K_t^2)}^{\rho(K_t^2)\cup\bar K_{p-1}^1} {(\L^2)}_{K_t^2\cup \sigma(\bar K_{p-1}^1)}^{\bar L_t^2} {\left(\nabla_{\L}^2\right)}^{K_t^2}_{\bar L_{t}^2}\right\rangle\\ +\left\langle{\left(\L^1\nabla_{\L}^1\right)}_{\rho(K_t^2)}^{\rho(K_t^2)\cup\bar K_{p-1}^1} {(\L^2)}_{K_t^2\cup \sigma(\bar K_{p-1}^1)}^{\bar L_{t-1}^2\setminus\Psi_t^2} {\left(\nabla_{\L}^2\right)}^{K_t^2}_{\bar L_{t-1}^2\setminus\Psi_t^2}\right\rangle. \end{multline}
By \eqref{lnal}, the first term in \eqref{bmic2} equals \begin{equation}\label{bmic21} -\left\langle{\left(\L^1\nabla_{\L}^1\right)}_{\rho(K_t^2)}^{\rho(K_t^2)} {\left(\L^2\nabla_{\L}^2\right)}_{K_t^2}^{K_t^2}\right\rangle=\text{const}. \end{equation}
Recall that the matrix ${(\L^2)}_{(K_t^2\setminus\Phi_t^2)\cup \sigma(\bar K_{p-1}^1)}^{\bar L_t^2}$ vanishes, and so the second term in \eqref{bmic2} can be rewritten as \[ \left\langle{\left(\L^1\nabla_{\L}^1\right)}_{\rho(K_t^2)}^{\rho(\Phi_t^2)}{(\L^2)}_{\Phi_t^2}^{\bar L_t^2} {\left(\nabla_{\L}^2\right)}^{K_t^2}_{\bar L_{t}^2}\right\rangle= \left\langle{\left(\L^1\nabla_{\L}^1\right)}_{\rho(\Phi_t^2)}^{\rho(\Phi_t^2)}{(\L^2)}_{\Phi_t^2}^{\bar L_t^2} {\left(\nabla_{\L}^2\right)}^{\Phi_t^2}_{\bar L_{t}^2}\right\rangle \] by \eqref{lnal}. Taking into account the third term in \eqref{bmic0}, we get exactly the same contribution as in \eqref{motive}, which together with \eqref{bmic21} yields the fifth term in the statement of the lemma for $\alpha_t^2>\alpha_p^1$.
To treat the third term in \eqref{bmic2} note that \[ {\left(\L^1\nabla_{\L}^1\right)}_{\rho(K_t^2)}^{\rho(K_t^2)\cup\bar K_{p-1}^1}= {(\L^1)}_{\rho(K_t^2)}^{L_p^1}{\left(\nabla_{\L}^1\right)}_{L_p^1}^{\rho(K_t^2)\cup\bar K_{p-1}^1} \]
and that the matrix ${(\L^2)}_{K_t^2}^{\bar L_{t-1}^2\setminus\Psi_t^2}$ vanishes. Consequently, the term in question equals \begin{multline*} \left\langle{(\L^1)}_{\rho(K_t^2)}^{L_p^1}{\left(\nabla_{\L}^1\right)}_{L_p^1}^{\rho(K_t^2)\cup\bar K_{p-1}^1} {(\L^2)}_{K_t^2\cup \sigma(\bar K_{p-1}^1)}^{\bar L_{t-1}^2\setminus\Psi_t^2} {\left(\nabla_{\L}^2\right)}^{K_t^2}_{\bar L_{t-1}^2\setminus\Psi_t^2}\right\rangle\\ =\left\langle{(\L^1)}_{\rho(K_t^2)}^{L_p^1}{\left(\nabla_{\L}^1\right)}_{L_p^1}^{\bar K_{p-1}^1} {(\L^1)}_{\bar K_{p-1}^1}^{\bar L_{p-1}^1\setminus\Psi_p^1} {\left(\nabla_{\L}^2\right)}^{K_t^2}_{\bar L_{t-1}^2\setminus\Psi_t^2}\right\rangle, \end{multline*} since ${(\L^2)}_{\sigma(\bar K_{p-1}^1)}^{\bar L_{t-1}^2\setminus\Psi_t^2} ={(\L^1)}_{\bar K_{p-1}^1}^{\bar L_{p-1}^1\setminus\Psi_p^1}$. The obtained expression vanishes since \[ {\left(\nabla_{\L}^1\right)}_{L_p^1}^{\bar K_{p-1}^1} {(\L^1)}_{\bar K_{p-1}^1}^{\bar L_{p-1}^1\setminus\Psi_p^1}={\left(\nabla_{\L}^1\L^1\right)}_{L_p^1}^{\bar L_{p-1}^1\setminus\Psi_p^1} \] vanishes by \eqref{lnal}.
Further, consider the sixth term in \eqref{zone1}. Using \eqref{bmic} we arrive at \begin{multline}\label{bmicmie} B^{\mbox{\tiny\rm II}}_t-B^{\mbox{\tiny\rm III}}_t+\bar B^{\mbox{\tiny\rm II}}_t=\left\langle{\left(\nabla_{\L}^1\L^1\right)}_{\Psi_p^1}^{\Psi_p^1} {\left(\nabla_{\L}^2\L^2\right)}_{\Psi_t^2}^{\Psi_t^2}\right\rangle\\ -\left\langle{\left(\nabla_{\L}^1\L^1\right)}_{\Psi_p^1}^{L_p^1} {\left(\nabla_{\L}^2\right)}^{K_t^2}_{L_{t}^2}{(\L^2)}^{\Psi_t^2}_{K_{t}^2}\right\rangle +\left\langle{\left(\L^1\nabla_{\L}^1\right)}_{\Phi_p^1}^{\Phi_p^1} {(\L^2)}_{\Phi_t^2}^{L_{t}^2}{\left(\nabla_{\L}^2\right)}_{L_{t}^2}^{\Phi_t^2}\right\rangle. \end{multline} Clearly, the first term in \eqref{bmicmie} is a constant.
Note that the blocks $X_{I_p^1}^{J_p^1}$ and $X_{I_t^2}^{J_t^2}$ coincide. Similarly to the analysis above, we consider two nonempty sequences of blocks \eqref{blocks} (the cases $p=1$ or $t=1$ are trivial). We have the same four possibilities as before, and, additionally,
(v) the sequences coincide.
Each one of the possibilities (i)--(iv) is further split into two:
a) the exit point of $X_{I_t^2}^{J_t^2}$ lies below the exit point of $X_{I_p^1}^{J_p^1}$;
b) the exit point of $X_{I_t^2}^{J_t^2}$ lies above the exit point of $X_{I_p^1}^{J_p^1}$.
{\it Case\/} (ia): Clearly, this is possible only if $\bar I_{p-m}^1\subset \bar I_{t-m}^2$, see Fig.~\ref{fig:chainya}.
\begin{figure}
\caption{Case (ia)}
\label{fig:chainya}
\end{figure}
Define $\Theta_r^i$ and $\Xi_r^i$ in the same way as in \eqref{thetaxi}. Using equalities \eqref{superdec} and ${(\L^2)}_{K_t^2}^{\Psi_t^2}={(\L^1)}_{K_p^1}^{\Psi_p^1}$, we rewrite the second term in \eqref{bmicmie} as \begin{multline*} -\left\langle{\left(\nabla_{\L}^2\right)}^{K_t^2}_{L_{t}^2} {\left(\L^1\nabla_{\L}^1\right)}_{K_p^1}^{K_p^1\cup\Theta_p^1} {(\L^1)}_{K_p^1\cup\Theta_p^1}^{L_p^1}\right\rangle\\ +\left\langle{\left(\nabla_{\L}^2\right)}^{K_t^2}_{L_{t}^2} {(\L^1)}_{K_p^1}^{L_p^1\setminus\Psi_p^1} {\left(\nabla_{\L}^1\right)}_{L_p^1\setminus\Psi_p^1}^{K_p^1\cup\Theta_p^1} {(\L^1)}_{K_p^1\cup\Theta_p^1}^{L_p^1}\right\rangle\\ +\left\langle{\left(\nabla_{\L}^2\right)}^{K_t^2}_{L_{t}^2} {(\L^1)}_{K_p^1}^{\bar L_p^1} {\left(\nabla_{\L}^1\right)}_{\bar L_p^1}^{K_p^1\cup\Theta_p^1} {(\L^1)}_{K_p^1\cup\Theta_p^1}^{L_p^1}\right\rangle. \end{multline*} Note that ${(\L^1)}_{K_p^1}^{L_p^1\setminus\Psi_p^1}={(\L^2)}_{K_t^2}^{L_t^2\setminus\Psi_t^2}$ and \begin{equation*} \begin{aligned} {\left(\nabla_{\L}^1\right)}_{L_p^1\setminus\Psi_p^1}^{K_p^1\cup\Theta_p^1} {(\L^1)}_{K_p^1\cup\Theta_p^1}^{L_p^1}&= {\left(\nabla_{\L}^1\L^1\right)}_{L_p^1\setminus\Psi_p^1}^{L_p^1},\\ {\left(\nabla_{\L}^2\right)}^{K_t^2}_{L_{t}^2} {(\L^2)}_{K_t^2}^{L_t^2\setminus\Psi_t^2}&= {\left(\nabla_{\L}^2\L^2\right)}^{L_t^2\setminus\Psi_t^2}_{L_t^2}, \end{aligned} \end{equation*} hence the second term in the expression above equals \[ \left\langle{\left(\nabla_{\L}^1\L^1\right)}_{L_p^1\setminus\Psi_p^1}^{L_p^1}{\left(\nabla_{\L}^2\L^2\right)}^{L_t^2\setminus\Psi_t^2}_{L_t^2}\right\rangle= \left\langle{\left(\nabla_{\L}^1\L^1\right)}_{L_p^1\setminus\Psi_p^1}^{L_p^1\setminus\Psi_p^1} {\left(\nabla_{\L}^2\L^2\right)}^{L_t^2\setminus\Psi_t^2}_{L_t^2\setminus\Psi_t^2}\right\rangle=\text{const}, \] which together with the first term in \eqref{bmicmie} yields the eighth term in the statement of the lemma, as well as the fourth term for $\alpha_t^2=\alpha_p^1$.
Finally, ${\left(\nabla_{\L}^1\right)}_{\bar L_p^1}^{K_p^1\cup\Theta_p^1}$ vanishes since the columns $\bar L_p^1$ are strictly to the left of $K_p^1\cup\Theta_p^1$, so the third term in the expression above vanishes.
Note that \begin{multline*}
{\left(\L^1\nabla_{\L}^1\right)}_{K_p^1}^{K_p^1\cup\Theta_p^1} {(\L^1)}_{K_p^1\cup\Theta_p^1}^{L_p^1}\\= {\left(\L^1\nabla_{\L}^1\right)}_{K_p^1}^{\Phi_p^1} {(\L^1)}_{\Phi_p^1}^{L_p^1}+ {\left(\L^1\nabla_{\L}^1\right)}_{K_p^1}^{K_p^1\setminus\Phi_p^1} {(\L^1)}_{K_p^1\setminus\Phi_p^1}^{L_p^1}+ {\left(\L^1\nabla_{\L}^1\right)}_{K_p^1}^{\Theta_p^1} {(\L^1)}_{\Theta_p^1}^{L_p^1}. \end{multline*} By \eqref{lnal}, ${\left(\L^1\nabla_{\L}^1\right)}_{K_p^1\setminus\Phi_p^1}^{\Phi_p^1}$ vanishes; besides, ${(\L^2)}_{\Phi_t^2}^{L_{t}^2}={(\L^1)}_{\Phi_p^1}^{L_{p}^1}$. Hence \begin{equation*} -\left\langle{\left(\L^1\nabla_{\L}^1\right)}_{K_p^1}^{\Phi_p^1} {(\L^1)}_{\Phi_p^1}^{L_{p}^1}{\left(\nabla_{\L}^2\right)}_{L_{t}^2}^{K_t^2}\right\rangle =-\left\langle{\left(\L^1\nabla_{\L}^1\right)}_{\Phi_p^1}^{\Phi_p^1} {(\L^2)}_{\Phi_t^2}^{L_{t}^2}{\left(\nabla_{\L}^2\right)}_{L_{t}^2}^{\Phi_t^2}\right\rangle, \end{equation*} that is, the first term in the equation above cancels the third term in \eqref{bmicmie}. Further, ${(\L^1)}_{K_p^1\setminus\Phi_p^1}^{L_{p}^1}={(\L^2)}_{K_t^2\setminus\Phi_t^2}^{L_{t}^2}$ and \[ {(\L^2)}_{K_t^2\setminus\Phi_t^2}^{L_{t}^2}{\left(\nabla_{\L}^2\right)}_{L_{t}^2}^{K_t^2}= {\left(\L^2\nabla_{\L}^2\right)}_{K_t^2\setminus\Phi_t^2}^{K_t^2}, \] and hence \begin{multline}\label{foroneb} -\left\langle{\left(\L^1\nabla_{\L}^1\right)}_{K_p^1}^{K_p^1\setminus\Phi_p^1} {(\L^1)}_{K_p^1\setminus\Phi_p^1}^{L_{p}^1}{\left(\nabla_{\L}^2\right)}_{L_{t}^2}^{K_t^2}\right\rangle =-\left\langle{\left(\L^1\nabla_{\L}^1\right)}_{K_p^1}^{K_p^1\setminus\Phi_p^1} {\left(\L^2\nabla_{\L}^2\right)}_{K_{t}^2\setminus\Phi_t^2}^{K_t^2}\right\rangle\\ =-\left\langle{\left(\L^1\nabla_{\L}^1\right)}_{K_p^1\setminus\Phi_p^1}^{K_p^1\setminus\Phi_p^1} {\left(\L^2\nabla_{\L}^2\right)}_{K_{t}^2\setminus\Phi_t^2}^{K_t^2\setminus\Phi_t^2}\right\rangle =\text{const}. \end{multline}
The remaining contribution of \eqref{bmicmie} equals \begin{equation}\label{fortwoa} -\left\langle{\left(\L^1\nabla_{\L}^1\right)}_{K_p^1}^{\Theta_p^1} {(\L^1)}_{\Theta_p^1}^{L_{p}^1}{\left(\nabla_{\L}^2\right)}_{L_{t}^2}^{K_t^2}\right\rangle= -\left\langle{\left(\L^1\nabla_{\L}^1\right)}_{\Phi_p^1}^{\Theta_p^1} {(\L^1)}_{\Theta_p^1}^{\Psi_{p}^1}{\left(\nabla_{\L}^2\right)}_{\Psi_{t}^2}^{\Phi_t^2}\right\rangle, \end{equation} since the deleted columns and rows of ${\mathcal L}^1\nabla_{{\mathcal L}}^1$ and ${\mathcal L}^1$ vanish.
Next we use the injection $\sigma$ (similar to the one used in Case (i) above but acting in the opposite direction) to rewrite ${(\L^1)}_{\Theta_p^1}^{\Psi_{p}^1}={(\L^2)}_{\sigma(\Theta_p^1)}^{\Psi_t^2}$, and to write \[ {(\L^2)}_{\sigma(\Theta_p^1)}^{\Psi_t^2}{\left(\nabla_{\L}^2\right)}_{\Psi_{t}^2}^{\Phi_t^2}= {\left(\L^2\nabla_{\L}^2\right)}_{\sigma(\Theta_p^1)}^{\Phi_t^2}- {(\L^2)}_{\sigma(\Theta_p^1)}^{\Xi_t^2\setminus\Psi_t^2} {\left(\nabla_{\L}^2\right)}_{\Xi_t^2\setminus\Psi_{t}^2}^{\Phi_t^2}, \] which transforms the above contribution into \[ -\left\langle{\left(\L^1\nabla_{\L}^1\right)}_{\Phi_p^1}^{\Theta_p^1} {\left(\L^2\nabla_{\L}^2\right)}_{\sigma(\Theta_p^1)}^{\Phi_t^2}\right\rangle+ \left\langle{\left(\L^1\nabla_{\L}^1\right)}_{\Phi_p^1}^{\Theta_p^1} {(\L^2)}_{\sigma(\Theta_p^1)}^{\Xi_t^2\setminus\Psi_t^2} {\left(\nabla_{\L}^2\right)}_{\Xi_t^2\setminus\Psi_{t}^2}^{\Phi_t^2}\right\rangle. \] Clearly, the first term above vanishes since ${\left(\L^2\nabla_{\L}^2\right)}_{\sigma(\Theta_p^1)}^{\Phi_t^2}=0$. The second one vanishes since \begin{equation}\label{fortwoaa} {\left(\L^1\nabla_{\L}^1\right)}_{\Phi_p^1}^{\Theta_p^1}= {(\L^1)}_{\Phi_p^1}^{L_p^1\cup\bar L_p^1} {\left(\nabla_{\L}^1\right)}_{L_p^1\cup\bar L_p^1}^{\Theta_p^1}, \end{equation} ${(\L^2)}_{\sigma(\Theta_p^1)}^{\Xi_t^2\setminus\Psi_t^2}={(\L^1)}_{\Theta_p^1}^{\Xi_p^1\setminus\Psi_p^1}$ and \[ {\left(\nabla_{\L}^1\right)}_{L_p^1\cup\bar L_p^1}^{\Theta_p^1} {(\L^1)}_{\Theta_p^1}^{\Xi_p^1\setminus\Psi_p^1}= {\left(\nabla_{\L}^1\L^1\right)}_{L_p^1\cup\bar L_p^1}^{\Xi_p^1\setminus\Psi_p^1}=0. \]
{\it Case\/} (ib): Clearly, this can be possible only if $\bar I_{t-m}^2\subset \bar I_{p-m}^1$, cf.~Fig.~\ref{fig:chainy}. We proceed exactly as in Case (ia), retaining the definitions of $\Theta_r$ and $\Xi_r$, and arrive at \eqref{fortwoa}. As a result, we obtain two contributions similar to those obtained in Case (ia): one is similar to the eighth term in the statement of the lemma and is given by \begin{equation}\label{tbdes} {\sum_{{\beta_p^1=\beta_t^2}\atop{\alpha_p^1=\alpha_t^2}}}^{\rm a} \left\langle{\left(\nabla_{\L}^1\L^1\right)}_{L_p^1}^{L_p^1}{\left(\nabla_{\L}^2\L^2\right)}_{L_t^2}^{L_t^2}\right\rangle, \end{equation} while the other together with \eqref{foroneb} yields the fifth term in the statement of the lemma for $\alpha_t^2=\alpha_p^1$.
Next, we note that ${\left(\L^1\nabla_{\L}^1\right)}_{\Phi_p^1}^{\Theta_p^1}={(\L^1)}_{\Phi_p^1}^{L_p^1}{\left(\nabla_{\L}^1\right)}_{L_p^1}^{\Theta_p^1}$, since ${\left(\nabla_{\L}^1\right)}_{\bar L_p^1}^{\Theta_p^1}=0$. Applying ${(\L^1)}_{\Phi_p^1}^{L_p^1}={(\L^2)}_{\Phi_t^2}^{L_t^2}$, we arrive at \[ -\left\langle{\left(\nabla_{\L}^1\right)}_{L_p^1}^{\Theta_p^1}{(\L^1)}_{\Theta_p^1}^{\Psi_p^1}{\left(\nabla_{\L}^2\right)}_{\Psi_t^2}^{\Phi_t^2} {(\L^2)}_{\Phi_t^2}^{L_t^2}\right\rangle. \] Note that \begin{equation}\label{kern} {\left(\nabla_{\L}^2\right)}_{\Psi_t^2}^{\Phi_t^2}{(\L^2)}_{\Phi_t^2}^{L_t^2}={\left(\nabla_{\L}^2\L^2\right)}_{\Psi_t^2}^{L_t^2}- {\left(\nabla_{\L}^2\right)}_{\Psi_t^2}^{K_t^2\setminus\Phi_t^2}{(\L^2)}_{K_t^2\setminus\Phi_t^2}^{L_t^2} -{\left(\nabla_{\L}^2\right)}_{\Psi_t^2}^{\Theta_t^2}{(\L^2)}_{\Theta_t^2}^{L_t^2}. \end{equation}
To treat the first term in \eqref{kern}, we use an analog of \eqref{compli} and get \[ -\left\langle {\left(\nabla_{\L}^1\L^1\right)}_{L_p^1}^{\Psi_p^1} {\left(\nabla_{\L}^2\L^2\right)}_{\Psi_t^2}^{L_t^2} \right\rangle+ \left\langle {\left(\nabla_{\L}^1\right)}_{L_p^1}^{K_p^1}{(\L^1)}_{K_p^1}^{\Psi_p^1} {\left(\nabla_{\L}^2\L^2\right)}_{\Psi_t^2}^{L_t^2}\right\rangle. \] Clearly, the first term above equals \begin{equation}\label{tbdes1} -\left\langle {\left(\nabla_{\L}^1\L^1\right)}_{\Psi_p^1}^{\Psi_p^1} {\left(\nabla_{\L}^2\L^2\right)}_{\Psi_t^2}^{\Psi_t^2} \right\rangle=\text{const}. \end{equation}
The second term above can be rewritten as \[ \left\langle {\left(\nabla_{\L}^1\right)}_{L_p^1}^{K_p^1}{(\L^2)}_{K_t^2}^{\Psi_t^2}{\left(\nabla_{\L}^2\right)}_{\Psi_t^2}^{K_t^2\cup\Theta_t^2} {(\L^2)}_{K_t^2\cup\Theta_t^2}^{L_t^2}\right\rangle. \] Next, we write \begin{equation}\label{kern1} {(\L^2)}_{K_t^2}^{\Psi_t^2}{\left(\nabla_{\L}^2\right)}_{\Psi_t^2}^{K_t^2\cup\Theta_t^2}= {\left(\L^2\nabla_{\L}^2\right)}_{K_t^2}^{K_t^2\cup\Theta_t^2}-{(\L^2)}_{K_t^2}^{L_t^2\setminus\Psi_t^2} {\left(\nabla_{\L}^2\right)}_{L_t^2\setminus\Psi_t^2}^{K_t^2\cup\Theta_t^2}- {(\L^2)}_{K_t^2}^{\bar L_t^2}{\left(\nabla_{\L}^2\right)}_{\bar L_t^2}^{K_t^2\cup\Theta_t^2}. \end{equation} The contribution of the first term in \eqref{kern1} can be written as \begin{multline*} \left\langle {\left(\L^2\nabla_{\L}^2\right)}_{K_t^2}^{K_t^2\cup\Theta_t^2}{(\L^1)}_{K_p^1\cup\sigma(\Theta_t^2)}^{L_p^1} {\left(\nabla_{\L}^1\right)}_{L_p^1}^{K_p^1}\right\rangle\\ =-\left\langle {\left(\L^2\nabla_{\L}^2\right)}_{K_t^2}^{K_t^2\cup\Theta_t^2} {(\L^1)}_{K_p^1\cup\sigma(\Theta_t^2)}^{\Xi_p^1\setminus\Psi_p^1} {\left(\nabla_{\L}^1\right)}_{\Xi_p^1\setminus\Psi_p^1}^{K_p^1}\right\rangle\\ +\left\langle {\left(\L^2\nabla_{\L}^2\right)}_{K_t^2}^{K_t^2\cup\Theta_t^2} {\left(\L^1\nabla_{\L}^1\right)}_{K_p^1\cup\sigma(\Theta_t^2)}^{K_p^1}\right\rangle, \end{multline*} where injection $\sigma$ is defined as in Case (i) above. The second term above equals \[
\left\langle {\left(\L^2\nabla_{\L}^2\right)}_{K_t^2}^{K_t^2}{\left(\L^1\nabla_{\L}^1\right)}_{K_p^1}^{K_p^1}\right\rangle=\text{const}, \] and yields the seventh term in the statement of the lemma, while the first term equals \[
-\left\langle {(\L^2)}_{K_t^2}^{L_t^2\cup \bar L_t^2}{\left(\nabla_{\L}^2\right)}_{L_t^2\cup \bar L_t^2}^{K_t^2\cup\Theta_t^2} {(\L^2)}_{K_t^2\cup\Theta_t^2}^{\Xi_t^2\setminus\Psi_t^2} {\left(\nabla_{\L}^1\right)}_{\Xi_p^1\setminus\Psi_p^1}^{K_p^1}\right\rangle \] and vanishes, since \[ {\left(\nabla_{\L}^2\right)}_{L_t^2\cup \bar L_t^2}^{K_t^2\cup\Theta_t^2} {(\L^2)}_{K_t^2\cup\Theta_t^2}^{\Xi_t^2\setminus\Psi_t^2}= {\left(\nabla_{\L}^2\L^2\right)}_{L_t^2\cup \bar L_t^2}^{\Xi_t^2\setminus\Psi_t^2}=0 \] by \eqref{lnal}.
The contribution of the second term in \eqref{kern1} equals \begin{multline*} -\left\langle {\left(\nabla_{\L}^1\right)}_{L_p^1}^{K_p^1}{(\L^1)}_{K_p^1}^{L_p^1\setminus\Psi_p^1} {\left(\nabla_{\L}^2\right)}_{L_t^2\setminus\Psi_t^2}^{K_t^2\cup\Theta_t^2} {(\L^2)}_{K_t^2\cup\Theta_t^2}^{L_t^2}\right\rangle\\ =-\left\langle {\left(\nabla_{\L}^1\L^1\right)}_{L_p^1}^{L_p^1\setminus\Psi_p^1} {\left(\nabla_{\L}^2\L^2\right)}_{L_t^2\setminus\Psi_t^2}^{L_t^2}\right\rangle =-\left\langle {\left(\nabla_{\L}^1\L^1\right)}_{L_p^1\setminus\Psi_p^1}^{L_p^1\setminus\Psi_p^1} {\left(\nabla_{\L}^2\L^2\right)}_{L_t^2\setminus\Psi_t^2}^{L_t^2\setminus\Psi_t^2}\right\rangle =\text{const} \end{multline*} and together with \eqref{tbdes1} cancels the contribution of \eqref{tbdes}.
The contribution of the third term in \eqref{kern1} equals \[ -\left\langle {\left(\nabla_{\L}^1\right)}_{L_p^1}^{K_p^1}{(\L^2)}_{K_t^2}^{\bar L_t^2} {\left(\nabla_{\L}^2\right)}_{\bar L_t^2}^{K_t^2\cup\Theta_t^2}{(\L^2)}_{K_t^2\cup\Theta_t^2}^{L_t^2}\right\rangle \] and vanishes, since \[ {\left(\nabla_{\L}^2\right)}_{\bar L_t^2}^{K_t^2\cup\Theta_t^2}{(\L^2)}_{K_t^2\cup\Theta_t^2}^{L_t^2}= {\left(\nabla_{\L}^2\L^2\right)}_{\bar L_t^2}^{L_t^2}=0 \] by \eqref{lnal}.
The contribution of the second term in \eqref{kern} equals \[ \left\langle {(\L^1)}_{K_p^1\setminus\Phi_p^1}^{L_p^1} {\left(\nabla_{\L}^1\right)}_{L_p^1}^{\Theta_p^1}{(\L^1)}_{\Theta_p^1}^{\Psi_p^1} {\left(\nabla_{\L}^2\right)}_{\Psi_t^2}^{K_t^2\setminus\Phi_t^2}\right\rangle \] and vanishes, since \[ {(\L^1)}_{K_p^1\setminus\Phi_p^1}^{L_p^1} {\left(\nabla_{\L}^1\right)}_{L_p^1}^{\Theta_p^1}={\left(\L^1\nabla_{\L}^1\right)}_{K_p^1\setminus\Phi_p^1}^{\Theta_p^1}=0; \] the latter equality follows from the fact that ${\left(\L^1\nabla_{\L}^1\right)}_{(K_p^1\setminus\Phi_p^1)\cup\Theta_p^1}^{(K_p^1\setminus\Phi_p^1)\cup\Theta_p^1}=\mathbf 1$.
The contribution of the third term in \eqref{kern} equals \[ \left\langle {(\L^1)}_{\sigma(\Theta_t^2)}^{L_p^1}{\left(\nabla_{\L}^1\right)}_{L_p^1}^{\Theta_p^1} {(\L^1)}_{\Theta_p^1}^{\Psi_p^1}{\left(\nabla_{\L}^2\right)}_{\Psi_t^2}^{\Theta_t^2}\right\rangle \] via ${(\L^2)}_{\Theta_t^2}^{L_t^2}={(\L^1)}_{\sigma(\Theta_t^2)}^{L_p^1}$. Note that \[ {(\L^1)}_{\sigma(\Theta_t^2)}^{L_p^1}{\left(\nabla_{\L}^1\right)}_{L_p^1}^{\Theta_p^1}= {\left(\L^1\nabla_{\L}^1\right)}_{\sigma(\Theta_t^2)}^{\Theta_p^1}= \begin{bmatrix} \mathbf 1 & 0 \end{bmatrix}, \] and hence ${(\L^1)}_{\sigma(\Theta_t^2)}^{L_p^1}{\left(\nabla_{\L}^1\right)}_{L_p^1}^{\Theta_p^1} {(\L^1)}_{\Theta_p^1}^{\Psi_p^1}={(\L^2)}_{\Theta_t^2}^{\Psi_t^2}$. Consequently, the contribution in question equals \[ -\left\langle {(\L^2)}_{\Theta_t^2}^{\Psi_t^2}{\left(\nabla_{\L}^2\right)}_{\Psi_t^2}^{\Theta_t^2}\right\rangle= -\left\langle {(\L^2)}_{\bar K_{t-1}^2}^{\Psi_t^2}{\left(\nabla_{\L}^2\right)}_{\Psi_t^2}^{\bar K_{t-1}^2}\right\rangle, \] which is a constant by Lemma \ref{partrace}, yielding the sixth term in the statement of the lemma.
{\it Case\/} (iia): Clearly, this can be possible only if $J_{t-m}^2\subset J_{p-m}^1$, see Fig.~\ref{fig:chainxa}.
\begin{figure}
\caption{Case (iia)}
\label{fig:chainxa}
\end{figure}
We proceed exactly as in Case (ia), retaining the definitions of $\Theta_r^i$ and $\Xi_r^i$, and arrive at \eqref{fortwoa}. Next, we apply ${(\L^1)}_{\Theta_p^1}^{\Psi_{p}^1}={(\L^2)}_{\Theta_t^2}^{\Psi_t^2}$, and note that \begin{equation*} {(\L^2)}_{\Theta_t^2}^{\Psi_t^2}{\left(\nabla_{\L}^2\right)}_{\Psi_{t}^2}^{\Phi_t^2}= {\left(\L^2\nabla_{\L}^2\right)}_{\Theta_t^2}^{\Phi_t^2}- {(\L^2)}_{\Theta_t^2}^{\Xi_t^2\setminus\Psi_t^2} {\left(\nabla_{\L}^2\right)}_{\Xi_t^2\setminus\Psi_{t}^2}^{\Phi_t^2}- {(\L^2)}_{\Theta_t^2}^{L_{t-m}^2} {\left(\nabla_{\L}^2\right)}_{L_{t-m}^2}^{\Phi_t^2}. \end{equation*} Consequently, \eqref{fortwoa} can be written as a sum of three terms. The first two are treated exactly as in Case (ia) and yield the same contribution. With the help of \eqref{fortwoaa}, the third term can be rewritten as \[ \left\langle{(\L^1)}_{\Phi_p^1}^{L_p^1\cup \bar L_p^1} {\left(\nabla_{\L}^1\right)}_{L_p^1\cup \bar L_p^1}^{\Theta_p^1} {(\L^2)}_{\Theta_t^2}^{L_{t-m}^2}{\left(\nabla_{\L}^2\right)}_{L_{t-m}^2}^{\Phi_t^2}\right\rangle. \] Next, we use the injection $\rho$ (similar to the one defined in Section~\ref{etaletasec}) to write ${(\L^2)}_{\Theta_t^2}^{L_{t-m}^2}= {(\L^1)}_{\Theta_p^1}^{\rho(L_{t-m}^2)}$, which together with \[ {\left(\nabla_{\L}^1\right)}_{L_p^1\cup \bar L_p^1}^{\Theta_p^1}{(\L^1)}_{\Theta_p^1}^{\rho(L_{t-m}^2)}+ {\left(\nabla_{\L}^1\right)}_{L_p^1\cup \bar L_p^1}^{K_{p-m}^1\setminus\Phi_{p-m}^1} {(\L^1)}_{K_{p-m}^1\setminus\Phi_{p-m}^1}^{\rho(L_{t-m}^2)}= {\left(\nabla_{\L}^1\L^1\right)}_{L_p^1\cup \bar L_p^1}^{\rho(L_{t-m}^2)}=0 \] transforms the third term into \[ -\left\langle{(\L^1)}_{\Phi_p^1}^{L_p^1\cup \bar L_p^1} {\left(\nabla_{\L}^1\right)}_{L_p^1\cup \bar L_p^1}^{K_{p-m}^1\setminus\Phi_{p-m}^1} {(\L^1)}_{K_{p-m}^1\setminus\Phi_{p-m}^1}^{\rho(L_{t-m}^2)}{\left(\nabla_{\L}^2\right)}_{L_{t-m}^2}^{\Phi_t^2}\right\rangle. 
\] Finally, we use ${(\L^1)}_{K_{p-m}^1\setminus\Phi_{p-m}^1}^{\rho(L_{t-m}^2)}= {(\L^2)}_{K_{t-m}^2\setminus\Phi_{t-m}^2}^{L_{t-m}^2}$ and \[ {(\L^2)}_{K_{t-m}^2\setminus\Phi_{t-m}^2}^{L_{t-m}^2}{\left(\nabla_{\L}^2\right)}_{L_{t-m}^2}^{\Phi_t^2}= {\left(\L^2\nabla_{\L}^2\right)}_{K_{t-m}^2\setminus\Phi_{t-m}^2}^{\Phi_t^2}=0 \] to make sure that the contribution of this term vanishes.
{\it Case\/} (iib): Clearly, this can be possible only if $J_{p-m}^1\subset J_{t-m}^2$, cf.~Fig.~\ref{fig:chainx}. We proceed exactly as in Case (ib), with the only difference that the contribution of the first term in \eqref{kern1} contains an additional term \[
\left\langle {(\L^2)}_{K_t^2}^{L_t^2\cup\bar L_t^2}{\left(\nabla_{\L}^2\right)}_{L_t^2\cup\bar L_t^2}^{K_t^2\cup\Theta_t^2} {(\L^1)}_{K_p^1\cup\Theta_p^1}^{L_{p-m}^1}{\left(\nabla_{\L}^1\right)}_{L_{p-m}^1}^{K_p^1}\right\rangle, \] which vanishes since ${(\L^1)}_{K_p^1\cup\Theta_p^1}^{L_{p-m}^1}={(\L^2)}_{K_t^2\cup\Theta_t^2}^{\rho(L_{p-m}^1)}$ and \[ {\left(\nabla_{\L}^2\right)}_{L_t^2\cup\bar L_t^2}^{K_t^2\cup\Theta_t^2}{(\L^2)}_{K_t^2\cup\Theta_t^2}^{\rho(L_{p-m}^1)}= {\left(\nabla_{\L}^2\L^2\right)}_{L_t^2\cup\bar L_t^2}^{\rho(L_{p-m}^1)}=0. \]
{\it Case\/} (iiia): This case is only possible if the last block in the first sequence is of type $X$, see Fig.~\ref{fig:chainsa} on the right. Assuming that this block is $X_{I_{p-m+1}^1}^{J_{p-m+1}^1}$, we proceed exactly as in Case (ia) with $\bar K_{p-m}^1=\varnothing$ and get the same contribution.
\begin{figure}
\caption{Cases (iiia) and (iva)}
\label{fig:chainsa}
\end{figure}
{\it Case\/} (iiib): This case is only possible if the last block in the first sequence is of type $Y$, cf.~Fig.~\ref{fig:chains}. Assuming that this block is $Y_{\bar I_{p-m}^1}^{\bar J_{p-m}^1}$, we proceed exactly as in Case (iib) with $L_{p-m}^1=\varnothing$ and get the same contribution.
{\it Case\/} (iva): This case is only possible if the last block in the second sequence is of type $Y$, see Fig.~\ref{fig:chainsa} on the left. Assuming that this block is $Y_{\bar I_{t-m}^2}^{\bar J_{t-m}^2}$, we proceed exactly as in Case (iia) with $L_{t-m}^2=\varnothing$ and get the same contribution.
{\it Case\/} (ivb): This case is only possible if the last block in the second sequence is of type $X$, cf.~Fig.~\ref{fig:chains}. Assuming that this block is $X_{I_{t-m+1}^2}^{J_{t-m+1}^2}$, we proceed exactly as in Case (ib) with $\bar K_{t-m}^2=\varnothing$ and get the same contribution.
{\it Case\/} (v): This case is only possible if the exit points of $X_{I_t^2}^{J_t^2}$ and $X_{I_p^1}^{J_p^1}$ coincide. The last block in both sequences is either of type $Y$ or of type $X$. In the former case we proceed as in Case (iva), and in the latter case, as in Case (iiia).
The last two terms in the statement of the lemma are obtained from the last two terms in \eqref{zone1} by taking into account that ${\left(\L^1\nabla_{\L}^1\right)}_{\sigma(\Phi_t^2)}^{\sigma(\Phi_t^2)}$ in the expression \eqref{bad4} for $\bar B^{\mbox{\tiny\rm II}}_t$ and ${\left(\nabla_{\L}^1\L^1\right)}_{\sigma(\Psi_{t+1}^2)}^{\sigma(\Psi_{t+1}^2)}$ in the expression \eqref{bad5} for $\bar B^{\mbox{\tiny\rm IV}}_t$ are unit matrices, since in both cases $\sigma$ is an injection into the block $Y_{I_{p-1}^1}^{J_{p-1}^1}$. The remaining traces are treated in the same way as in \eqref{trace}. \end{proof}
\subsubsection{Case 2: $\hat l^1$ lies in rows $\bar K^1_{p-1}$ and columns $\bar L^1_{p-1}$}\label{case2} Similarly to the previous case, ${\left(\L^1\nabla_{\L}^1\right)}_{\rho(\Phi_t^2)}^{\rho(\Phi_t^2)}$ in the expression \eqref{bad1} for $B^{\mbox{\tiny\rm I}}_t$ in \eqref{etaleta} and in the expression \eqref{bad6} for $B^{\mbox{\tiny\rm IV}}_t$ in \eqref{xinax}, ${\left(\L^1\nabla_{\L}^1\right)}_{\sigma(\Phi_t^2)}^{\sigma(\Phi_t^2)}$ in the expression \eqref{bad4} for $\bar B^{\mbox{\tiny\rm II}}_t$ in the fifth term in \eqref{xinax}, as well as ${\left(\nabla_{\L}^1\L^1\right)}_{\Psi_p^1}^{L_p^1\setminus\Psi_p^1}$ in the expression \eqref{bad1eq} for $B^{\mbox{\tiny\rm III}}_t$ in \eqref{etaleta} vanish. Further, the contributions of $B^{\mbox{\tiny\rm II}}_t$ to \eqref{etaleta} and to \eqref{xinay} cancel each other for any $t$ such that $\beta_p^1>\beta_t^2$, while the contributions of $\bar B^{\mbox{\tiny\rm II}}_t$ to \eqref{etareta} and to \eqref{xinax} cancel each other for any $t$ such that $\bar\alpha_{p-1}^1>\bar\alpha_t^2$. 
Consequently, we arrive at \begin{multline}\label{zone2} \sum \{\bar B^{\mbox{\tiny\rm IV}}_t-\bar B^{\mbox{\tiny\rm I}}_t :\ \bar\alpha_{p-1}^1>\bar\alpha_t^2, \bar\beta_{p-1}^1<\bar\beta_t^2 \} +\sum \{B^{\mbox{\tiny\rm II}}_{t+1}-\bar B^{\mbox{\tiny\rm I}}_{t} :\ \bar\alpha_{p-1}^1>\bar\alpha_t^2, \bar\beta_{p-1}^1=\bar\beta_t^2 \}\\ +\sum \{B^{\mbox{\tiny\rm II}}_{t+1} :\ \bar\alpha_{p-1}^1<\bar\alpha_{t}^2, \bar\beta_{p-1}^1=\bar\beta_t^2 \} +\sum \{\bar B^{\mbox{\tiny\rm II}}_{t}-\bar B^{\mbox{\tiny\rm III}}_{t} :\ \bar\alpha_{p-1}^1=\bar\alpha_{t}^2, \bar\beta_{p-1}^1>\bar\beta_t^2 \}\\ +\sum \{\bar B^{\mbox{\tiny\rm II}}_{t}+\bar B^{\mbox{\tiny\rm IV}}_t-\bar B^{\mbox{\tiny\rm III}}_t :\ \bar\alpha_{p-1}^1=\bar\alpha_t^2, \bar\beta_{p-1}^1<\bar\beta_t^2 \}\\ +\sum \{\bar B^{\mbox{\tiny\rm II}}_{t}+B^{\mbox{\tiny\rm II}}_{t+1}-\bar B^{\mbox{\tiny\rm III}}_{t} :\ \bar\alpha_{p-1}^1=\bar\alpha_{t}^2, \bar\beta_{p-1}^1=\bar\beta_t^2 \}. \end{multline} A direct comparison shows that \eqref{zone2} can be obtained directly from the first six terms of \eqref{zone1} via switching the roles of $B^*_t$ and $\bar B^*_t$, replacing $\beta^*_t$ with $\bar\alpha^*_t$ and $\alpha^*_t$ with $\bar\beta^*_t$, and shifting indices when necessary.
\begin{lemma}\label{zone2lemma} {\rm (i)} Expression \eqref{zone2} is given by \begin{multline*} \sum_{{{\bar\alpha_{t-1}^2<\bar\alpha_{p-1}^1}\atop {\bar\beta_{t-1}^2>\bar\beta_{p-1}^1}}} \left\langle{\left(\nabla_{\L}^1\L^1\right)}^{\sigma(\Psi_t^2)}_{\sigma(\Psi_t^2)}{\left(\nabla_{\L}^2\L^2\right)}^{\Psi_t^2}_{\Psi_t^2}\right\rangle +\sum_{{{\bar\alpha_{t-1}^2\ne\bar\alpha_{p-1}^1}\atop {\bar\beta_{t-1}^2=\bar\beta_{p-1}^1}}} \left\langle{\left(\nabla_{\L}^1\L^1\right)}^{\Psi_p^1}_{\Psi_p^1}{\left(\nabla_{\L}^2\L^2\right)}^{\Psi_t^2}_{\Psi_t^2}\right\rangle\\ +\sum_{{{\bar\alpha_{t-1}^2=\bar\alpha_{p-1}^1}\atop {\bar\beta_{t-1}^2<\bar\beta_{p-1}^1}}} \left\langle{(\L^2)}^{L_{t-1}^2}_{\bar K_{t-1}^2}{\left(\nabla_{\L}^2\right)}^{\bar K_{t-1}^2}_{L_{t-1}^2}\right\rangle +\sum_{{{\bar\alpha_{t-1}^2=\bar\alpha_{p-1}^1}\atop {\bar\beta_{t-1}^2\ge\bar\beta_{p-1}^1}}} \left\langle{\left(\L^1\nabla_{\L}^1\right)}_{\bar K_{p-1}^1}^{\bar K_{p-1}^1}{\left(\L^2\nabla_{\L}^2\right)}_{\bar K_{t-1}^2}^{\bar K_{t-1}^2}\right\rangle \\ -\sum_{{{\bar\alpha_{t-1}^2=\bar\alpha_{p-1}^1}\atop {\bar\beta_{t-1}^2\ge\bar\beta_{p-1}^1}}} \left\langle{\left(\nabla_{\L}^1\L^1\right)}_{\sigma(\bar L_{t-1}^2\setminus\Psi_{t}^2)}^{\sigma(\bar L_{t-1}^2\setminus\Psi_{t}^2)} {\left(\nabla_{\L}^2\L^2\right)}_{\bar L_{t-1}^2\setminus\Psi_{t}^2}^{\bar L_{t-1}^2\setminus\Psi_{t}^2}\right\rangle +{\sum_{{{\bar\alpha_{t-1}^2=\bar\alpha_{p-1}^1}\atop {\bar\beta_{t-1}^2=\bar\beta_{p-1}^1}}}}^{\hskip -9pt{\rm l}} \left\langle{(\L^2)}^{L_{t-1}^2}_{\Phi_{t-1}^2}{\left(\nabla_{\L}^2\right)}^{\Phi_{t-1}^2}_{L_{t-1}^2}\right\rangle\\ +{\sum_{{{\bar\alpha_{t-1}^2=\bar\alpha_{p-1}^1}\atop {\bar\beta_{t-1}^2=\bar\beta_{p-1}^1}}}}^{\hskip -9pt{\rm l}} \left\langle{\left(\nabla_{\L}^1\L^1\right)}_{\bar L_{p-1}^1}^{\bar L_{p-1}^1}{\left(\nabla_{\L}^2\L^2\right)}_{\bar L_{t-1}^2}^{\bar L_{t-1}^2}\right\rangle -{\sum_{{{\bar\alpha_{t-1}^2=\bar\alpha_{p-1}^1}\atop {\bar\beta_{t-1}^2=\bar\beta_{p-1}^1}}}}^{\hskip 
-9pt {\rm l}} \left\langle{\left(\L^1\nabla_{\L}^1\right)}_{\bar K_{p-1}^1}^{\bar K_{p-1}^1}{\left(\L^2\nabla_{\L}^2\right)}_{\bar K_{t-1}^2}^{\bar K_{t-1}^2}\right\rangle, \end{multline*} where $\sum^{\rm l}$ is taken over the cases when the exit point of $Y_{I_{t-1}^2}^{J_{t-1}^2}$ lies to the left of the exit point of $Y_{I_{p-1}^1}^{J_{p-1}^1}$.
{\rm (ii)} Each summand in the expression above is a constant. \end{lemma}
\begin{proof} The contributions of the terms in \eqref{zone2} can be obtained from the computation of the contributions of the corresponding terms in \eqref{zone1} via a formal process, which replaces $K_*$, $L_*$, $\bar K_{*}$, $\bar L_{*}$, $\Phi_*$, $\Psi_*$, $\alpha_*$, $\beta_*$, $\bar\alpha_*$, $\bar\beta_*$ and $\sum^{\rm a}$ by $\bar L_{*-1}$, $\bar K_{*-1}$, $L_{*}$, $K_{*}$, $\Psi_*$, $\Phi_{*-1}$, $\bar\beta_{*-1}$, $\bar\alpha_{*-1}$, $\beta_*$, $\alpha_*$ and $\sum^{\rm l}$, respectively, and interchanges $\rho$ and $\sigma$. Besides, matrix multiplication from the right should be replaced by multiplication from the left, and the upper and lower indices should be interchanged.
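For ease of reference, this substitution dictionary can be displayed as
\begin{equation*}
\begin{array}{llllll}
K_* \mapsto \bar L_{*-1}, & L_* \mapsto \bar K_{*-1}, & \bar K_{*} \mapsto L_{*}, & \bar L_{*} \mapsto K_{*}, & \Phi_* \mapsto \Psi_*, & \Psi_* \mapsto \Phi_{*-1},\\
\alpha_* \mapsto \bar\beta_{*-1}, & \beta_* \mapsto \bar\alpha_{*-1}, & \bar\alpha_* \mapsto \beta_*, & \bar\beta_* \mapsto \alpha_*, & {\sum}^{\rm a} \mapsto {\sum}^{\rm l}, & \rho\leftrightarrow\sigma.
\end{array}
\end{equation*}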
As an example of this formal process, let us consider the computation of the contribution of the fourth term in \eqref{zone2}. First observe that the expression for $B^{\mbox{\tiny\rm II}}_t-B^{\mbox{\tiny\rm III}}_t$ in \eqref{bmicinit} is transformed to \begin{equation*} \left\langle {(\L^2)}_{\Phi_{t-1}^2}^{L_{t-1}^2} {\left(\nabla_{\L}^2\right)}_{L_{t-1}^2}^{\Phi_{t-1}^2} {\left(\L^1\nabla_{\L}^1\right)}_{\Phi_{p-1}^1}^{\Phi_{p-1}^1} \right\rangle -\left\langle {(\L^2)}_{\Phi_{t-1}^2}^{\bar L_{t-1}^2} {\left(\nabla_{\L}^2\right)}_{\bar L_{t-1}^2}^{\bar K_{t-1}^2\setminus\Phi_{t-1}^2} {\left(\L^1\nabla_{\L}^1\right)}^{\Phi_{p-1}^1}_{\bar K_{p-1}^1\setminus\Phi_{p-1}^1} \right\rangle, \end{equation*} which is exactly the expression for $\bar B^{\mbox{\tiny\rm II}}_{t-1}-\bar B^{\mbox{\tiny\rm III}}_{t-1}$ (note that the summation index in the statement of the lemma is shifted by one with respect to the summation index in \eqref{zone2}).
Next, we apply the transformed version of \eqref{compli} (which is identical to \eqref{compli2} with shifted indices) to the first expression above and use the transformed equality \[ {\left(\nabla_{\L}^2\right)}_{\bar L_{t-1}^2}^{\bar K_{t-1}^2\setminus\Phi_{t-1}^2} {\left(\L^1\nabla_{\L}^1\right)}^{\Phi_{p-1}^1}_{\bar K_{p-1}^1\setminus\Phi_{p-1}^1} +{\left(\nabla_{\L}^2\right)}_{\bar L_{t-1}^2}^{\Phi_{t-1}^2} {\left(\L^1\nabla_{\L}^1\right)}^{\Phi_{p-1}^1}_{\Phi_{p-1}^1} ={\left(\nabla_{\L}^2\right)}_{\bar L_{t-1}^2}^{\bar K_{t-1}^2} {\left(\L^1\nabla_{\L}^1\right)}^{\Phi_{p-1}^1}_{\bar K_{p-1}^1} \] to get \begin{equation}\label{bmic3} \bar B^{\mbox{\tiny\rm II}}_{t-1}-\bar B^{\mbox{\tiny\rm III}}_{t-1}=\left\langle{\left(\L^2\nabla_{\L}^2\right)}_{\Phi_{t-1}^2}^{\Phi_{t-1}^2} {\left(\L^1\nabla_{\L}^1\right)}_{\Phi_{p-1}^1}^{\Phi_{p-1}^1} \right\rangle-\left\langle {(\L^2)}_{\Phi_{t-1}^2}^{\bar L_{t-1}^2} {\left(\nabla_{\L}^2\right)}_{\bar L_{t-1}^2}^{\bar K_{t-1}^2} {\left(\nabla_{\L}^1\L^1\right)}^{\Phi_{p-1}^1}_{\bar K_{p-1}^1} \right\rangle, \end{equation} which is the transformed version of \eqref{bmic}. Clearly, the first term above is a constant.
Note that $\bar\beta_{p-1}^1>\bar\beta_{t-1}^2$, which is the transformed version of $\alpha_p^1>\alpha_t^2$ and means that the block $Y_{I_{p-1}^1}^{J_{p-1}^1}$ is contained completely inside the block $Y_{I_{t-1}^2}^{J_{t-1}^2}$. Similarly to Section \ref{case1}, we consider two sequences of blocks \begin{equation*} \{ X_{I_{p-1}^1}^{J_{p-1}^1}, Y_{\bar I_{p-2}^1}^{\bar J_{p-2}^1}, X_{I_{p-2}^1}^{J_{p-2}^1},\dots\}\quad\text{and} \quad\{ X_{I_{t-1}^2}^{J_{t-1}^2}, Y_{\bar I_{t-2}^2}^{\bar J_{t-2}^2}, X_{I_{t-2}^2}^{J_{t-2}^2}, \dots\} \end{equation*} and study the same four cases. Let us consider Case (i) in detail. The analogs of $\Theta_r$ and $\Xi_r$ are \begin{equation*}
\bar\Theta_{r-1}=K_{r-1}\cup\bigcup_{i=2}^{m}(\bar K_{r-i}\cup K_{r-i}), \qquad \bar\Xi_{r-1}=L_{r-1}\cup\bigcup_{i=2}^{m} (\bar L_{r-i}\cup L_{r-i}). \end{equation*} We add the correspondence $\Theta_* \mapsto \bar\Xi_{*-1}$ and $\Xi_*\mapsto \bar\Theta_{*-1}$, which turns the above relations into the transformed version of \eqref{thetaxi}.
Note that the matrix ${(\L^2)}_{\bar\Theta_{t-1}^2}^{\bar\Xi_{t-1}^2}$ coincides with a proper submatrix of ${(\L^1)}_{\bar\Theta_{p-1}^1}^{\bar\Xi_{p-1}^1}$; we denote the corresponding injection by $\rho$. Clearly, \begin{equation}\label{interim3} {(\L^2)}^{\bar L_{t-1}^2}_{\Phi_{t-1}^2}{\left(\nabla_{\L}^2\right)}^{\bar K_{t-1}^2}_{\bar L_{t-1}^2}= {\left(\L^2\nabla_{\L}^2\right)}^{\bar K_{t-1}^2}_{\Phi_{t-1}^2}- {(\L^2)}^{\bar\Xi_{t-1}^2}_{\Phi_{t-1}^2}{\left(\nabla_{\L}^2\right)}^{\bar K_{t-1}^2}_{\bar\Xi_{t-1}^2}, \end{equation} which is the transformed version of \eqref{interim}.
The contribution of the first term in \eqref{interim3} to the second term in \eqref{bmic3} equals \[ -\left\langle{\left(\L^1\nabla_{\L}^1\right)}^{\Phi_{p-1}^1}_{\bar K_{p-1}^1}{\left(\L^2\nabla_{\L}^2\right)}^{\bar K_{t-1}^2}_{\Phi_{t-1}^2}\right\rangle= -\left\langle{\left(\L^1\nabla_{\L}^1\right)}^{\Phi_{p-1}^1}_{\Phi_{p-1}^1}{\left(\L^2\nabla_{\L}^2\right)}_{\Phi_{t-1}^2}^{\Phi_{t-1}^2}\right\rangle \] and cancels the contribution of the first term in \eqref{bmic3} computed above.
To find the contribution of the second term in \eqref{interim3} to the second term in \eqref{bmic3} note that \begin{equation*} {\left(\L^1\nabla_{\L}^1\right)}^{\Phi_{p-1}^1}_{\bar K_{p-1}^1}={(\L^1)}^{\bar L_{p-1}^1\cup\bar\Xi_{p-1}^1}_{\bar K_{p-1}^1} {\left(\nabla_{\L}^1\right)}^{\Phi_{p-1}^1}_{\bar L_{p-1}^1\cup\bar\Xi_{p-1}^1}, \end{equation*} which is the transformed version of \eqref{superdec}, so the contribution in question equals \begin{equation*} \left\langle{(\L^2)}^{\bar\Xi_{t-1}^2}_{\Phi_{t-1}^2}{\left(\nabla_{\L}^2\right)}^{\bar K_{t-1}^2}_{\bar\Xi_{t-1}^2} {(\L^1)}^{\bar L_{p-1}^1\cup\bar\Xi_{p-1}^1}_{\bar K_{p-1}^1} {\left(\nabla_{\L}^1\right)}^{\Phi_{p-1}^1}_{\bar L_{p-1}^1\cup\bar\Xi_{p-1}^1}\right\rangle; \end{equation*} the latter expression is the transformed version of \eqref{crosscases}. Taking into account that ${(\L^2)}^{\bar\Xi_{t-1}^2}_{\Phi_{t-1}^2}={(\L^1)}^{\rho(\bar\Xi_{t-1}^2)}_{\Phi_{p-1}^1}$, ${(\L^2)}^{\bar\Xi_{t-1}^2}_{\bar\Theta_{t-1}^2\setminus\Phi_{t-1}^2}= {(\L^1)}^{\rho(\bar\Xi_{t-1}^2)}_{\bar\Theta_{p-1}^1\setminus\Phi_{p-1}^1}$ and that \begin{equation*} {\left(\nabla_{\L}^1\right)}^{\Phi_{p-1}^1}_{\bar L_{p-1}^1\cup\bar\Xi_{p-1}^1}{(\L^1)}^{\rho(\bar\Xi_{t-1}^2)}_{\Phi_{p-1}^1}=
{\left(\nabla_{\L}^1\L^1\right)}^{\rho(\bar\Xi_{t-1}^2)}_{\bar L_{p-1}^1\cup\bar\Xi_{p-1}^1}- {\left(\nabla_{\L}^1\right)}_{\bar L_{p-1}^1\cup\bar\Xi_{p-1}^1}^{\bar\Theta_{p-1}^1\setminus\Phi_{p-1}^1} {(\L^1)}^{\rho(\bar\Xi_{t-1}^2)}_{\bar\Theta_{p-1}^1\setminus\Phi_{p-1}^1}, \end{equation*} which is the transformed version of \eqref{twoterm}, this contribution can be rewritten as \begin{multline*} \left\langle {(\L^1)}^{\bar L_{p-1}^1\cup\bar\Xi_{p-1}^1}_{\bar K_{p-1}^1} {\left(\nabla_{\L}^1\L^1\right)}^{\rho(\bar\Xi_{t-1}^2)}_{\bar L_{p-1}^1\cup\bar\Xi_{p-1}^1} {\left(\nabla_{\L}^2\right)}^{\bar K_{t-1}^2}_{\bar\Xi_{t-1}^2}\right\rangle\\ -\left\langle {(\L^2)}^{\bar\Xi_{t-1}^2}_{\bar\Theta_{t-1}^2\setminus\Phi_{t-1}^2} {\left(\nabla_{\L}^2\right)}^{\bar K_{t-1}^2}_{\bar\Xi_{t-1}^2} {(\L^1)}^{\bar L_{p-1}^1\cup\bar\Xi_{p-1}^1}_{\bar K_{p-1}^1} {\left(\nabla_{\L}^1\right)}_{\bar L_{p-1}^1\cup\bar\Xi_{p-1}^1}^{\bar\Theta_{p-1}^1\setminus\Phi_{p-1}^1} \right\rangle. \end{multline*} Next, by \eqref{lnal}, \[ {(\L^2)}^{\bar\Xi_{t-1}^2}_{\bar\Theta_{t-1}^2\setminus\Phi_{t-1}^2} {\left(\nabla_{\L}^2\right)}^{\bar K_{t-1}^2}_{\bar\Xi_{t-1}^2}= {\left(\L^2\nabla_{\L}^2\right)}^{\bar K_{t-1}^2}_{\bar\Theta_{t-1}^2\setminus\Phi_{t-1}^2}=0, \] since the rows $\bar K_{t-1}^2$ lie above $\bar\Theta_{t-1}^2\setminus\Phi_{t-1}^2$.
Finally, by \eqref{lnal}, \[ {\left(\nabla_{\L}^1\L^1\right)}^{\rho(\bar\Xi_{t-1}^2)}_{\bar L_{p-1}^1\cup\bar\Xi_{p-1}^1}= \begin{bmatrix} 0\\ \mathbf 1 \\ 0\end{bmatrix}, \] where the unit block occupies the rows and the columns $\rho(\bar\Xi_{t-1}^2)$. Therefore, the remaining contribution equals \[ \left\langle {(\L^1)}^{\rho(\bar\Xi_{t-1}^2)}_{\bar K_{p-1}^1} {\left(\nabla_{\L}^2\right)}^{\bar K_{t-1}^2}_{\bar\Xi_{t-1}^2} \right\rangle= \left\langle {(\L^2)}_{\bar\Xi_{t-1}^2}^{\bar K_{t-1}^2}{\left(\nabla_{\L}^2\right)}_{\bar K_{t-1}^2}^{\bar\Xi_{t-1}^2}\right\rangle= \left\langle {(\L^2)}_{L_{t-1}^2}^{\bar K_{t-1}^2}{\left(\nabla_{\L}^2\right)}_{\bar K_{t-1}^2}^{L_{t-1}^2}\right\rangle, \] which is a constant by Lemma \ref{partrace} and yields the third term in the statement of the lemma.
\end{proof}
\section{The quiver}\label{sec:quiver}
The goal of this section is to prove Theorem~\ref{quiver}.
\subsection{Preliminary considerations} Consider an arbitrary ordering on the set of vertices of the quiver $Q_{\bfG^{{\rm r}},\bfG^{{\rm c}}}$ in which all mutable vertices precede all frozen vertices. Let $B_{\bfG^{{\rm r}},\bfG^{{\rm c}}}$ be the exchange matrix that encodes $Q_{\bfG^{{\rm r}},\bfG^{{\rm c}}}$ under this ordering, and let $\Omega_{\bfG^{{\rm r}},\bfG^{{\rm c}}}$ be the (skew-symmetric) matrix of the constants $\{\log f^1, \log f^2\}$, $f^1, f^2\in F_{\bfG^{{\rm r}},\bfG^{{\rm c}}}$, where $F_{\bfG^{{\rm r}},\bfG^{{\rm c}}}$ is endowed with the same ordering. Then by \cite[Theorem 4.5]{GSVb}, to prove Theorem~\ref{quiver} it suffices to check that \begin{equation*} B_{\bfG^{{\rm r}},\bfG^{{\rm c}}}\Omega_{\bfG^{{\rm r}},\bfG^{{\rm c}}}=\begin{bmatrix}\lambda \mathbf 1 & 0\end{bmatrix} \end{equation*} for some $\lambda\ne0$. In more detail, denote $\omega^{{\hat\imath}\hat\jmath}_{rs}=\{\log f_{rs}, \log f_{{\hat\imath}\hat\jmath}\}$; then the above equation can be rewritten as \begin{equation}\label{bomega} \sum_{(i,j)\to(r,s)}\omega^{{\hat\imath}\hat\jmath}_{rs}- \sum_{(r,s)\to(i,j)}\omega^{{\hat\imath}\hat\jmath}_{rs}= \begin{cases} \lambda\ \quad&\text{for $({\hat\imath},\hat\jmath)=(i,j)$,}\\ 0\ \quad & \text{otherwise} \end{cases} \end{equation} for all pairs $(i,j), ({\hat\imath},\hat\jmath)$ such that $f_{ij}$ is not frozen. By the definition of the quiver $Q_{\bfG^{{\rm r}},\bfG^{{\rm c}}}$ (see Section \ref{thequiver}), a non-frozen vertex can have degree six, five, four, or three. Consider first the case of degree six. All possible neighborhoods of a vertex in this case are shown in Fig.~\ref{fig:ijnei}, Fig.~\ref{fig:1jnei}(a), Fig.~\ref{fig:i1nei}(a), Fig.~\ref{fig:njnei}(a), and Fig.~\ref{fig:innei}(a).
Consequently, the left hand side of \eqref{bomega} for $1<i,j<n$ can be rewritten as \begin{multline}\label{neisum} (\omega^{{\hat\imath}\hat\jmath}_{i-1,j}-\omega^{{\hat\imath}\hat\jmath}_{i,j+1})-(\omega^{{\hat\imath}\hat\jmath}_{i-1,j-1}-\omega^{{\hat\imath}\hat\jmath}_{i,j})- (\omega^{{\hat\imath}\hat\jmath}_{i,j}-\omega^{{\hat\imath}\hat\jmath}_{i+1,j+1})+(\omega^{{\hat\imath}\hat\jmath}_{i,j-1}-\omega^{{\hat\imath}\hat\jmath}_{i+1,j})\\ =\delta_{ij}^1-\delta_{ij}^2-\delta_{ij}^3+\delta_{ij}^4, \end{multline} see Fig.~\ref{fig:ijnei}. In other words, the neighborhood of $(i,j)$ is covered by the union of four pairs of vertices, and the contribution $\delta_{ij}^k$ of each pair is the difference of the corresponding values of $\omega$. More exactly, the first pair consists of the vertices to the north and to the east of $(i,j)$, the second pair consists of the vertex to the north-west of $(i,j)$ and of $(i,j)$ itself, the third pair consists of $(i,j)$ itself and of the vertex to the south-east of $(i,j)$, and the fourth pair consists of the vertices to the west and to the south of $(i,j)$.
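Explicitly, matching the four pairs with the terms of \eqref{neisum} yields
\begin{align*}
\delta_{ij}^1&=\omega^{{\hat\imath}\hat\jmath}_{i-1,j}-\omega^{{\hat\imath}\hat\jmath}_{i,j+1}, &
\delta_{ij}^2&=\omega^{{\hat\imath}\hat\jmath}_{i-1,j-1}-\omega^{{\hat\imath}\hat\jmath}_{i,j},\\
\delta_{ij}^3&=\omega^{{\hat\imath}\hat\jmath}_{i,j}-\omega^{{\hat\imath}\hat\jmath}_{i+1,j+1}, &
\delta_{ij}^4&=\omega^{{\hat\imath}\hat\jmath}_{i,j-1}-\omega^{{\hat\imath}\hat\jmath}_{i+1,j}.
\end{align*}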
It is easy to see that in all other cases of degree six, the left hand side of \eqref{bomega} can be rewritten in a similar way. For example, for $i=1$, an analog of \eqref{neisum} holds with $\delta_{1j}^1=\omega^{{\hat\imath}\hat\jmath}_{n,{\gamma^\ec}^*(j-1)+1}-\omega^{{\hat\imath}\hat\jmath}_{1,j+1}$ and $\delta_{1j}^2=\omega^{{\hat\imath}\hat\jmath}_{n,{\gamma^\ec}^*(j-1)}-\omega^{{\hat\imath}\hat\jmath}_{1j}$, see Fig.~\ref{fig:1jnei}(a).
Further, consider the case of degree five. All possible neighborhoods of a vertex in this case are shown in Fig.~\ref{fig:1jnei}(b), Fig.~\ref{fig:i1nei}(b), Fig.~\ref{fig:njnei}(b,c), Fig.~\ref{fig:innei}(b,c), Fig.~\ref{fig:1nnei}(a), Fig.~\ref{fig:n1nei}(a), and Fig.~\ref{fig:nnnei}(a). Direct inspection of all these cases shows that the lower vertex is missing either in the first pair (Fig.~\ref{fig:1jnei}(b), Fig.~\ref{fig:innei}(c), and Fig.~\ref{fig:1nnei}(a)), or in the third pair (Fig.~\ref{fig:njnei}(b), Fig.~\ref{fig:innei}(b), and Fig.~\ref{fig:nnnei}(a)), or in the fourth pair (Fig.~\ref{fig:i1nei}(b), Fig.~\ref{fig:njnei}(c), and Fig.~\ref{fig:n1nei}(a)). In all these cases the remaining function in a deficient pair is a minor of size one, and hence all the above relations will remain valid if the missing function in the deficient pair is replaced by $f=1$ (understood as a minor of size zero).
Similarly, in the case of degree four there are two deficient pairs (any two of the pairs 1, 3, and 4), and in the case of degree three, all three pairs are deficient. However, adding at most three dummy functions $f=1$ as explained above, we can always rewrite \eqref{bomega} as \begin{equation}\label{normneisum} \Delta_{ij}=\delta_{ij}^1-\delta_{ij}^2-\delta_{ij}^3+\delta_{ij}^4= \begin{cases} \lambda\ \quad&\text{for $({\hat\imath},\hat\jmath)=(i,j)$}\\ 0\ \ \quad& \text{otherwise.} \end{cases} \end{equation}
Equation \eqref{normneisum} can be obtained as the restriction to the diagonal $X=Y$ of a similar equation in the double. Namely, assume that ${\hat\imath}\ne\hat\jmath$, $r\ne s$, and put ${\tt w}_{rs}^{{\hat\imath}\hat\jmath}= \{\log {\tt f}_{rs}, \log {\tt f}_{{\hat\imath}\hat\jmath}\}^D$. If additionally $1<i,j<n$ and $i\ne j, j\pm 1$, we define \begin{align*} {\tt d}_{ij}^1&={\tt w}^{{\hat\imath}\hat\jmath}_{i-1,j}-{\tt w}^{{\hat\imath}\hat\jmath}_{i,j+1}, \qquad {\tt d}_{ij}^2={\tt w}^{{\hat\imath}\hat\jmath}_{i-1,j-1}-{\tt w}^{{\hat\imath}\hat\jmath}_{i,j},\\ {\tt d}_{ij}^3&={\tt w}^{{\hat\imath}\hat\jmath}_{i,j}-{\tt w}^{{\hat\imath}\hat\jmath}_{i+1,j+1}, \qquad {\tt d}_{ij}^4={\tt w}^{{\hat\imath}\hat\jmath}_{i,j-1}-{\tt w}^{{\hat\imath}\hat\jmath}_{i+1,j}. \end{align*} If $i$ or $j$ equals $1$ or $n$, the above definition of ${\tt d}_{ij}^k$ should be modified similarly to the modification of $\delta_{ij}^k$ explained above. It follows immediately from \eqref{f_ij_gen}, \eqref{twof_ii} that each ${\tt d}_{ij}^k$ is a difference $\{\log {\tt f}_{i^kj^k},\log {\tt f}_{{\hat\imath}\hat\jmath}\}^D-\{\log \tilde {\tt f}_{i^kj^k},\log {\tt f}_{{\hat\imath}\hat\jmath}\}^D$, where ${\tt f}_{i^kj^k}$ and $\tilde {\tt f}_{i^kj^k}$ are two trailing minors of the same matrix that differ in size by one. For example, for $i=1$ we get ${\tt f}_{i^1j^1}={\tt f}_{n,{\gamma^\ec}^*(j-1)+1}$, ${\tt f}_{i^2j^2}= {\tt f}_{n,{\gamma^\ec}^*(j-1)}$, ${\tt f}_{i^3j^3}={\tt f}_{1j}$, and ${\tt f}_{i^4j^4}={\tt f}_{1,j-1}$. We say that ${\tt d}_{ij}^k$ is of $X$-{\it type\/} if the leading block of ${\tt f}_{i^kj^k}$ is an $X$-block, and of $Y$-{\it type\/} otherwise.
If $i=j+1$ then we set ${\tt f}_{i^1j^1}={\tt f}_{i-1,j}^<$. Consequently, in this case all four ${\tt d}_{ij}^k$ are of $X$-type. Similarly, if $i=j-1$ then we set ${\tt f}_{i^4j^4}={\tt f}_{i,j-1}^>$. Consequently, in this case all four ${\tt d}_{ij}^k$ are of $Y$-type. In what follows we will use the above conventions without indicating that explicitly.
For $i\ne j$ equation \eqref{normneisum} is the restriction to the diagonal $X=Y$ of the equation \begin{equation}\label{dubneisum} {\tt D}_{ij}={\tt d}_{ij}^1-{\tt d}_{ij}^2-{\tt d}_{ij}^3+{\tt d}_{ij}^4= \begin{cases} \lambda\ \quad&\text{for $({\hat\imath},\hat\jmath)=(i,j)$,}\\ 0\ \ \quad&\text{otherwise} \end{cases} \end{equation} in the Drinfeld double. Note that all the quantities involved in the above equation are defined unambiguously.
The case $i=j$ requires a more delicate treatment. It is impossible to fix a choice of ${\tt f}_{i^2j^2}$ and ${\tt f}_{i^3j^3}$ in such a way that \eqref{dubneisum} is satisfied. Consequently, to get \eqref{normneisum}, we treat each contribution to ${\tt D}_{ij}$ computed in Section \ref{sec:basis} separately, and restrict it to the diagonal $X=Y$. The obtained restrictions are combined in a proper way to get $\Delta_{ij}$ and to prove \eqref{normneisum} directly. In more detail, we either set ${\tt f}_{i^2j^2}={\tt f}_{i-1,j-1}^<$ and ${\tt f}_{i^3j^3}={\tt f}_{ij}^>$, or ${\tt f}_{i^2j^2}={\tt f}_{i-1,j-1}^>$ and ${\tt f}_{i^3j^3}={\tt f}_{ij}^<$. In the former case ${\tt d}_{ij}^2$ and ${\tt d}_{ij}^4$ are of $X$-type and ${\tt d}_{ij}^1$ and ${\tt d}_{ij}^3$ are of $Y$-type, while in the latter case ${\tt d}_{ij}^3$ and ${\tt d}_{ij}^4$ are of $X$-type and ${\tt d}_{ij}^1$ and ${\tt d}_{ij}^2$ are of $Y$-type. Note that in both cases the restriction to the diagonal yields the same pair of functions.
Similarly, in the case ${\hat\imath}=\hat\jmath$ we set either ${\tt f}^2={\tt f}_{{\hat\imath}\hat\jmath}^<$ or ${\tt f}^2={\tt f}_{{\hat\imath}\hat\jmath}^>$, depending on the choice of the corresponding ${\tt f}^1$, so that ${\tt f}^1$ and ${\tt f}^2$ have the same type.
\subsection{Diagonal contributions} \label{trivicon} Recall that the bracket in the double is computed via equation \eqref{bra}. In this section we find the contributions of the first five terms in \eqref{bra} to ${\tt D}_{ij}$.
\begin{proposition}\label{ftcon} The contribution of the first term in \eqref{bra} to ${\tt D}_{ij}$ vanishes. \end{proposition}
\begin{proof} Similarly to the operators $E_L$ and $E_R$ defined in Section \ref{sec:bra}, define operators $\bar E_L$ and $\bar E_R$ via $\bar E_L=\nabla_X X-\nabla_Y Y$ and $\bar E_R=X\nabla_X-Y\nabla_Y$.
Note that by \eqref{R0L12}, \eqref{R0L}, the first term in \eqref{bra} can be rewritten as \begin{multline}\label{ft} \left\langle R_0^{{\rm c}}(E_L^1),E_L^2\right\rangle=\left\langle \left(\xi_L^1\right)_0, A_L^2\right\rangle+ \left\langle \left(\eta_L^1\right)_0, B_L^2\right\rangle+\operatorname{Tr} (E_L^1)\cdot p_L^2\\ +\operatorname{Tr}\left(\frac1{1-{\gamma^\ec}^*}\eta_L^1\right)\cdot q_L^2-\operatorname{Tr}\left(\frac1{1-{\gamma^\ec}}\xi_L^1\right)\cdot q_L^2- \operatorname{Tr}(\bar E_L^1)\cdot q_L^2, \end{multline} where $A_L^2$ and $B_L^2$ are matrices depending only on ${\tt f}^2$ and $p_L^2$ and $q_L^2$ are functions depending only on ${\tt f}^2$.
\begin{lemma}\label{econ} The contribution of the third term in \eqref{ft} to any one of ${\tt d}_{ij}^k$, $1\le k\le 4$, equals~$p_L^2$. \end{lemma}
\begin{proof} For any ${\tt f}$, \[ \operatorname{Tr}(E_L \log{\tt f})=\frac1{{\tt f}}\sum_{i,j=1}^n\left(\frac{\partial{\tt f}}{\partial x_{ij}} x_{ij}
+\frac{\partial{\tt f}}{\partial y_{ij}}y_{ij}\right)=\left.\frac{d}{dt}\right|_{t=1}\log{\tt f}(tX,tY). \] If ${\tt f}$ is a homogeneous polynomial, then the above expression equals its total degree. Recall that ${\tt f}_{i^kj^k}$ satisfies this condition, and that ${\operatorname{deg}} {\tt f}_{i^kj^k}-{\operatorname{deg}}\tilde{\tt f}_{i^kj^k}=1$. \end{proof}
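The homogeneity computation behind Lemma \ref{econ} can be spelled out explicitly. For a homogeneous polynomial ${\tt f}$ of total degree $d$ one has
\[
\log{\tt f}(tX,tY)=\log\bigl(t^{d}{\tt f}(X,Y)\bigr)=d\log t+\log{\tt f}(X,Y),
\qquad\text{so}\qquad
\left.\frac{d}{dt}\right|_{t=1}\log{\tt f}(tX,tY)=d.
\]
Consequently, the contribution of the third term in \eqref{ft} to ${\tt d}_{ij}^k$ equals
\[
\left({\operatorname{deg}}\,{\tt f}_{i^kj^k}-{\operatorname{deg}}\,\tilde{\tt f}_{i^kj^k}\right)\cdot p_L^2=p_L^2.
\]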
\begin{lemma}\label{barecon} The contribution of the sixth term in \eqref{ft} to any one of ${\tt d}_{ij}^k$, $1\le k\le 4$, equals~$q_L^2$ if ${\tt d}_{ij}^k$ is of $X$-type and $-q_L^2$ otherwise. \end{lemma}
\begin{proof} For any ${\tt f}$, \[ \operatorname{Tr}(\bar E_L \log{\tt f})=\frac1{{\tt f}}\sum_{i,j=1}^n\left(\frac{\partial{\tt f}}{\partial x_{ij}} x_{ij}
-\frac{\partial{\tt f}}{\partial y_{ij}}y_{ij}\right)=\left.\frac{d}{dt}\right|_{t=1}\log{\tt f}(tX,t^{-1}Y). \] If ${\tt f}$ is a homogeneous polynomial both in $x$-variables and in $y$-variables, then the above expression equals ${\operatorname{deg}}_x{\tt f}-{\operatorname{deg}}_y{\tt f}$. Recall that ${\tt f}_{i^kj^k}$ satisfies this condition and that ${\operatorname{deg}}_x{\tt f}_{i^kj^k}-{\operatorname{deg}}_x\tilde{\tt f}_{i^kj^k}$ equals~$1$ if ${\tt f}_{i^kj^k}$ is of $X$-type and~$0$ if it is of $Y$-type, while ${\operatorname{deg}}_y{\tt f}_{i^kj^k}-{\operatorname{deg}}_y\tilde{\tt f}_{i^kj^k}$ equals~$0$ if ${\tt f}_{i^kj^k}$ is of $X$-type and~$1$ if it is of $Y$-type. \end{proof}
Recall that every point of a nontrivial $X$-run except for the last point belongs to $\Gamma_1$. We denote by $\mathring{\Gamma}_1$ the union of all nontrivial $X$-runs, and by ${\mathring{\gamma}}$ the extension of $\gamma$ that takes the last point of a nontrivial $X$-run $\Delta$ to the last point of $\gamma(\Delta)$. In a similar way we define $\mathring{\Gamma}_2$ and ${\mathring{\gamma}}^*$.
\begin{lemma}\label{xietacon} {\rm (i)} The contribution of the first term in \eqref{ft} to ${\tt d}_{ij}^3$ equals $(A_L^2)_{jj}$ if ${\tt d}_{ij}^3$
is of $Y$-type, $(A_L^2)_{{{\cgamma^{\rm c}}}(j){{\cgamma^{\rm c}}}(j)}-|\Delta(j)|^{-1}\sum_{k\in\Delta(j)}(A_L^2)_{kk}$ if ${\tt d}_{ij}^3$ is of $X$-type and $j\in\mathring{\Gamma}_1^{\rm c}$, and $0$ otherwise.
{\rm (ii)} The contribution of the second term in \eqref{ft} to ${\tt d}_{ij}^3$ equals $(B_L^2)_{jj}$ if ${\tt d}_{ij}^3$
is of $X$-type, $(B_L^2)_{{\mathring{\gamma}}^{{\rm c}*}(j){\mathring{\gamma}}^{{\rm c}*}(j)}-|\bar\Delta(j)|^{-1}\sum_{k\in\bar\Delta(j)}(B_L^2)_{kk}$ if ${\tt d}_{ij}^3$ is of $Y$-type and $j\in\mathring{\Gamma}_2^{\rm c}$, and $0$ otherwise. \end{lemma}
\begin{proof} (i) Define an $n\times n$ matrix $J_m(t)$ as the identity matrix with the entry $(m,m)$ replaced by $t$, and set $X_m(t)=X J_m(t)$, $Y_m(t)=Y J_m(t)$. By the definition of $\mathring{\xi}_L$, for any ${\tt f}$ one has \begin{align*} (\mathring{\xi}_L \log{\tt f})_{ll}=&\frac1{\tt f}\sum_{i=1}^n\frac{\partial{\tt f}}{\partial x_{i{\mathring{\gamma}}^{{\rm c}*}(l)}}x_{i{\mathring{\gamma}}^{{\rm c}*}(l)}+ \frac1{\tt f}\sum_{i=1}^n\frac{\partial{\tt f}}{\partial y_{il}}y_{il}\\
=&\left.\frac{d}{dt}\right|_{t=1}\log{\tt f}(X_{{{\mathring{\gamma}}}^{{\rm c}*}(l)}(t),Y_l(t)). \end{align*} If ${\tt f}$ is a minor of a matrix ${\mathcal L}\in{\mathbf L}\cup\{X,Y\}$, then the above expression equals the total number of columns $l$ in all column $Y$-blocks involved in this minor plus the total number of columns ${\mathring{\gamma}}^{{\rm c}*}(l)$ in all column $X$-blocks involved in this minor (note that $l\ne {\mathring{\gamma}}^{{\rm c}*}(l)$, and hence all such columns are different). Recall that the minors ${\tt f}_{i^3j^3}={\tt f}_{ij}$ and $\tilde{\tt f}_{i^3j^3}$ differ in size by one, and that the column missing in the latter minor is $j$. Consequently, if ${\tt d}_{ij}^3$ is of $Y$-type, $(\mathring{\xi}_L \log{\tt f}_{i^3j^3})_{ll}- (\mathring{\xi}_L \log\tilde{\tt f}_{i^3j^3})_{ll}$ equals~$1$ if $l=j$, which yields $(A_L^2)_{jj}$, and vanishes otherwise. Similarly, if ${\tt d}_{ij}^3$ is of $X$-type, this difference equals~$1$ if $j\in\mathring{\Gamma}_1^{\rm c}$ and $l={{\cgamma^{\rm c}}}(j)$, which yields $(A_L^2)_{{{\cgamma^{\rm c}}}(j){{\cgamma^{\rm c}}}(j)}$, and vanishes otherwise. Finally, the additional term
$-|\Delta(j)|^{-1}\sum_{k\in\Delta(j)}(A_L^2)_{kk}$ stems from the difference between $(\xi_L\log{\tt f})_0$ and $(\mathring{\xi}_L\log{\tt f})_0$, see Section \ref{simplega}.
(ii) The proof is similar to the proof of (i). \end{proof}
To prove Proposition~\ref{ftcon}, consider the contributions of the terms in the right hand side of \eqref{ft} to ${\tt D}_{ij}$.
Let us prove that the contributions of the first term to ${\tt d}_{ij}^1$ and ${\tt d}_{ij}^3$ cancel each other, as well as the contributions to ${\tt d}_{ij}^2$ and ${\tt d}_{ij}^4$. Assume first that $1<i<j\le n$. Clearly, in this case all ${\tt d}_{ij}^k$ are of $Y$-type, and \begin{equation}\label{ttdiden} {\tt d}_{ij}^1={\tt d}_{i-1,j}^3,\qquad {\tt d}_{ij}^2={\tt d}_{i-1,j-1}^3, \qquad{\tt d}_{ij}^4={\tt d}_{i,j-1}^3. \end{equation} By Lemma \ref{xietacon}(i), the contribution of the first term in \eqref{ft} to ${\tt d}_{ij}^3$ depends only on the column index $j$ and on the type; since by \eqref{ttdiden} the pairs ${\tt d}_{ij}^1$, ${\tt d}_{ij}^3$ and ${\tt d}_{ij}^2$, ${\tt d}_{ij}^4$ correspond to the columns $j$ and $j-1$, respectively, the sought-for cancellations hold true. Consequently, the contribution of the first term in \eqref{ft} to ${\tt D}_{ij}$ vanishes.
Assume next that $1<j<i\le n$. In this case all ${\tt d}_{ij}^k$ are of $X$-type, and \eqref{ttdiden} holds. Hence by Lemma \ref{xietacon}(i), the contribution of the first term in \eqref{ft} to ${\tt D}_{ij}$ vanishes, similarly to the previous case.
The next case is $1<i=j\le n$. In this case we choose ${\tt f}_{i^2j^2}$ and ${\tt f}_{i^3j^3}$ in such a way that ${\tt d}_{ij}^1$ and ${\tt d}_{ij}^3$ are of $Y$-type and ${\tt d}_{ij}^2$ and ${\tt d}_{ij}^4$ are of $X$-type, and \eqref{ttdiden} holds, so the contribution of the first term in \eqref{ft} to ${\tt D}_{ij}$ vanishes once again.
Assume now that $1=i<j\le n$. In this case ${\tt d}_{1j}^1$ and ${\tt d}_{1j}^2$ are of $X$-type and ${\tt d}_{1j}^3$ and ${\tt d}_{1j}^4$ are of $Y$-type. Relations \eqref{ttdiden} are replaced by \[
{\tt d}_{1j}^1={\tt d}_{nl}^3,\qquad {\tt d}_{1j}^2={\tt d}_{n,l-1}^3, \qquad{\tt d}_{1j}^4={\tt d}_{1,j-1}^3, \] where ${\gamma^\ec}(l-1)=j-1$, see Section \ref{thequiver}, and in particular, Fig.~\ref{fig:1jnei}. Consequently, ${{\cgamma^{\rm c}}}(l-1)=j-1$ and ${{\cgamma^{\rm c}}}(l)=j$, and hence by Lemma \ref{xietacon}(i), the sought-for cancellations hold true.
Finally, assume that $1=j<i\le n$. In this case ${\tt d}_{i1}^1$ and ${\tt d}_{i1}^3$ are of $X$-type and ${\tt d}_{i1}^2$ and ${\tt d}_{i1}^4$ are of $Y$-type. Relations \eqref{ttdiden} are replaced by \[
{\tt d}_{i1}^1={\tt d}_{i-1,1}^3,\qquad {\tt d}_{i1}^2={\tt d}_{l-1,n}^3, \qquad{\tt d}_{i1}^4={\tt d}_{ln}^3, \] where ${\gamma^\er}(i-1)=l-1$, see Section \ref{thequiver}, and in particular, Fig.~\ref{fig:i1nei}. Consequently, by Lemma \ref{xietacon}(i), the sought-for cancellations hold true.
To treat the second term in \eqref{ft} we reason exactly in the same way and use Lemma \ref{xietacon}(ii) instead.
The third term in \eqref{ft} is treated trivially with the help of Lemma \ref{econ}.
Cancellations for the fourth term follow from the cancellations for the second term established above and the fact that $\frac1{1-{\gamma^\ec}^*}$ is a linear operator. Similarly, cancellations for the fifth term follow from the cancellations for the first term established above and the fact that $\frac1{1-{\gamma^\ec}}$ is a linear operator.
Finally, the sixth term is treated similarly to the first one based on Lemma \ref{barecon}. \end{proof}
\begin{proposition}\label{stcon} The contribution of the second term in \eqref{bra} to ${\tt D}_{ij}$ vanishes. \end{proposition}
\begin{proof} The proof of this proposition is similar to the proof of Proposition \ref{ftcon} and is based on analogs of Lemmas \ref{econ}--\ref{xietacon}. Note that the analog of Lemma \ref{xietacon} claims that contributions of $(\xi_R^1)_0$ and $(\eta_R^1)_0$ to ${\tt D}_{ij}$ depend on $i$, ${\cgamma^{\rm r}}(i)$, and ${\mathring{\gamma}}^{{\rm r}*}(i)$. In the treatment of the case $1<i=j\le n$ we choose ${\tt f}_{i^2j^2}$ and ${\tt f}_{i^3j^3}$ in such a way that ${\tt d}_{ij}^1$ and ${\tt d}_{ij}^2$ are of $Y$-type and ${\tt d}_{ij}^3$ and ${\tt d}_{ij}^4$ are of $X$-type. \end{proof}
\begin{proposition}\label{tffcon} The contributions of the third, fourth, and fifth terms in \eqref{bra} to ${\tt D}_{ij}$ vanish. \end{proposition}
\begin{proof} The claim for the third term essentially coincides with the similar claim for the first term in \eqref{ft}, the claim for the fourth term essentially coincides with the similar claim for the second term in \eqref{ft}, and the claim for the fifth term uses additionally the fact that $\Pi_{\hat\Gamma_1^{\rm c}}$ is a linear operator. \end{proof}
\subsection{Non-diagonal contributions} \label{nontrivicon} In this section we find the contributions of the four remaining terms in \eqref{bra} to ${\tt D}_{ij}$. More exactly, we will be dealing with the contributions of the corresponding ringed versions. The contribution of the difference between the ordinary and the ringed version to ${\tt D}_{ij}$ vanishes similarly to the contributions treated in the previous section.
\subsubsection{Case $1<j<i<n$}\label{typicase} In this case all seven functions ${\tt f}_{i^kj^k}$, $\tilde{\tt f}_{i^kj^k}$ satisfy the conditions of Case 1 in Section \ref{case1}. Consequently, the leading block of ${\tt f}_{i^1j^1}={\tt f}_{i-1,j}$ and $\tilde{\tt f}_{i^1j^1}={\tt f}_{i,j+1}$ is $X_I^J$, the leading block of ${\tt f}_{i^2j^2}={\tt f}_{i-1,j-1}$, $\tilde{\tt f}_{i^2j^2}={\tt f}_{i^3j^3}={\tt f}_{ij}$, and $\tilde{\tt f}_{i^3j^3}={\tt f}_{i+1,j+1}$ is $X_{I'}^{J'}$, and the leading block of ${\tt f}_{i^4j^4}={\tt f}_{i,j-1}$ and $\tilde{\tt f}_{i^4j^4}={\tt f}_{i+1,j}$ is $X_{I''}^{J''}$.
We have to compute the contributions of \eqref{etaleta}, \eqref{etareta}, \eqref{xinay}, and \eqref{xinax}. Note that the first term in \eqref{etareta} looks exactly the same as terms already treated in Section \ref{trivicon}, and hence its contribution to ${\tt D}_{ij}$ vanishes. The fourth term in \eqref{etareta} vanishes under the conditions of Case 1, since both ${\left(\nabla_{\L}^1\L^1\right)}_{\sigma(\bar L_t^2)}^{\sigma(\bar L_t^2)}$ and ${\left(\L^1\nabla_{\L}^1\right)}_{\sigma(\bar K_t^2)}^{\sigma(\bar K_t^2)}$ vanish. Next, the contribution of the last term in \eqref{xinay} to any one of ${\tt d}_{ij}^k$ vanishes, since the leading blocks of ${\tt f}_{i^kj^k}$ and $\tilde{\tt f}_{i^kj^k}$ coincide. The same holds true for the last term in \eqref{xinax}. Further, the contributions of the third term in \eqref{xinay} to ${\tt d}_{ij}^1$ and to ${\tt d}_{ij}^3$ coincide, as well as the contributions of this term to ${\tt d}_{ij}^2$ and to ${\tt d}_{ij}^4$, since they depend only on $j^k$, and $j^1=j^3=j$, $j^2=j^4=j-1$. The same holds true for the fourth term in \eqref{xinay}. Similarly, the contributions of the fourth term in \eqref{xinax} to ${\tt d}_{ij}^1$ and to ${\tt d}_{ij}^2$ coincide, as well as the contributions of this term to ${\tt d}_{ij}^3$ and to ${\tt d}_{ij}^4$, since they depend only on $i^k$, and $i^1=i^2=i-1$, $i^3=i^4=i$. The same holds true for the fifth term in \eqref{xinax}.
The total contribution of all $B$-terms involved in the above formulas is given in Lemma \ref{zone1lemma}. Note that the contributions of the third, sixth, ninth and tenth terms in Lemma \ref{zone1lemma} to any one of ${\tt d}_{ij}^k$ vanish, since these terms depend on ${\tt f}^1$ only through the blocks over which the summation goes. The latter, in turn, is completely determined by the leading block of ${\tt f}^1$, and the leading blocks of ${\tt f}_{i^kj^k}$ and $\tilde{\tt f}_{i^kj^k}$ coincide.
To proceed further assume first that $X_I^J=X_{I'}^{J'}=X_{I''}^{J''}$. Consider the first sum in the third term in \eqref{etaleta}. Each block involved in this sum contributes an equal amount to ${\tt d}_{ij}^1$ and ${\tt d}_{ij}^2$, as well as to ${\tt d}_{ij}^3$ and ${\tt d}_{ij}^4$, so the total contribution of the block vanishes. Similarly, for the second sum in the third term in \eqref{etaleta}, each block involved contributes an equal amount to ${\tt d}_{ij}^1$ and ${\tt d}_{ij}^3$, as well as to ${\tt d}_{ij}^2$ and ${\tt d}_{ij}^4$, so the total contribution of the block vanishes as well.
The first, the second, and the fifth term in Lemma \ref{zone1lemma} are treated exactly as the first sum in the third term in \eqref{etaleta}, and the fourth term, exactly as the second sum in the third term in \eqref{etaleta}. Consequently, all these contributions vanish. We thus see that ${\tt D}_{ij}={\tt D}_{ij}[7]-{\tt D}_{ij}[8]$, where ${\tt D}_{ij}[7]$ and ${\tt D}_{ij}[8]$ are the contributions of the seventh and the eighth terms in Lemma \ref{zone1lemma} to ${\tt D}_{ij}$.
To treat ${\tt D}_{ij}[7]$, recall that the sum in the seventh term is taken over the cases when the exit point of $X_{I_t^2}^{J_t^2}$ lies above the exit point of $X_{I_p^1}^{J_p^1}$. Consequently, the treatment in the cases when the exit point of ${\tt f}^2$ lies above the exit point of ${\tt f}_{i^1j^1}$ is again exactly the same as for the first sum in the third term in \eqref{etaleta}, and the corresponding contribution vanishes. If the exit point of ${\tt f}^2$ coincides with the exit point of ${\tt f}_{i^1j^1}$, that is, if $\hat\imath-\hat\jmath=i-j-1$, one has \begin{equation}\label{cont7u} {\tt D}_{ij}[7]=-{\tt d}_{ij}^2[7]-{\tt d}_{ij}^3[7]+{\tt d}_{ij}^4[7]= \begin{cases} -\#^1-1 \quad &\text{for $\hat\imath< i$},\\ -\#^1 \quad &\text{for $\hat\imath\ge i$}, \end{cases} \end{equation} where $\#^1$ is the number of non-leading blocks of ${\tt f}^2$ satisfying the corresponding conditions. If the exit point of ${\tt f}^2$ coincides with the exit point of ${\tt f}_{i^2j^2}$, that is, if $\hat\imath-\hat\jmath=i-j$, one has \begin{equation*}\label{cont7m} {\tt D}_{ij}[7]={\tt d}_{ij}^4[7]= \begin{cases} \#^2+1 \quad &\text{for $\hat\imath\le i$},\\ \#^2 \quad &\text{for $\hat\imath>i$}, \end{cases} \end{equation*} where $\#^2$ is the number of non-leading blocks of ${\tt f}^2$ satisfying the corresponding conditions. The cases when the exit point of ${\tt f}^2$ lies below the exit point of ${\tt f}_{i^2j^2}$ do not contribute to ${\tt D}_{ij}[7]$.
Similarly, the treatment of ${\tt D}_{ij}[8]$ in the cases when the exit point of ${\tt f}^2$ lies above the exit point of ${\tt f}_{i^1j^1}$ is exactly the same as for the second sum in the third term in \eqref{etaleta}, and the corresponding contribution vanishes. If the exit point of ${\tt f}^2$ coincides with the exit point of ${\tt f}_{i^1j^1}$, one has \begin{equation}\label{cont8u} {\tt D}_{ij}[8]=-{\tt d}_{ij}^2[8]-{\tt d}_{ij}^3[8]+{\tt d}_{ij}^4[8]= \begin{cases} -\#^1-1 \quad &\text{for $\hat\jmath\le j$},\\ -\#^1 \quad &\text{for $\hat\jmath> j$}, \end{cases} \end{equation} where $\#^1$ is the same as above. If the exit point of ${\tt f}^2$ coincides with the exit point of ${\tt f}_{i^2j^2}$, one has \begin{equation*}\label{cont8m} {\tt D}_{ij}[8]={\tt d}_{ij}^4[8]= \begin{cases} \#^2+1 \quad &\text{for $\hat\jmath< j$},\\ \#^2 \quad &\text{for $\hat\jmath\ge j$}, \end{cases} \end{equation*} where $\#^2$ is the same as above. The cases when the exit point of ${\tt f}^2$ lies below the exit point of ${\tt f}_{i^2j^2}$ do not contribute to ${\tt D}_{ij}[8]$.
It follows from the above discussion that for $\hat\imath-\hat\jmath=i-j-1$ \begin{equation*} {\tt D}_{ij}[7]-{\tt D}_{ij}[8]=\begin{cases} 1\quad &\text{for $\hat\imath\ge i$, $\hat\jmath\le j$},\\ -1\quad &\text{for $\hat\imath<i$, $\hat\jmath>j$},\\ 0\quad &\text{otherwise}. \end{cases} \end{equation*} Consequently, ${\tt D}_{ij}$ vanishes everywhere on the line $\hat\imath-\hat\jmath=i-j-1$. Further, for $\hat\imath-\hat\jmath=i-j$ one has \begin{equation*} {\tt D}_{ij}[7]-{\tt D}_{ij}[8]=\begin{cases} 1\quad &\text{for $\hat\imath\le i$, $\hat\jmath\ge j$},\\ -1\quad &\text{for $\hat\imath>i$, $\hat\jmath<j$},\\ 0\quad &\text{otherwise}. \end{cases} \end{equation*}
Consequently, ${\tt D}_{ij}$ vanishes everywhere on the line $\hat\imath-\hat\jmath=i-j$ except for the point $(\hat\imath,\hat\jmath)=(i,j)$, where it equals one. Therefore, for $X_I^J=X_{I'}^{J'}=X_{I''}^{J''}$ relation \eqref{dubneisum} holds with $\lambda=1$.
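Both case distinctions above can be verified directly against \eqref{cont7u} and \eqref{cont8u}. For example, on the line $\hat\imath-\hat\jmath=i-j-1$ one gets
\[
{\tt D}_{ij}[7]-{\tt D}_{ij}[8]=
\begin{cases}
-\#^1-(-\#^1-1)=1\quad&\text{for $\hat\imath\ge i$, $\hat\jmath\le j$},\\
(-\#^1-1)-(-\#^1)=-1\quad&\text{for $\hat\imath<i$, $\hat\jmath>j$},
\end{cases}
\]
while in the two remaining cases both contributions coincide ($-\#^1-1$ for $\hat\imath<i$, $\hat\jmath\le j$, and $-\#^1$ for $\hat\imath\ge i$, $\hat\jmath>j$), so the difference vanishes. The case $\hat\imath-\hat\jmath=i-j$ is checked in the same way.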
There are three more possibilities for relations between the blocks $X_I^J$, $X_{I'}^{J'}$, $X_{I''}^{J''}$:
a) $X_I^J\ne X_{I'}^{J'}=X_{I''}^{J''}$;
b) $X_I^J=X_{I'}^{J'}\ne X_{I''}^{J''}$;
c) $X_I^J\ne X_{I'}^{J'}\ne X_{I''}^{J''}$.
To treat each of these three one has to consider correction terms with respect to the basic case $X_I^J=X_{I'}^{J'}=X_{I''}^{J''}$. We illustrate this treatment for the first of the above possibilities.
By Lemma \ref{compar}, case a) can be further subdivided into three subcases:
a1) $I'=I$, $J'\subsetneq J$;
a2) $I'\subsetneq I$, $J'=J$;
a3) $I'\subsetneq I$, $J'\subsetneq J$.
In case a1) we have the following correction terms. For the third term in \eqref{etaleta}, there are blocks $X_{\tilde I}^{J'}$ that satisfy the summation condition $\beta_t^2<\beta_p^1$ for the pair ${\tt f}_{i^1j^1}$, $\tilde{\tt f}_{i^1j^1}$ but violate it for the other three pairs. By Lemma \ref{compar}, such blocks are characterized by conditions $\tilde I \subseteq I$, $\tilde J=J'$.
Consequently, these blocks produce the correction term \begin{equation*} -\sum_{\tilde J=J'} \left\langle{\left(\L^1\nabla_{\L}^1\right)}_{\rho(K_t^2)}^{\rho(K_t^2)}{\left(\L^2\nabla_{\L}^2\right)}_{K_t^2}^{K_t^2}\right\rangle+ \sum_{\tilde J=J'} \left\langle{\left(\nabla_{\L}^1\L^1\right)}_{\rho(L_t^2)}^{\rho(L_t^2)}{\left(\nabla_{\L}^2\L^2\right)}_{L_t^2}^{L_t^2}\right\rangle \end{equation*} to ${\tt d}_{ij}^1$.
For the first term in Lemma \ref{zone1lemma}, the correction terms are defined by the same blocks as above except for the block $X_{I'}^{J'}$ itself (because of the additional summation condition $\alpha_t^2>\alpha_p^1$). Consequently, these blocks produce the correction term \begin{equation*} \sum_{\tilde J=J'} \left\langle{\left(\L^1\nabla_{\L}^1\right)}_{\rho(\Phi_t^2)}^{\rho(\Phi_t^2)}{\left(\L^2\nabla_{\L}^2\right)}_{\Phi_t^2}^{\Phi_t^2}\right\rangle- \sum_{\tilde J=J'\atop \tilde I=I'}\left\langle{\left(\L^1\nabla_{\L}^1\right)}_{\rho(\Phi')}^{\rho(\Phi')}{\left(\L^2\nabla_{\L}^2\right)}_{\Phi'}^{\Phi'}\right\rangle \end{equation*} to ${\tt d}_{ij}^1$, where $\Phi'$ corresponds to the block $X_{I'}^{J'}$.
For the second term in Lemma \ref{zone1lemma}, the block $X_I^J$ violates the summation condition $\beta_t^2\ne\beta_p^1$, $\alpha_t^2=\alpha_p^1$ for the pair ${\tt f}_{i^1j^1}$, $\tilde{\tt f}_{i^1j^1}$ but satisfies it for the other three pairs. Besides, the block $X_{I'}^{J'}$ satisfies this condition for the pair ${\tt f}_{i^1j^1}$, $\tilde{\tt f}_{i^1j^1}$ but violates it for the other three pairs. Consequently, these two blocks produce correction terms \begin{equation*} \sum_{\tilde J=J'\atop \tilde I=I'}\left\langle{\left(\L^1\nabla_{\L}^1\right)}_{\rho(\Phi')}^{\rho(\Phi')}{\left(\L^2\nabla_{\L}^2\right)}_{\Phi'}^{\Phi'}\right\rangle -\sum_{\tilde J=J\atop \tilde I=I}\left\langle{\left(\L^1\nabla_{\L}^1\right)}_{\Phi}^{\Phi}{\left(\L^2\nabla_{\L}^2\right)}_{\Phi}^{\Phi}\right\rangle \end{equation*} to ${\tt d}_{ij}^1$, where $\Phi$ corresponds to the block $X_I^J$.
For the fourth term in Lemma \ref{zone1lemma}, the blocks $X_{\tilde I}^{J'}$ violate the summation condition $\beta_t^2=\beta_p^1$, $\alpha_t^2\ge \alpha_p^1$ for the pair ${\tt f}_{i^1j^1}$, $\tilde{\tt f}_{i^1j^1}$ but satisfy it for the other three pairs. Besides, the block $X_I^J$ satisfies this condition for the pair ${\tt f}_{i^1j^1}$, $\tilde{\tt f}_{i^1j^1}$ but violates it for the other three pairs. Consequently, these blocks produce correction terms \begin{equation*} -\sum_{\tilde J=J'} \left\langle{\left(\nabla_{\L}^1\L^1\right)}_{\rho(L_t^2)}^{\rho(L_t^2)}{\left(\nabla_{\L}^2\L^2\right)}_{L_t^2}^{L_t^2}\right\rangle +\sum_{\tilde J=J\atop \tilde I=I}\left\langle{\left(\nabla_{\L}^1\L^1\right)}_{L}^{L}{\left(\nabla_{\L}^2\L^2\right)}_{L}^{L}\right\rangle \end{equation*} to ${\tt d}_{ij}^1$, where $L$ corresponds to the block $X_I^J$.
Summation conditions in the fifth term in Lemma \ref{zone1lemma} are exactly the same as in the fourth term. Consequently, one gets correction terms \begin{equation*} \sum_{\tilde J=J'} \left\langle{\left(\L^1\nabla_{\L}^1\right)}_{\rho(K_t^2\setminus\Phi_t^2)}^{\rho(K_t^2\setminus\Phi_t^2)} {\left(\L^2\nabla_{\L}^2\right)}_{K_t^2\setminus\Phi_t^2}^{K_t^2\setminus\Phi_t^2}\right\rangle- \sum_{\tilde J=J\atop \tilde I=I}\left\langle{\left(\L^1\nabla_{\L}^1\right)}_{K\setminus\Phi}^{K\setminus\Phi}{\left(\L^2\nabla_{\L}^2\right)}_{K\setminus\Phi}^{K\setminus\Phi}\right\rangle \end{equation*} to ${\tt d}_{ij}^1$, where $K$ corresponds to the block $X_I^J$.
For the seventh term in Lemma \ref{zone1lemma}, the block $X_I^J$ satisfies the summation condition $\beta_t^2=\beta_p^1$, $\alpha_t^2=\alpha_p^1$ for the pair ${\tt f}_{i^1j^1}$, $\tilde{\tt f}_{i^1j^1}$ but violates it for the other three pairs. Besides, the additional condition on the exit points excludes the diagonal $\hat\imath-\hat\jmath=i-j-1$. Consequently, this block produces correction terms \begin{equation*} \sum_{\tilde J=J\atop \tilde I=I}\left\langle{\left(\L^1\nabla_{\L}^1\right)}_{\Phi}^{\Phi}{\left(\L^2\nabla_{\L}^2\right)}_{\Phi}^{\Phi}\right\rangle +{\tt D}_{ij}[7] \end{equation*} to ${\tt d}_{ij}^1$, where ${\tt D}_{ij}[7]$ is given by \eqref{cont7u}.
For the eighth term in Lemma \ref{zone1lemma}, the situation is exactly the same as for the seventh term. Consequently, one gets correction terms \begin{equation*} -\sum_{\tilde J=J\atop \tilde I=I}\left\langle{\left(\nabla_{\L}^1\L^1\right)}_{L}^{L}{\left(\nabla_{\L}^2\L^2\right)}_{L}^{L}\right\rangle-{\tt D}_{ij}[8] \end{equation*} to ${\tt d}_{ij}^1$, where ${\tt D}_{ij}[8]$ is given by \eqref{cont8u}.
It is easy to see that the correction terms listed above cancel one another (recall that the vanishing of ${\tt D}_{ij}[7]-{\tt D}_{ij}[8]$ for $\hat\imath-\hat\jmath=i-j-1$ was already proved above), and hence relation \eqref{dubneisum} is established in the case a1). Cases a2), a3), b), and c) are treated in a similar manner.
\subsubsection{Other cases} The case $1<i<j<n$ is treated in a similar way with \eqref{etaleta} replaced by \eqref{etareta} and Lemma \ref{zone1lemma} replaced by Lemma \ref{zone2lemma}.
Consider the case $1<i=j<n$. The treatment of the first term in \eqref{etareta}, the last terms in \eqref{xinay} and \eqref{xinax}, the third, sixth, ninth and tenth terms in Lemma \ref{zone1lemma}, and the third and the sixth terms in Lemma \ref{zone2lemma} is exactly the same as in the previous section. The third and the fourth terms in \eqref{xinay}, as well as the fourth and the fifth terms in \eqref{xinax}, are treated almost in the same way as in the previous section; the only difference is an appropriate choice of the functions on the diagonal, which ensures the required cancellations. To treat all the other contributions, recall that by definition, the leading block of ${\tt f}_{ii}^<$ is $X$, and the leading block of ${\tt f}_{ii}^>$ is $Y$. Denote by $X_I^J$ the leading block of ${\tt f}_{i,i-1}$, and by $Y_{\bar I}^{\bar J}$ the leading block of ${\tt f}_{i-1,i}$. Similarly to Section \ref{typicase}, there are four possible cases: $X_I^J=X$, $Y_{\bar I}^{\bar J}=Y$; $X_I^J\ne X$, $Y_{\bar I}^{\bar J}=Y$; $X_I^J=X$, $Y_{\bar I}^{\bar J}\ne Y$; $X_I^J\ne X$, $Y_{\bar I}^{\bar J}\ne Y$.
Let us consider the first of the above four cases. Contributions of all terms except for the seventh and the eighth terms in Lemmas \ref{zone1lemma} and \ref{zone2lemma} are treated in the same way as the third and the fourth terms in \eqref{xinay} above. For example, to treat the first sum in the third term in \eqref{etaleta} we choose ${\tt f}_{i^2j^2}={\tt f}_{i-1,j-1}^<$ and ${\tt f}_{i^3j^3}={\tt f}_{ij}^>$, so that this sum contributes only to $\delta_{ij}^2$ and $\delta_{ij}^4$, and the contributions cancel each other. For the remaining four terms, there is a subtlety in the case
${\hat\imath}=\hat\jmath$. We write $f_{{\hat\imath}{\hat\imath}}=\left.\frac12{\tt f}_{{\hat\imath}{\hat\imath}}^<\right|_{X=Y}+\left.\frac12{\tt f}_{{\hat\imath}{\hat\imath}}^>\right|_{X=Y}$ and note that $X$ is the only block for ${\tt f}_{{\hat\imath}{\hat\imath}}^<$ and $Y$ is the only block for ${\tt f}_{{\hat\imath}{\hat\imath}}^>$. Consequently, for ${\tt f}^2= \frac12{\tt f}_{{\hat\imath}{\hat\imath}}^<$, the terms involved in Lemma \ref{zone1lemma} contribute zero for ${\hat\imath} \ne i$ and $1/2$ for ${\hat\imath}= i$, while the terms involved in Lemma \ref{zone2lemma} contribute zero for any ${\hat\imath}$. Similarly, for ${\tt f}^2= \frac12{\tt f}_{{\hat\imath}{\hat\imath}}^>$, the terms involved in Lemma \ref{zone1lemma} contribute zero for any ${\hat\imath}$, while the terms involved in Lemma \ref{zone2lemma} contribute zero for ${\hat\imath} \ne i$ and $1/2$ for ${\hat\imath}= i$. Therefore, we get contribution $1$ for $(i,j)=({\hat\imath},\hat\jmath)$, as required. In the remaining three cases one has to consider correction terms, similarly to Section \ref{typicase}.
It remains to consider the cases when $i$ or $j$ are equal to $1$ or $n$. For example, let $1<j<i=n$ and assume that the degree of the vertex $(n,j)$ in $Q_{\bfG^{{\rm r}},\bfG^{{\rm c}}}$ equals~6, see Fig.~\ref{fig:njnei}(a). It follows from the description of the quiver in Section \ref{thequiver} that $(n,j-1)$ is a mutable vertex. In this case the functions $\tilde{\tt f}_{i^3j^3}$ and $\tilde{\tt f}_{i^4j^4}$ satisfy conditions of Case 2 in Section~\ref{case2}, and all other functions satisfy conditions of Case 1 in Section~\ref{case1}. Consequently, the leading block of ${\tt f}_{i^1j^1}={\tt f}_{n-1,j}$ and $\tilde{\tt f}_{i^1j^1}={\tt f}_{n,j+1}$ is $X_I^J$, the leading block of ${\tt f}_{i^2j^2}={\tt f}_{n-1,j-1}$ and $\tilde{\tt f}_{i^2j^2}={\tt f}_{i^3j^3}={\tt f}_{ij}$ is $X_{I'}^{J'}$, the leading block of ${\tt f}_{i^4j^4}={\tt f}_{n,j-1}$ is $X_{I''}^{J''}$, the leading block of $\tilde{\tt f}_{i^3j^3}={\tt f}_{1,k+1}$ with $k={\gamma^\ec}(j)$ is $Y_{\bar I}^{\bar J}$, and the leading block of $\tilde{\tt f}_{i^4j^4}={\tt f}_{1k}$ is $Y_{\bar I'}^{\bar J'}$.
The treatment of the last three terms in \eqref{xinay} and the last three terms in \eqref{xinax} remains the same as in Section \ref{typicase}. To proceed further, assume that $X_I^J=X_{I'}^{J'}=X_{I''}^{J''}$ and $Y_{\bar I}^{\bar J}=Y_{\bar I'}^{\bar J'}$. In this case it is more convenient to replace \eqref{dubneisum} with ${\tt D}_{ij}={\tt d}_{ij}^1-{\tt d}_{ij}^2+{\tt d}_{ij}^{43}-\tilde{\tt d}_{ij}^{43}$, where ${\tt d}_{ij}^{43}={\tt f}_{n,j-1}-{\tt f}_{ij}$ and $\tilde{\tt d}_{ij}^{43}={\tt f}_{1k}-{\tt f}_{1,k+1}$, so that the first three terms in ${\tt D}_{ij}$ are subject to the rules of Case 1, and the last term to the rules of Case 2.
The contributions of the third, ninth and tenth terms in Lemma \ref{zone1lemma} to any one of ${\tt d}^1_{ij}$, ${\tt d}^2_{ij}$ and ${\tt d}^{43}_{ij}$ vanish for the same reason as in Section \ref{typicase}. The same holds true for the contribution of the third term in Lemma \ref{zone2lemma} to $\tilde{\tt d}^{43}_{ij}$.
The first sum in the third term in \eqref{etaleta} contributes the same amount to ${\tt d}_{ij}^1$ and ${\tt d}_{ij}^2$, and zero to ${\tt d}_{ij}^{43}$. The same holds true for the first, second and the fifth terms in Lemma \ref{zone1lemma}. The second sum in the third term in \eqref{etaleta} vanishes since $\rho(L_t^2)$ for every $X$-block of ${\tt f}^2$ such that $\beta_t^2<\beta_p^1$ lies strictly to the left of the column $j-1$.
Further, ${\left(\L^1\nabla_{\L}^1\right)}_{\sigma(\bar K_t^2)}^{\sigma(\bar K_t^2)}$ in the second sum in the fourth term of \eqref{etareta} is an identity matrix, and hence the contribution of this sum to $\tilde{\tt d}^{43}_{ij}$ vanishes, since both sides in this difference depend only on ${\tt f}^2$. The same reasoning works as well for the first, the fourth and the fifth terms in Lemma \ref{zone2lemma}, and for the first sum in the fourth term of \eqref{etareta} in the case $\bar\beta_{t-1}^2>\bar\beta_{p-1}^1$. The contribution of this sum to $\tilde{\tt d}^{43}_{ij}$ for the case $\bar\beta_{t-1}^2=\bar\beta_{p-1}^1$ cancels the contribution of the second term in Lemma \ref{zone2lemma} for the case $\bar\alpha_{t-1}^2<\bar\alpha_{p-1}^1$.
Let us consider now the contribution of the fourth term in Lemma \ref{zone1lemma}. Assume that a $t$-th $X$-block of ${\tt f}^2$ satisfies conditions $\alpha_t^2>\alpha_p^1$ and $\beta_t^2=\beta_p^1$. Consequently, the $(t-1)$-th $Y$-block of ${\tt f}^2$ satisfies conditions $\bar\alpha_{t-1}^2\ge\bar\alpha_{p-1}^1$ and $\bar\beta_{t-1}^2=\bar\beta_{p-1}^1$. Consider first the case when the inequality above is strict. If the $Y$-block in question is not the leading block of ${\tt f}^2$, then the contributions of the $X$-block to ${\tt d}_{ij}^1[4]$ and ${\tt d}_{ij}^2[4]$ cancel each other, whereas the contribution of the $X$-block to ${\tt d}_{ij}^{43}[4]$ cancels the contribution of the $Y$-block to $\tilde{\tt d}^{43}_{ij}[2]$. The same holds true if the $Y$-block is the leading block of ${\tt f}^2$ and $\hat\jmath<{\gamma^\ec}(j)$. If $\hat\jmath={\gamma^\ec}(j)$ then the contributions of the $X$-block to ${\tt d}_{ij}^2[4]$ and ${\tt d}_{ij}^{43}[4]$ vanish, whereas the contribution of the $X$-block to ${\tt d}_{ij}^1[4]$ cancels the contribution of the $Y$-block to $\tilde{\tt d}^{43}_{ij}[2]$. Finally, if $\hat\jmath>{\gamma^\ec}(j)$ then all the above contributions vanish.
Otherwise, if $\bar\alpha_{t-1}^2=\bar\alpha_{p-1}^1$, the sixth, the seventh and the eighth terms in Lemma \ref{zone2lemma} contribute to both sides of $\tilde{\tt d}^{43}_{ij}$, since in both cases the exit point for ${\tt f}^2$ lies to the left of the exit point for ${\tt f}^1$. Consequently, the contributions of the sixth and the eighth terms vanish, while the contribution of the $Y$-block to $\tilde{\tt d}^{43}_{ij}[7]$ equals the total contribution of the $X$-block to ${\tt d}_{ij}^1[4]$, ${\tt d}_{ij}^2[4]$ and ${\tt d}_{ij}^{43}[4]$, similarly to the previous case.
Assume now that a $t$-th $X$-block of ${\tt f}^2$ satisfies conditions $\alpha_t^2=\alpha_p^1$ and $\beta_t^2=\beta_p^1$. We distinguish the following five cases.
A. $\hat\imath-\hat\jmath>n-j+1$; consequently, the sixth, the seventh and the eighth terms
in Lemma \ref{zone1lemma} do not contribute to ${\tt D}_{ij}$, since in all cases involved the exit point for ${\tt f}^2$ lies below the exit point for ${\tt f}^1$. Besides, $\bar\alpha_{t-1}^2\ge\bar\alpha_{p-1}^1$ and $\bar\beta_{t-1}^2=\bar\beta_{p-1}^1$. The treatment of this case is exactly the same as the treatment of the case
$\alpha_t^2>\alpha_p^1$ and $\beta_t^2=\beta_p^1$ above.
B. $\hat\imath-\hat\jmath=n-j+1$; consequently, $\bar\alpha_{t-1}^2=\bar\alpha_{p-1}^1$ and $\bar\beta_{t-1}^2=\bar\beta_{p-1}^1$. Similarly to case A, the sixth, the seventh and the eighth terms
in Lemma \ref{zone1lemma} do not contribute to ${\tt D}_{ij}$, since in all cases involved the exit point for ${\tt f}^2$ lies below or coincides with the exit point for ${\tt f}^1$. On the other hand, the sixth, the seventh and the eighth terms in Lemma \ref{zone2lemma} contribute only to the subtrahend of $\tilde{\tt d}^{43}_{ij}$, but not to the minuend. If the $Y$-block in question is not the leading block of ${\tt f}^2$ then the contributions of the $X$-block to ${\tt d}_{ij}^1[4]$ and ${\tt d}_{ij}^2[4]$ cancel each other, the contribution of the $X$-block to ${\tt d}_{ij}^{43}[4]$ equals one, while the contributions of the $Y$-block to $\tilde{\tt d}^{43}_{ij}[6]$, $\tilde{\tt d}^{43}_{ij}[7]$ and $\tilde{\tt d}^{43}_{ij}[8]$ are equal to $n+1-\bar\alpha_{t-1}^2-{\gamma^\ec}(j)$, ${\gamma^\ec}(j)-n$ and $\bar\alpha_{t-1}^2$, respectively. Consequently, the total contribution to ${\tt D}_{ij}$ vanishes. If the $Y$-block is the leading block of ${\tt f}^2$ then the contributions of the $X$-block to ${\tt d}_{ij}^2[4]$ and ${\tt d}_{ij}^{43}[4]$ vanish. Further, if $\hat\imath>1$ then the contribution of the $X$-block to ${\tt d}_{ij}^1[4]$ vanishes as well, whereas the contributions of the $Y$-block to $\tilde{\tt d}^{43}_{ij}[6]$, $\tilde{\tt d}^{43}_{ij}[7]$ and $\tilde{\tt d}^{43}_{ij}[8]$ are equal to $n+\hat\imath-\bar\alpha_{t-1}^2-\hat\jmath$, $\hat\jmath-n-1$ and $\bar\alpha_{t-1}^2+1-\hat\imath$, respectively. Consequently, the total contribution to ${\tt D}_{ij}$ vanishes. Finally, if $\hat\imath=1$ then the contribution of the $X$-block to ${\tt d}_{ij}^1[4]$ equals one, whereas the contributions of the $Y$-block to $\tilde{\tt d}^{43}_{ij}[6]$, $\tilde{\tt d}^{43}_{ij}[7]$ and $\tilde{\tt d}^{43}_{ij}[8]$ are equal to $n+1-\bar\alpha_{t-1}^2-{\gamma^\ec}(j)$, ${\gamma^\ec}(j)-n$ and $\bar\alpha_{t-1}^2$, respectively, and again the total contribution to ${\tt D}_{ij}$ vanishes.
C. $\hat\imath-\hat\jmath=n-j$; consequently, $\bar\alpha_{t-1}^2=\bar\alpha_{p-1}^1$ and $\bar\beta_{t-1}^2=\bar\beta_{p-1}^1$. Here the sixth, the seventh and the eighth terms
in Lemma \ref{zone2lemma} do not contribute to $\tilde{\tt d}^{43}_{ij}$, since in both cases involved the exit point for ${\tt f}^2$ lies to the right of or coincides with the exit point for ${\tt f}^1$. On the other hand, the sixth, the seventh and the eighth terms in Lemma \ref{zone1lemma} do not contribute to ${\tt d}_{ij}^1$, ${\tt d}_{ij}^2$ and to the subtrahend of ${\tt d}^{43}_{ij}$, but contribute to its minuend. If the $X$-block in question is not the leading block of ${\tt f}^2$ then its contributions to ${\tt d}_{ij}^1[4]$ and ${\tt d}_{ij}^2[4]$ cancel each other, and its contribution to ${\tt d}_{ij}^{43}[4]$ equals one. The contributions of this block to ${\tt d}^{43}_{ij}[6]$, ${\tt d}^{43}_{ij}[7]$ and ${\tt d}^{43}_{ij}[8]$ are equal to $\alpha_t^2-j$, $1$ and $j-2-\alpha_t^2$, respectively. Consequently, the total contribution to ${\tt D}_{ij}$ vanishes. The same holds true if this $X$-block is the leading block of ${\tt f}^2$ and $\hat\imath<n$. If $\hat\imath=n$, and hence $\hat\jmath=j$, then its contributions to ${\tt d}_{ij}^2[4]$ and ${\tt d}_{ij}^{43}[4]$ vanish, and the contribution to ${\tt d}_{ij}^1[4]$ equals one. The contributions of this block to ${\tt d}^{43}_{ij}[6]$, ${\tt d}^{43}_{ij}[7]$ and ${\tt d}^{43}_{ij}[8]$ are equal to $\alpha_t^2-j$, $1$ and $j-1-\alpha_t^2$, respectively. Consequently, the total contribution to ${\tt D}_{ij}$ equals one. If the $Y$-block in question is the leading block of ${\tt f}^2$ then the contributions of the $X$-block to ${\tt d}_{ij}^1[4]$, ${\tt d}_{ij}^2[4]$ and ${\tt d}_{ij}^{43}[4]$ vanish, as well as the contribution of the $Y$-block to ${\tt d}^{43}_{ij}[7]$, and the contributions of the $Y$-block to ${\tt d}^{43}_{ij}[6]$ and ${\tt d}^{43}_{ij}[8]$ cancel each other. Consequently, the total contribution to ${\tt D}_{ij}$ vanishes.
D. $\hat\imath-\hat\jmath=n-j-1$; consequently, $\bar\alpha_{t-1}^2\le\bar\alpha_{p-1}^1$ and $\bar\beta_{t-1}^2=\bar\beta_{p-1}^1$. Here the sixth, the seventh and the eighth terms in Lemma \ref{zone1lemma} do not contribute to ${\tt d}_{ij}^1$, but contribute to ${\tt d}_{ij}^2$ and ${\tt d}^{43}_{ij}$. Assume first that $\bar\alpha_{t-1}^2=\bar\alpha_{p-1}^1$; then the sixth, the seventh and the eighth terms
in Lemma \ref{zone2lemma} do not contribute to $\tilde{\tt d}^{43}_{ij}$, similarly to case C. If the $X$-block in question is not the leading block of ${\tt f}^2$ then its contributions to ${\tt d}_{ij}^1[4]$ and ${\tt d}_{ij}^2[4]$ cancel each other, and its contribution to ${\tt d}_{ij}^{43}[4]$ equals one. Further, its contributions to ${\tt d}_{ij}^2[6]$ and ${\tt d}_{ij}^{43}[6]$ vanish, and its contributions to ${\tt d}_{ij}^2[8]$ and ${\tt d}_{ij}^{43}[8]$ cancel each other. Finally, its contribution to ${\tt d}^2_{ij}[7]$ cancels the contribution to ${\tt d}_{ij}^{43}[4]$, and hence the total contribution to ${\tt D}_{ij}$ vanishes. The same holds true if the $X$-block is the leading block of ${\tt f}^2$ and $\hat\imath>n-1$. If $\hat\imath=n-1$, the contributions to ${\tt d}_{ij}^2[4]$ and ${\tt d}_{ij}^{43}[4]$ vanish and the contributions to ${\tt d}_{ij}^1[4]$ and ${\tt d}^2_{ij}[7]$ cancel each other. If $\hat\imath=n$, or if the $Y$-block in question is the leading block of ${\tt f}^2$, then all the above-mentioned contributions vanish. The case $\bar\alpha_{t-1}^2<\bar\alpha_{p-1}^1$ is similar; in addition to the above, the contribution of the $Y$-block to $\tilde{\tt d}_{ij}^{43}$ vanishes.
E. $\hat\imath-\hat\jmath<n-j-1$; consequently, $\bar\alpha_{t-1}^2\le\bar\alpha_{p-1}^1$ and $\bar\beta_{t-1}^2=\bar\beta_{p-1}^1$. This case is similar to the previous one, with the additional cancellation of the contributions to ${\tt d}_{ij}^1[7]$ and ${\tt d}_{ij}^1[8]$.
Therefore, the total contribution to ${\tt D}_{ij}$ vanishes in all cases except for the case $(\hat\imath,\hat\jmath)=(n,j)$, when it equals one; hence under the assumptions $X_I^J=X_{I'}^{J'}=X_{I''}^{J''}$ and $Y_{\bar I}^{\bar J}=Y_{\bar I'}^{\bar J'}$ relation \eqref{dubneisum} holds with $\lambda=1$. If these assumptions are violated, one has to consider correction terms similarly to Section \ref{typicase}.
\section{Regularity check and the toric action} \label{sec:regtor} The goal of this section is threefold:
(i) to check condition (ii) in Proposition \ref{regfun} for the family $F_{\bfG^{{\rm r}},\bfG^{{\rm c}}}$,
(ii) to prove Theorem \ref{genmainth}(iii), and
(iii) to prove Proposition \ref{frozen}.
\subsection{Regularity check} We have to prove the following statement.
\begin{theorem}\label{regneighbors} For any mutable cluster variable $f_{ij}\in F_{\bfG^{{\rm r}},\bfG^{{\rm c}}}$, the adjacent variable $f'_{ij}$ is a regular function on $\operatorname{Mat}_n$. \end{theorem}
\begin{proof} The main technical tool in the proof is the version of the Desnanot--Jacobi identity for minors of a rectangular matrix that we have used previously for the regularity check in \cite{GSVMem}. Let $A$ be an $(m-1)\times m$ matrix, let $\alpha<\beta<\gamma$ be column indices, and let $\delta$ be a row index; then \begin{equation} \label{notjacobi} \det A^{\hat \alpha} \det A^{\hat \beta \hat \gamma}_{\hat \delta} + \det A^{\hat \gamma} \det A^{\hat \alpha\hat \beta}_{\hat \delta} = \det A^{\hat \beta} \det A^{\hat \alpha \hat \gamma}_{\hat \delta}, \end{equation} where ``hatted'' subscripts and superscripts indicate deleted rows and columns, respectively.
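Identity \eqref{notjacobi} is easy to test numerically. The following snippet (an informal sanity check, not part of the formal argument; index names are ad hoc and 0-based) verifies it for a random $3\times 4$ integer matrix with column indices $\alpha<\beta<\gamma$ and row index $\delta$:

```python
import numpy as np

rng = np.random.default_rng(0)
m = 4
A = rng.integers(-5, 6, size=(m - 1, m)).astype(float)  # an (m-1) x m matrix

def minor(M, rows=(), cols=()):
    """Determinant of M with the given (0-based) rows and columns deleted."""
    return np.linalg.det(np.delete(np.delete(M, list(rows), axis=0),
                                   list(cols), axis=1))

# column indices alpha < beta < gamma and a row index delta (all 0-based)
a, b, g, d = 0, 1, 3, 0
lhs = (minor(A, cols=[a]) * minor(A, rows=[d], cols=[b, g])
       + minor(A, cols=[g]) * minor(A, rows=[d], cols=[a, b]))
rhs = minor(A, cols=[b]) * minor(A, rows=[d], cols=[a, g])
assert abs(lhs - rhs) < 1e-6  # the Desnanot-Jacobi identity holds
```

Since the identity is polynomial in the matrix entries, a check on random integer matrices of this kind is exact up to floating-point round-off.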
Let us assume first that the degree of $(i,j)$ equals six. Following the notation introduced in the previous section, denote by $f_{i^1j^1}$ and $\tilde f_{i^1j^1}$ the functions at the vertices to the north and to the east of $(i,j)$, respectively, by $f_{i^2j^2}$ and $\tilde f_{i^3j^3}$ the functions at the vertices to the north-west and to the south-east of $(i,j)$, respectively, and by $f_{i^4j^4}$ and $\tilde f_{i^4j^4}$ the functions at the vertices to the west and to the south of $(i,j)$, respectively. Let ${\mathcal L}$ be the matrix used to define $f_{i^2j^2}$, $f_{ij}$ and $\tilde f_{i^3j^3}$, ${\mathcal L}_+$ be the matrix used to define $f_{i^1j^1}$ and $\tilde f_{i^1j^1}$, and ${\mathcal L}_-$ be the matrix used to define $f_{i^4j^4}$ and $\tilde f_{i^4j^4}$.
Assume first that ${\operatorname{deg}} f_{ij}<{\operatorname{deg}} f_{i^1j^1}$. Define a ${\operatorname{deg}} f_{i^1j^1}\times({\operatorname{deg}} f_{i^1j^1}+1)$ matrix $A$ via $A=({\mathcal L}_+)_{[s(i^1,j^1),N({\mathcal L}_+)]}^{[s(i^1,j^1)-1,N({\mathcal L}_+)]}$. Then it is easy to see that ${\mathcal L}_{[s(i,j)-1,N({\mathcal L})]}^{[s(i,j)-1,N({\mathcal L})]}=A_{[1,{\operatorname{deg}} f_{ij}+1]}^{[1,{\operatorname{deg}} f_{ij}+1]}$, and moreover, that $A_{[1,{\operatorname{deg}} f_{ij}+1]}^{[1,{\operatorname{deg}} f_{ij}+1]}$ is a block in the block upper triangular matrix $A_{[1,{\operatorname{deg}} f_{i^1j^1}]}^{[1,{\operatorname{deg}} f_{i^1j^1}]}$. Consequently, \[ f_{i^1j^1}=\det A^{\hat 1}, \quad\tilde f_{i^1j^1}=\det A_{\hat 1}^{\hat 1\hat 2}, \quad f_{i^2j^2}\cdot\det B=\det A^{\hat m},
\quad f_{ij}\cdot\det B=\det A_{\hat 1}^{\hat 1\hat m} \] with $B=A_{[{\operatorname{deg}} f_{ij}+2,{\operatorname{deg}} f_{i^1j^1}]}^{[{\operatorname{deg}} f_{ij}+2,{\operatorname{deg}} f_{i^1j^1}]}$ and $m={\operatorname{deg}} f_{i^1j^1}+1$. Applying \eqref{notjacobi} with $\alpha=1$, $\beta=2$, $\gamma=m$, $\delta=1$, one gets \[ f_{i^1j^1}\cdot\det A_{\hat 1}^{\hat 2\hat m}+f_{i^2j^2}\cdot\det B\cdot\tilde f_{i^1j^1}= \det A^{\hat 2}\cdot f_{ij}\cdot\det B. \] Note that $\det A_{\hat 1}^{\hat 2\hat m}=\det \bar A_{\hat 1}^{\hat 2}\det B$ with $\bar A= A_{[1,{\operatorname{deg}} f_{ij}+1]}^{[1,{\operatorname{deg}} f_{ij}+1]}$, and hence \begin{equation}\label{firstdj} f_{i^1j^1}\det \bar A_{\hat 1}^{\hat 2}+f_{i^2j^2}\tilde f_{i^1j^1}=f_{ij}\det A^{\hat 2}. \end{equation}
Let now ${\operatorname{deg}} f_{ij}\ge{\operatorname{deg}} f_{i^1j^1}$. Define a $({\operatorname{deg}} f_{ij}+1)\times({\operatorname{deg}} f_{ij}+2)$ matrix $A$ via adding the column $(0,\dots,0,1)^T$ on the right to the matrix ${\mathcal L}_{[s(i,j)-1,N({\mathcal L})]}^{[s(i,j)-1,N({\mathcal L})]}$. Then it is easy to see that $({\mathcal L}_+)_{[s(i^1,j^1),N({\mathcal L}_+)]}^{[s(i^1,j^1),N({\mathcal L}_+)]}=A_{[1,{\operatorname{deg}} f_{i^1j^1}]}^{[2,{\operatorname{deg}} f_{i^1j^1}+1]}$, and moreover, that $A_{[1,{\operatorname{deg}} f_{i^1j^1}]}^{[2,{\operatorname{deg}} f_{i^1j^1}+1]}$ is a block in the block lower triangular matrix $A_{[1,{\operatorname{deg}} f_{ij}+1]}^{[2,{\operatorname{deg}} f_{ij}+2]}$. Consequently, \[ f_{i^1j^1}\cdot\det B=\det A^{\hat 1}, \quad\tilde f_{i^1j^1}\cdot\det B=\det A_{\hat 1}^{\hat 1\hat 2}, \quad f_{i^2j^2}=\det A^{\hat m}, \quad f_{ij}=\det A_{\hat 1}^{\hat 1\hat m} \] with $B=A_{[{\operatorname{deg}} f_{i^1j^1}+1,{\operatorname{deg}} f_{ij}+1]}^{[{\operatorname{deg}} f_{i^1j^1}+2,{\operatorname{deg}} f_{ij}+2]}$ and $m={\operatorname{deg}} f_{ij}+2$. Applying \eqref{notjacobi} with $\alpha=1$, $\beta=2$, $\gamma=m$, $\delta=1$, one gets \[ f_{i^1j^1}\cdot\det B\det \bar A_{\hat 1}^{\hat 2}+f_{i^2j^2}\cdot\tilde f_{i^1j^1}\cdot\det B= \det A^{\hat 2}\cdot f_{ij}, \] where $\bar A=A_{[1,{\operatorname{deg}} f_{ij}+1]}^{[1,{\operatorname{deg}} f_{ij}+1]}$ is the same as in the previous case. Note that $\det A^{\hat 2}=\det \tilde A^{\hat 2}\det B$, where $\tilde A=A_{[1,{\operatorname{deg}} f_{i^1j^1}]}^{[1,{\operatorname{deg}} f_{i^1j^1}+1]}$ is given by the same expression as the whole matrix $A$ in the previous case. Consequently, relation \eqref{firstdj} remains valid in this case as well.
To proceed further, we compare ${\operatorname{deg}} f_{ij}$ with ${\operatorname{deg}} f_{i^3,j^3}$ and consider two cases similar to the two cases above. Reasoning along the same lines, we arrive at the relation \begin{equation}\label{secdj} f_{ij}\det C_{\hat 1}^{\hat 2}+\tilde f_{i^3j^3}f_{i^4j^4}=\tilde f_{i^4j^4}\det \bar A_{\hat 1}^{\hat 2} \end{equation} with $C=({\mathcal L}_-)_{[s(i^4,j^4), N({\mathcal L}_-)]}^{[s(i^4,j^4), N({\mathcal L}_-)]}$ and $\bar A$ the same as in \eqref{firstdj}. The linear combination of \eqref{firstdj} and \eqref{secdj} with coefficients $\tilde f_{i^4j^4}$ and $f_{i^1j^1}$, respectively, yields \begin{equation}\label{onestep} f_{ij}(\tilde f_{i^4j^4}\det A^{\hat 2}-f_{i^1j^1}\det C_{\hat 1}^{\hat 2})=f_{i^2j^2}\tilde f_{i^1j^1}\tilde f_{i^4j^4}+ f_{i^1j^1}\tilde f_{i^3j^3}f_{i^4j^4}. \end{equation} Combining this with Theorem \ref{quiver} we see that $f_{ij}'=\tilde f_{i^4j^4}\det A^{\hat 2}-f_{i^1j^1}\det C_{\hat 1}^{\hat 2}$ is a regular function on $\operatorname{Mat}_n$.
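The elimination of $\det \bar A_{\hat 1}^{\hat 2}$ leading to \eqref{onestep} is pure algebra and can be double-checked symbolically. In the sketch below the symbol names are ad hoc stand-ins: \texttt{fij} for $f_{ij}$, \texttt{f1,f2,f4} for $f_{i^kj^k}$, \texttt{t1,t3,t4} for $\tilde f_{i^kj^k}$, and \texttt{dA2, dAbar, dC} for $\det A^{\hat 2}$, $\det\bar A_{\hat 1}^{\hat 2}$, $\det C_{\hat 1}^{\hat 2}$.

```python
import sympy as sp

fij, f1, f2, f4, t1, t3, t4, dA2, dAbar, dC = sp.symbols(
    'fij f1 f2 f4 t1 t3 t4 dA2 dAbar dC')

firstdj = f1 * dAbar + f2 * t1 - fij * dA2   # relation (firstdj), one side
secdj = fij * dC + t3 * f4 - t4 * dAbar      # relation (secdj),   one side
combo = sp.expand(t4 * firstdj + f1 * secdj) # the dAbar terms cancel

# relation (onestep), moved to one side
onestep = fij * (t4 * dA2 - f1 * dC) - (f2 * t1 * t4 + f1 * t3 * f4)
assert sp.expand(combo + onestep) == 0       # combo equals -(onestep)
```

This confirms that \eqref{onestep} is exactly the stated linear combination of \eqref{firstdj} and \eqref{secdj}.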
For vertices of degree less than six, the claim follows from the corresponding degenerate version of \eqref{onestep}. For example, for vertices of degree five there are three possible degenerations:
(i) ${\operatorname{deg}} f_{i^1j^1}=1$, and hence $\tilde f_{i^1j^1}=1$, which corresponds to the cases shown in Fig.~\ref{fig:1jnei}(b), Fig.~\ref{fig:innei}(c) and Fig.~\ref{fig:1nnei}(a);
(ii) ${\operatorname{deg}} f_{i^4j^4}=1$, and hence $\tilde f_{i^4j^4}=1$, which corresponds to the cases shown in Fig.~\ref{fig:i1nei}(b), Fig.~\ref{fig:njnei}(c) and Fig.~\ref{fig:n1nei}(a);
(iii) ${\operatorname{deg}} f_{ij}=1$, and hence $\tilde f_{i^3j^3}=1$, which corresponds to the cases shown in Fig.~\ref{fig:njnei}(b), Fig.~\ref{fig:innei}(b) and Fig.~\ref{fig:nnnei}(a).
Vertices of degrees four and three are handled via combining the above degenerations. \end{proof}
\subsection{Toric action}\label{sec:toric} To prove Theorem \ref{genmainth}(iii) we show first that the action of $\mathcal H_{\bfG^{{\rm r}}}\times\mathcal H_{\bfG^{{\rm c}}}$ on $SL_n$ given by the formula $(H_1,H_2)X=H_1 X H_2$ defines a global toric action of $({\mathbb C}^*)^{k_{\bfG^{{\rm r}}}+k_{\bfG^{{\rm c}}}}$ on ${\mathcal C}_{\bfG^{{\rm r}},\bfG^{{\rm c}}}$. To this end, we first check that the right hand sides of all exchange relations in one cluster are semi-invariants of this action. This statement can be expressed as follows.
\begin{lemma}\label{rlsemi} Let $f_{ij}(X)f_{ij}'(X)=M(X)$ be an exchange relation in the initial cluster, then $M(H_1XH_2)=\chi_L^M(H_1)M(X)\chi_R^M(H_2)$, where $\chi_L^M$ and $\chi_R^M$ are left and right multiplicative characters of $\mathcal H_{\bfG^{{\rm r}}}\times\mathcal H_{\bfG^{{\rm c}}}$ depending on $M$. \end{lemma}
\begin{proof} Notice first that all cluster variables in the initial cluster are semi-invariants of the action of $\mathcal H_{\bfG^{{\rm r}}}\times\mathcal H_{\bfG^{{\rm c}}}$. Indeed, recall that by \eqref{f_ij_gen},~\eqref{twof_ii} any cluster variable $f_{ij}$ in the initial cluster is a minor of a matrix ${\mathcal L}$ of size $N=N({\mathcal L})$. Clearly, minors are semi-invariants of the left-right action of the torus $\operatorname{Diag}_{N}\times \operatorname{Diag}_{N}$ on $\operatorname{Mat}_N$, where $\operatorname{Diag}_N$ is the group of invertible diagonal $N\times N$ matrices. We now construct two injective homomorphisms $r:\mathcal H_{\bfG^{{\rm r}}}\to \operatorname{Diag}_{N}\times \operatorname{Diag}_N$ and $c:\mathcal H_{\bfG^{{\rm c}}}\to \operatorname{Diag}_{N}\times \operatorname{Diag}_N$ such that the homomorphism $(r,c):\mathcal H_{\bfG^{{\rm r}}}\times\mathcal H_{\bfG^{{\rm c}}}\to \operatorname{Diag}_{N}\times \operatorname{Diag}_{N}$ given by $(r,c)(H_1,H_2)=r(H_1)\cdot c(H_2)$ extends the left-right action of $\mathcal H_{\bfG^{{\rm r}}}\times\mathcal H_{\bfG^{{\rm c}}}$ on $SL_n$ to an action on $\operatorname{Mat}_N$. Note that $\operatorname{Diag}_{N}\times \operatorname{Diag}_{N}$ is a commutative group, so $(r,c)$ is well-defined.
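The semi-invariance of minors used here amounts to the observation that the submatrix of $D_1AD_2$ with row set $R$ and column set $C$ equals $\operatorname{diag}(d_1|_R)\,A|_{R,C}\,\operatorname{diag}(d_2|_C)$, so the minor is multiplied by a character of the torus. A quick numerical illustration (not part of the proof; the size and index sets are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(1)
N = 5
A = rng.standard_normal((N, N))
d1 = rng.uniform(0.5, 2.0, N)
d2 = rng.uniform(0.5, 2.0, N)
B = np.diag(d1) @ A @ np.diag(d2)      # left-right action of Diag_N x Diag_N

rows, cols = [0, 2, 4], [1, 2, 3]      # an arbitrary 3x3 minor
mA = np.linalg.det(A[np.ix_(rows, cols)])
mB = np.linalg.det(B[np.ix_(rows, cols)])
char = np.prod(d1[rows]) * np.prod(d2[cols])  # value of the character
assert np.isclose(mB, char * mA)       # the minor is a semi-invariant
```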
We describe first the construction of the homomorphism $r$. Let $\Delta$ be a nontrivial row $X$-run, and $\bar\Delta={\gamma^\er}(\Delta)$ be the corresponding row $Y$-run. Recall that $\mathcal H_{\bfG^{{\rm r}}}=\exp\mathfrak h_{\bfG^{{\rm r}}}$. Consequently, it follows from \eqref{smalltorus} that for any fixed $T\in\mathcal H_{\bfG^{{\rm r}}}$ there exists a constant $g^{\rm r}_{\Delta}(T)\in {\mathbb C}^*$ such that for any pair of corresponding indices $i\in\Delta$ and $j\in\bar\Delta$ one has $T_{jj}=g^{\rm r}_\Delta(T)\cdot T_{ii}$. Clearly, $g^{\rm r}_\Delta$ is a multiplicative character of $\mathcal H_{\bfG^{{\rm r}}}$.
Fix a pair of blocks $X_{I_t}^{J_t}$ and $Y_{\bar I_t}^{\bar J_t}$ in ${\mathcal L}$. Let $\Delta_t$ be the row $X$-run corresponding to $\Phi_t$, then we put $g_t^{\rm r}=g_{\Delta_t}^{\rm r}$ and define a matrix $A_t^{{\rm r}}(T)\in \operatorname{Diag}_N$ such that its entry $(j,j)$ equals $g_t^{\rm r}(T)$ for $j\in \cup_{i=1}^{t-1}(K_i\cup\bar K_i)\cup(K_t\setminus\Phi_t)$ and~$1$ otherwise, and a matrix $B_t^{{\rm r}}(T)\in \operatorname{Diag}_N$ such that its entry $(j,j)$ equals $\left(g_t^{\rm r}(T)\right)^{-1}$ for $j\in \cup_{i=1}^{t-1}(L_i\cup\bar L_i)\cup L_t$ and~$1$ otherwise, see Fig.~\ref{fig:ladder}.
Put $A^{\rm r}(T)=\prod_{t=1}^s A_t^{\rm r}(T)$ and $B^{\rm r}(T)=\prod_{t=1}^s B_t^{\rm r}(T)$. Finally, for any $j\in [1,N]$ define $\zeta^{\rm r}(j)$ as the image of $j$ under the identification of $\bar K_t$ and $\bar I_t$ if $j\in \bar K_t$ and as the image of $j$ under the identification of $K_t$ and $I_t$ if $j\in K_t\setminus\Phi_t$, and put $C^{\rm r}(T)=\operatorname{diag}(T_{\zeta^{\rm r}(j),\zeta^{\rm r}(j)})_{j=1}^N$. Then, similarly to the proof of Lemma \ref{partrace}, one obtains ${\mathcal L}(T X,T Y)=A^{\rm r}(T)C^{\rm r}(T){\mathcal L}(X,Y)B^{\rm r}(T)$, and hence $r:T\mapsto (A^{\rm r}(T)C^{\rm r}(T),B^{\rm r}(T))$ is the desired homomorphism.
The construction of the homomorphism $c$ is similar, with $g_t^{\rm c}$ defined by the column $X$-run corresponding to $\Psi_t$, $A_t^{\rm c}(T)$ having $g_t^{\rm c}(T)$ as the entry $(j,j)$ for $j\in \cup_{i=1}^{t-1}(L_i\cup\bar L_i)\setminus\Psi_t$ and 1 otherwise, $B_t^{\rm c}(T)$ having $(g_t^{\rm c}(T))^{-1}$ as the entry $(j,j)$ for $j\in \cup_{i=1}^{t}(K_i\cup\bar K_i)$ and 1 otherwise, $A^{\rm c}(T)=\prod_{t=1}^s A_t^{\rm c}(T)$, $B^{\rm c}(T)=\prod_{t=1}^s B_t^{\rm c}(T)$, and $C^{\rm c}(T)=\operatorname{diag}(T_{\zeta^{\rm c}(j),\zeta^{\rm c}(j)})_{j=1}^N$, where $\zeta^{\rm c}(j)$ is the image of $j$ under the identification of $L_t$ and $J_t$ if $j\in L_t$, and the image of $j$ under the identification of $\bar L_t$ and $\bar J_t$ if $j\in \bar L_t\setminus\Psi_{t+1}$. Consequently, the desired homomorphism is given by $c:T\mapsto (A^{\rm c}(T),B^{\rm c}(T)C^{\rm c}(T))$.
We thus see that any minor $P$ of ${\mathcal L}$ is a semi-invariant of the left-right action of $\mathcal H_{\bfG^{{\rm r}}}\times\mathcal H_{\bfG^{{\rm c}}}$ on $SL_n$, and we can define multiplicative characters $\chi_L^P$ and $\chi_R^P$ as the products of the corresponding minors of $A^{\rm r}$, $A^{\rm c}$ and $C^{\rm r}$, or $B^{\rm r}$, $B^{\rm c}$ and $C^{\rm c}$, respectively.
To prove the lemma, we consider first the most general case when the degree of the vertex $(i,j)$ is 6. Then, borrowing notation from the proof of Theorem \ref{regneighbors}, \[
M(X)=\tilde f_{i^1 j^1}(X) f_{i^2 j^2}(X)\tilde f_{i^4 j^4}(X)+f_{i^1 j^1}(X)\tilde f_{i^3 j^3}(X) f_{i^4 j^4}(X). \]
It follows from \eqref{firstdj} that $\chi^{\tilde f_{i^1 j^1}}+\chi^{f_{i^2 j^2}}=\chi^{f_{i^1 j^1}}+\chi^{\det(\bar A_{\hat 1}^{\hat 2})}$, where $\chi$ means $\chi_L$ or $\chi_R$. Similarly, it follows from \eqref{secdj} that
$\chi^{\tilde f_{i^4 j^4}}+\chi^{\det(\bar A_{\hat 1}^{\hat 2})}=\chi^{f_{i^4 j^4}}+\chi^{\tilde f_{i^3 j^3}}$.
Adding to both sides of the first equality $\chi^{\tilde f_{i^4 j^4}}$, to both sides of the second equality
$\chi^{f_{i^1 j^1}}$ and adding these two equations together we obtain
\[ \chi^{\tilde f_{i^1 j^1}}+\chi^{f_{i^2 j^2}}+\chi^{\tilde f_{i^4 j^4}}=\chi^{f_{i^1 j^1}}+\chi^{\tilde f_{i^3 j^3}}+ \chi^{f_{i^4 j^4}}=\chi^M, \] which proves the assertion of the lemma.
Other cases are obtained from the general case by the same specializations (setting one or more of the functions above to be $1$) that were used in the proof of Theorem \ref{regneighbors}. This concludes the proof of the lemma.
\end{proof}
To complete the proof we have to show that any toric action on ${\mathcal C}_{\bfG^{{\rm r}},\bfG^{{\rm c}}}$ can be obtained in this way. To prove this claim, we first note that the dimension of $\mathcal H_{\bfG^{{\rm r}}}$ equals $k_{\bfG^{{\rm r}}}$, and the dimension of $\mathcal H_{\bfG^{{\rm c}}}$ equals $k_{\bfG^{{\rm c}}}$. Consequently, the construction of Lemma \ref{rlsemi} produces $k_{\bfG^{{\rm r}}}+k_{\bfG^{{\rm c}}}$ weight vectors that lie in the kernel of the exchange matrix corresponding to $Q_{\bfG^{{\rm r}},\bfG^{{\rm c}}}$, see \cite[Lemma 5.3]{GSVb}. Assume that there exists a vanishing nontrivial linear combination of these weight vectors; this would mean that all cluster variables remain invariant under the toric action induced by a nontrivial left-right action of $\mathcal H_{\bfG^{{\rm r}}}\times\mathcal H_{\bfG^{{\rm c}}}$ on $SL_n$. However, by Theorem \ref{laurent} below, every matrix entry of the initial matrix in $SL_n$ can be written as a Laurent polynomial in the cluster variables of the initial cluster. Hence, a generic matrix remains invariant under this nontrivial left-right action on $SL_n$, a contradiction. Note that the proof of Theorem \ref{laurent} does not use the results of Section \ref{sec:toric}.
\subsection{Proof of Proposition \ref{frozen}} (i) We will focus on the behavior of $\det{\mathcal L}(X,Y)$ under the right action of ${\mathcal D}_-={\mathcal D}_-^{{\rm c}}$. The left action of ${\mathcal D}_-^{{\rm r}}$ can be treated in a similar way. In fact, we will show that $\det{\mathcal L}(X,Y)$ is a semi-invariant of the right action of a larger subgroup of $D(GL_n)$. Let ${\mathcal P}_\pm$ be the parabolic subgroups in $SL_n$ that correspond to parabolic subalgebras \eqref{parabolics}, and let $\hat {\mathcal P}_\pm$ be the corresponding parabolic subgroups in $GL_n$. Elements of $\hat{\mathcal P}_+$ (respectively, $\hat{\mathcal P}_-$) are block upper (respectively, lower) invertible triangular matrices whose square diagonal blocks correspond to column $X$-runs (respectively, column $Y$-runs).
It follows from \eqref{d_-} that ${\mathcal D}_-$ is contained in a subgroup $\tilde{\mathcal D}_-$ of $\hat{\mathcal P}_+\times\hat{\mathcal P}_-$ defined by the property that every square diagonal block in the first component determined by a nontrivial column $X$-run $\Delta$ coincides with the square diagonal block in the second component determined by the corresponding nontrivial column $Y$-run. For $g=(g_1,g_2)\in \tilde{\mathcal D}_-$, consider the transformation of ${\mathcal L}(X,Y)$ under the action $(X,Y)\mapsto (X,Y)\cdot g$, in particular the transformation of the block column $L_t \cup \bar L_{t-1}$ as depicted in Fig.~\ref{fig:ladder}. In dealing with the block column we only need to remember that $(g_1,g_2)$ can be written as \[ (g_1,g_2) =\left ( \begin{bmatrix} A_{11} &A_{12} & A_{13}\\ 0& C & A_{23}\\ 0 & 0 & A_{33} \end{bmatrix} ,
\begin{bmatrix} B_{11} & 0 & 0\\ B_{21} & C & 0\\ B_{31}& B_{32}& B_{33} \end{bmatrix} \right ), \] where $A_{11}, A_{33}, B_{11} , B_{33}$ and $C$ are invertible and $C$ occupies rows and columns labeled by $\Delta(\beta_t)$ in $g_1$ and rows and columns labeled by $\bar\Delta(\bar\beta_{t-1})$ in $g_2$ (recall that both these runs correspond to $\Psi_t$). Then the effect of the transformation $(X,Y)\mapsto (X,Y)\cdot g$ on the block column is that it is multiplied on the right by an invertible matrix \[ \begin{bmatrix} A_{11} & A_{12} & 0\\ 0 & C & 0\\ 0 & B_{32}& B_{33} \end{bmatrix}. \] The cumulative effect on ${\mathcal L}(X,Y)$ is that it is transformed via a multiplication on the right by an invertible block diagonal matrix with blocks as above, and therefore $\det{\mathcal L}(X,Y)$ is transformed via a multiplication by the determinant of this matrix. The latter, being a product of powers of determinants of diagonal blocks of $g_1$ and $g_2$, is a character of $\tilde{\mathcal D}_-$, which proves the statement.
(ii) The claim follows from a more general statement: $\det{\mathcal L}(X,Y)$ is log-canonical with all matrix entries $x_{ij}$, $y_{ij}$ with respect to the Poisson bracket \eqref{sklyadoublegen} which, in our situation, takes the form \eqref{bracket}. Semi-invariance of $\det{\mathcal L}(X,Y)$ described in part (i) above, together with the fact that subalgebras ${\mathfrak d}_-={\mathfrak d}_-^r$ and ${\mathfrak d}_-'={\mathfrak d}_-^c$ are isotropic with respect to the bilinear form $\langle\langle \ , \ \rangle\rangle$ implies \[ {\raisebox{2pt}{$\bigtriangledown$}}\negthinspace^L f\in {\mathfrak d}_- \dot{+} \left ({\mathfrak d}_+ \cap \mathfrak h\oplus \mathfrak h\right ),\qquad {\raisebox{2pt}{$\bigtriangledown$}}\negthinspace^R f\in {\mathfrak d}'_- \dot{+} \left ({\mathfrak d}_+ \cap \mathfrak h\oplus \mathfrak h\right ) \] for $f=\log\det{\mathcal L}(X,Y)$. This means that in \eqref{sklyadoublegen} \[ R_D ( {\raisebox{2pt}{$\bigtriangledown$}}\negthinspace^L f) = - {\raisebox{2pt}{$\bigtriangledown$}}\negthinspace^L f + \pi_{{\mathfrak d}_+}\left ({\raisebox{2pt}{$\bigtriangledown$}}\negthinspace^L f \right )_0,\qquad R'_D ( {\raisebox{2pt}{$\bigtriangledown$}}\negthinspace^R f) = - {\raisebox{2pt}{$\bigtriangledown$}}\negthinspace^R f + \pi'_{{\mathfrak d}_+}\left ({\raisebox{2pt}{$\bigtriangledown$}}\negthinspace^R f \right )_0, \] where $\left (\ \right)_0$ denotes the natural projection to $D(\mathfrak h)=\mathfrak h \oplus \mathfrak h$ and $\pi_{{\mathfrak d}_+}, \pi'_{{\mathfrak d}_+}$ are projections to ${\mathfrak d}_+$ along ${\mathfrak d}_-,{\mathfrak d}'_-$ respectively. 
Due to the invariance of $\langle\langle \ , \ \rangle\rangle$, \eqref{sklyadoublegen} then reduces to \[ \{f,\varphi\}^D_{r,r'} = \frac{1}{2}\left (\langle\langle \pi_{{\mathfrak d}_+}({\raisebox{2pt}{$\bigtriangledown$}}\negthinspace^L f)_0, \left ({\raisebox{2pt}{$\bigtriangledown$}}\negthinspace{^L} \varphi \right )_0\rangle\rangle - \langle\langle \pi'_{{\mathfrak d}_+}({\raisebox{2pt}{$\bigtriangledown$}}\negthinspace^R f)_0, \left ({\raisebox{2pt}{$\bigtriangledown$}}\negthinspace^R \varphi \right )_0\rangle\rangle \right) \] for any $\varphi=\varphi(X,Y)$.
Let now $\varphi(X,Y) = \log x_{ij}$. Then $\left ({\raisebox{2pt}{$\bigtriangledown$}}\negthinspace{^L} \varphi \right )_0 = \left (e_{jj}, 0\right )$, $\left ({\raisebox{2pt}{$\bigtriangledown$}}\negthinspace{^R} \varphi \right )_0 = \left (e_{ii}, 0\right )$. Thus, to prove the desired claim we need to show that $\pi_{{\mathfrak d}_+}({\raisebox{2pt}{$\bigtriangledown$}}\negthinspace^L f)_0$ and $\pi'_{{\mathfrak d}_+}({\raisebox{2pt}{$\bigtriangledown$}}\negthinspace^R f)_0$ do not depend on $X,Y$. To this end, we first recall an explicit formula for $\pi_{{\mathfrak d}_+}$: \[ \pi_{{\mathfrak d}_+}(\xi,\eta) = \left (\xi - R_+(\xi -\eta) ,\xi - R_+(\xi -\eta) \right ), \] which can be easily derived using the property $R_+ - R_- = {\operatorname {Id}}$ satisfied by R-matrices \eqref{r-matrix}. Since in our situation the left gradient ${\raisebox{2pt}{$\bigtriangledown$}}\negthinspace^L f$ computed with respect to $\langle\langle \ , \ \rangle\rangle$ is equal to $ \left (\nabla_X f \cdot X, - \nabla_Y f\cdot Y \right )$, we conclude that components of $\pi_{{\mathfrak d}_+}({\raisebox{2pt}{$\bigtriangledown$}}\negthinspace^L f)_0$ are equal to $\left(\nabla_X f \cdot X - R_+\left (E_L f \right ) \right)_0$, where $\left (\ \right)_0$ now means the projection to the diagonal in $\mathfrak g\mathfrak l_n$. By \eqref{term1left}, \eqref{for_frozen}, \eqref{R0}, \begin{multline*} \left(\nabla_X f \cdot X - R_+\left (E_L f \right ) \right)_0 = \frac 1 2 \left (-\frac {1}{1-\gamma} \left (\xi_L f\right )_0 + \frac{1} {1 - \gamma^*}\left (\eta_L f\right )_0\right ) \\ + \frac 1 n \left(\operatorname{Tr}(E_L f){\mathbf S} - \operatorname{Tr} \left((E_L f) {\mathbf S}\right )\mathbf 1\right). \end{multline*} By \eqref{infinv2}, Corollary \ref{traces} and \eqref{traceELS}, the right hand side above is constant. 
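The explicit formula for $\pi_{{\mathfrak d}_+}$ can be illustrated on one standard choice of R-matrices on $\mathfrak g\mathfrak l_n$, namely $R_+=\pi_{\ge 0}$ (the upper triangular part including the diagonal) and $R_-=-\pi_{<0}$ (minus the strictly lower triangular part), which satisfy $R_+-R_-={\operatorname{Id}}$. The sketch below (a numerical check under this assumed choice, not the paper's setting verbatim) confirms that $\xi-R_+(\xi-\eta)=\eta-R_-(\xi-\eta)$, i.e., that $\pi_{{\mathfrak d}_+}(\xi,\eta)$ indeed lies in the diagonally embedded ${\mathfrak d}_+$:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 4
xi, eta = rng.standard_normal((n, n)), rng.standard_normal((n, n))

R_plus = lambda z: np.triu(z)        # upper triangular part, incl. diagonal
R_minus = lambda z: -np.tril(z, -1)  # minus the strictly lower part
zeta = rng.standard_normal((n, n))
assert np.allclose(R_plus(zeta) - R_minus(zeta), zeta)  # R_+ - R_- = Id

z = xi - eta
proj1 = xi - R_plus(z)               # first component of pi_{d_+}(xi, eta)
proj2 = eta - R_minus(z)             # second component
assert np.allclose(proj1, proj2)     # equal, so the image lies in d_+
```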
The constancy of $\pi'_{{\mathfrak d}_+}({\raisebox{2pt}{$\bigtriangledown$}}\negthinspace^R f)_0$ and the case of $\varphi(X,Y) = \log y_{ij}$ can be treated similarly. This completes the proof.
\section{Proof of Theorem \ref{genmainth}(ii)} \label{sec:induction}
As explained in Section \ref{outline}, we have to prove the following statement.
\begin{theorem} \label{laurent} Every matrix entry can be written as a Laurent polynomial in the initial cluster $F_{\bfG^{{\rm r}},\bfG^{{\rm c}}}$ and in any cluster adjacent to it. \end{theorem}
Below we implement the strategy of the proof outlined in Section \ref{outline}.
\subsection{Proof of Theorem \ref{prototype} and its analogs}\label{matrixmaps} Given an aperiodic pair $(\bfG^{{\rm r}}, \bfG^{{\rm c}})$ and a non-trivial row $X$-run $\Delta^{{\rm r}}$, we want to explore the relation between cluster structures ${\mathcal C}={\mathcal C}_{\bfG^{{\rm r}},\bfG^{{\rm c}}}$ and $\tilde{\mathcal C}={\mathcal C}_{\tilde\bfG^{{\rm r}},\bfG^{{\rm c}}}$, where $\tilde\bfG^{{\rm r}}=\tilde\bfG^{{\rm r}}(\overrightarrow{\Delta}^{\rm r})$ is obtained by deletion of the rightmost root in $\Delta^{\rm r}$ and its image in $\gamma(\Delta^{\rm r})$. Note that the pair $(\tilde\bfG^{{\rm r}}(\overrightarrow{\Delta}^{\rm r}),\bfG^{{\rm c}})$ remains aperiodic.
Assume that $\Delta^{\rm r}$ is $[p+1, p+k]$, and the corresponding row $Y$-run $\gamma(\Delta^{\rm r})$ is $[q+1, q+k]$. Then, in considering $(\tilde\bfG^{{\rm r}}(\overrightarrow{\Delta}^{\rm r}),\bfG^{{\rm c}})$, we replace the former one with $[p+1, p+k-1]$, and the latter one with $[q+1, q+k-1]$. Moreover, a trivial row $X$-run $[p+k,p+k]$ and a trivial row $Y$-run $[q+k,q+k]$ are added. The rest of the row $X$- and $Y$-runs as well as all column $X$- and $Y$-runs remain unchanged. In what follows, parameters $p$, $q$ and $k$ are assumed to be fixed.
We say that a matrix ${\mathcal L}\in {\mathbf L}$ is $r$-{\em piercing\/} for an $r\in [2,k]$ if $\mathcal J(p+r,1)=({\mathcal L},s_r)$ for some $s_r\in [1,N({\mathcal L})]$. Note that two distinct matrices cannot be simultaneously $r$-piercing. On the other hand, a matrix can be $r$-piercing simultaneously for several distinct values of $r$; the set of all such values is called the {\it piercing set\/} of ${\mathcal L}$. If a piercing set consists of $r_1,\dots,r_l$, we will assume that $s_{r_1}>\dots > s_{r_l}$. The subset of all matrices in ${\mathbf L}$ that are not $r$-piercing for any $r\in [2,k]$ is denoted ${\mathbf L}_\varnothing$.
Let $\tilde{\mathbf L}={\mathbf L}_{\tilde\bfG^{{\rm r}} (\overrightarrow{\Delta}^{\rm r}),\bfG^{{\rm c}}}$, $\tilde\mathcal J=\mathcal J_{\tilde\bfG^{{\rm r}} (\overrightarrow{\Delta}^{\rm r}),\bfG^{{\rm c}}}$, and let the functions ${\tilde{\tt f}}_{ij}(X,Y)$ and $\tilde f_{ij}(X)$ be defined via the same expressions as ${{\tt f}}_{ij}(X,Y)$ and $f_{ij}(X)$ with ${\mathbf L}$ and $\mathcal J$ replaced by $\tilde{\mathbf L}$ and $\tilde\mathcal J$. It is convenient to restate Theorem \ref{prototype} in more detail as follows.
\begin{theorem} \label{matrixmap1} Let $Z=(z_{ij})$ be an $n\times n$ matrix. Then there exists a unipotent upper triangular $n\times n$ matrix $U(Z)$ whose entries are rational functions in $z_{ij}$ with denominators equal to powers of $\tilde f_{p+k,1}(Z)$ such that for $X=U(Z) Z$ and for any $i,j\in [1,n]$, $$ f_{ij} (X) =\begin{cases} \tilde f_{ij} (Z)\tilde f_{p+k,1} (Z)\quad &\text{if $\mathcal J (i,j) = ({\mathcal L}^*, s)$ and $s< s_k$},\\ \tilde f_{ij} (Z)\quad &\text{otherwise}, \end{cases} $$ where ${\mathcal L}^*$ is the $k$-piercing matrix in ${\mathbf L}$. \end{theorem}
\begin{proof} In what follows we assume that $i\ne j$, since for $i=j$ the claim of the theorem is trivial.
For any ${\mathcal L}(X,Y) \in {\mathbf L}$, define $\tilde{\mathcal L}(X,Y)$ to be the matrix obtained from ${\mathcal L}(X,Y)$ by removing the last row from every building block of the form $Y_{[1,q+k]}^{\bar J}$. In particular, if ${\mathcal L}(X,Y)$ has no building blocks of this form, then $\tilde{\mathcal L}(X,Y)={\mathcal L}(X,Y)$.
Note that all matrices $\tilde {\mathcal L}$ defined above are irreducible except for the one obtained from the $k$-piercing matrix ${\mathcal L}^*$. The corresponding matrix $\tilde{\mathcal L}^*$ has two irreducible diagonal blocks $\tilde{\mathcal L}_{1}^*$, $\tilde{\mathcal L}_{2}^*$ of sizes $s_k-1$ and $N({\mathcal L}^*) - s_k+1$, respectively. As was already noted in Section \ref{outline}, all maximal alternating paths in $G_{{\Gamma^{\rm r}},{\Gamma^{\rm c}}}$ are preserved in $G_{\tilde\bfG^{{\rm r}} (\overrightarrow{\Delta}^{\rm r}),\bfG^{{\rm c}}}$ except for the path that goes through the directed inclined edge $(p+k-1)\to (q+k-1)$. The latter one is split into two: the initial segment up to the vertex $p+k-1$ and the closing segment starting with the vertex $q+k-1$. Consequently, $\tilde{{\mathbf L}}=\{\tilde {\mathcal L}{:\ } {\mathcal L}\in {\mathbf L}, {\mathcal L}\ne {\mathcal L}^*\}\cup \{\tilde{\mathcal L}_{1}^*,\tilde{\mathcal L}_{2}^*\}$.
Further, if $\mathcal J (i,j) = ({\mathcal L}, s)$ and ${\mathcal L}\ne {\mathcal L}^*$ then $\tilde{\mathcal J}(i,j) =(\tilde{\mathcal L}, s)$. Furthermore, if ${\mathcal L} \in {\mathbf L}_\varnothing$ then additionally ${{\tt f}}_{ij}(X,Y)$ and ${\tilde {{\tt f}}}_{ij}(X,Y)$ coincide. However, if $\mathcal J (i,j) = ({\mathcal L}^*, s)$ then \begin{equation*} \tilde{\mathcal J}(i,j) =\begin{cases} (\tilde{\mathcal L}_{1}^*, s) \quad& \text{for $s=s(i,j) < s_k$}, \\
(\tilde{\mathcal L}_{2}^*, s - s_k +1) \quad& \text{for $s=s(i,j) \geq s_k$}. \end{cases} \end{equation*}
It follows from the above discussion that the claim of the theorem is an immediate corollary of the equalities \begin{equation}\label{formeri}
\det {\mathcal L}(X,X)_{[s, N({\mathcal L})]}^{[s, N({\mathcal L})]} = \det \tilde {\mathcal L}(Z,Z)_{[s, N({\mathcal L})]}^{[s, N({\mathcal L})]} \end{equation} for any ${\mathcal L} \in \mathbf L$ and $s\in [1, N({\mathcal L})]$.
To prove \eqref{formeri}, we select a particular ``shape'' for $U(Z)$. Let \begin{equation} \label{mm0} U_0=U_0(Z) = \mathbf 1_n + \sum_{\varkappa=1}^{k-1} \alpha_\varkappa(Z) e_{q+\varkappa,q+k}, \end{equation} where $\alpha_\varkappa(Z)$ are coefficients to be determined, and \begin{equation} \label{mm1} U=U(Z) ={\stackrel {\leftarrow} {\prod}_{i\geq 0}}\exp(i{\gamma^\er}) (U_0(Z)). \end{equation} Due to the nilpotency of ${\gamma^\er}$ on $\mathfrak n_+$, the product above is finite. Clearly, if $\alpha_\varkappa(Z)$ are polynomials in $z_{ij}$ divided by a power of $\tilde f_{p+k,1}$ then the same is true for the entries of $U(Z)$.
The invariance property \eqref{2.1} implies that for every $(i,j)$, $$ {{\tt f}}_{ij}(U Z,U Z) = {{\tt f}}_{ij}(Z,\exp({\gamma^\er})(U^{-1})U Z) = {{\tt f}}_{ij} (Z,U_0 Z); $$ here the second equality follows from \eqref{mm1}. Thus, to prove \eqref{formeri} for $X= U Z$ it is sufficient to select parameters $\alpha_\varkappa(Z)$ in \eqref{mm0} in such a way that \begin{equation} \label{mm2}
\det {\mathcal L}(Z, U_0 Z)_{[s, N({\mathcal L})]}^{[s, N({\mathcal L})]} = \det \tilde {\mathcal L}(Z,Z)_{[s, N({\mathcal L})]}^{[s, N({\mathcal L})]}\ \end{equation}
for all ${\mathcal L} \in \mathbf L$ and $s\in [1, N({\mathcal L})]$.
Observe that the equation above is satisfied for any choice of $\alpha_\varkappa$ if
${\mathcal L} \in {\mathbf L}_\varnothing$, that is, if ${\mathcal L}(X,Y)=\tilde {\mathcal L}(X,Y)$. Indeed, in this case any $Y$-block in ${\mathcal L}$ either does not contain any of the rows $q+1, \ldots, q+k$, or contains all of them but without an overlap with the $X$-block to the right. If the former is true, the block rows corresponding to this $Y$-block in ${\mathcal L}(Z, U_0 Z)$ and ${\mathcal L}(Z, Z)$ coincide, while if the latter is true, then the block of $k$ rows under consideration in ${\mathcal L}(Z, U_0 Z)$ is obtained from the corresponding block row of ${\mathcal L}(Z, Z)$ via left multiplication by a $k\times k$ unipotent upper triangular matrix $ \mathbf 1_k + \sum_{\varkappa=1}^{k-1} \alpha_\varkappa(Z) e_{\varkappa k}$, which does not affect trailing principal minors.
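For the record, the effect of such a left multiplication on trailing principal minors can be verified directly: for a unipotent upper triangular $V$, each row $i$ of $VA$ is a linear combination of the rows $j\geq i$ of $A$, so the rows of $VA$ indexed by $[s,N]$ involve only the rows of $A$ indexed by $[s,N]$, whence
\[
(VA)_{[s,N]}^{[s,N]} = V_{[s,N]}^{[s,N]}\, A_{[s,N]}^{[s,N]}, \qquad
\det (VA)_{[s,N]}^{[s,N]} = \det A_{[s,N]}^{[s,N]},
\]
since $\det V_{[s,N]}^{[s,N]} = 1$.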
Let us now turn to matrices ${\mathcal L} \in {\mathbf L}\setminus{\mathbf L}_\varnothing$. In fact, the same reasoning as above shows that for any such matrix, the functions in the left hand side of \eqref{mm2}
do not change if ${\mathcal L}(Z, U_0 Z)$ is replaced by $\hat{\mathcal L}(Z,U_0 Z)$ obtained from ${\mathcal L}(Z,Z)$ via replacing every $Y$-block $Z_{[1,q+k]}^{\bar J}$ by $\left (U_0 Z\right )_{[1,q+k]}^{\bar J}$ and retaining all other $Y$-blocks $Z_{\bar I}^{\bar J}$. Therefore, in what follows we aim at proving \begin{equation} \label{mm2r}
\det \hat{\mathcal L}(Z, U_0 Z)_{[s, N({\mathcal L})]}^{[s, N({\mathcal L})]} = \det \tilde {\mathcal L}(Z,Z)_{[s, N({\mathcal L})]}^{[s, N({\mathcal L})]} \end{equation}
for all ${\mathcal L} \in {\mathbf L}\setminus{\mathbf L}_{\varnothing}$ and $s\in [1, N({\mathcal L})]$.
Assume that ${\mathcal L}={\mathcal L}(X,Y)$ is $r$-piercing, and so there exists $s_r \in [1, N({\mathcal L})]$ such that ${\mathcal L}(X,Y)_{s_r s_r} = x_{p+r,1}$; the $X$-block of ${\mathcal L}(X,Y)$ that contains the diagonal entry $(s_r,s_r)$ is denoted $X^{J^r}_{[p+1,n]}$. We can decompose $\hat{\mathcal L}=\hat{\mathcal L}(Z,U_0 Z)$ into blocks as follows: \begin{equation} \label{uglymatrix} \hat{\mathcal L}(Z,U_0 Z)=\begin{bmatrix} \hat A^r_1 & 0\\ \hat A^r_2 & \hat B^r_1\\ 0 & \hat B^r_2 \end{bmatrix}, \end{equation} where the sizes of block rows are $s_r-r$, $k$ and $N({\mathcal L})-s_r-k+r$, and the sizes of block columns are $s_r-1$ and $N({\mathcal L})-s_r+1$. Note that the blocks are given by \begin{equation*} \hat A^r_1=\begin{bmatrix} \ast & \ast\\ 0 & (U_0 Z)^{\bar J^r}_{[1,q]} \end{bmatrix}, \qquad \hat A^r_2= \begin{bmatrix} 0 & (U_0 Z)^{\bar J^r}_{[q+1,q+k]} \end{bmatrix} \end{equation*} and \begin{equation*} \hat B^r_1= \begin{bmatrix} Z^{J^r}_{[p+1,p+k]} & 0 \end{bmatrix}, \qquad \hat B^r_2=\begin{bmatrix} Z^{J^r}_{[p+k+1,n]} & 0\\ \ast & \ast \end{bmatrix}. \end{equation*}
It will be convenient to combine $\hat A^r_1$ and $\hat A^r_2$ into one $(s_r+k-r)\times(s_r-1)$ block $\hat A^r$, and $\hat B^r_1$ and $\hat B^r_2$ into one $\theta_r\times(\theta_r-r+1)$ block $\hat B^r$ with $\theta_r=N({\mathcal L})-s_r+r$. A similar decomposition into blocks of the same size for $\tilde{\mathcal L}=\tilde{\mathcal L}(Z,Z)$ contains blocks $\tilde A^r_1$, $\tilde A^r_2$, $\tilde B^r_1$ and $\tilde B^r_2$ that may be combined into $\tilde A^r$ and $\tilde B^r$, respectively; consequently, the last row of $\tilde A^r_2$ (and hence of $\tilde A^r$) is zero. Note that since exactly one matrix in ${\mathbf L}\setminus{\mathbf L}_\varnothing$ is $r$-piercing for any fixed $r$, notation $\hat A^r$, $\hat B^r$, and $\tilde A^r$, $\tilde B^r$ is unambiguous.
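As a consistency check of the sizes in \eqref{uglymatrix} and of the combined blocks, note that
\[
(s_r-r) + k + \bigl(N({\mathcal L})-s_r-k+r\bigr) = N({\mathcal L}), \qquad
(s_r-1) + \bigl(N({\mathcal L})-s_r+1\bigr) = N({\mathcal L}),
\]
and $\hat B^r$ indeed has $k + \bigl(N({\mathcal L})-s_r-k+r\bigr) = \theta_r$ rows and $N({\mathcal L})-s_r+1 = \theta_r-r+1$ columns.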
Denote the column set of the second block column in \eqref{uglymatrix} by $M_r$. Let \begin{equation} \label{alpha} \alpha_\varkappa (Z) = \frac{\det (\tilde {\mathcal L}^*)^{M_k}_{(M_k\setminus \{s_k\})\cup \{s_k + \varkappa-k \} }} {\det (\tilde{\mathcal L}^*)_{M_k}^{M_k}}, \quad \varkappa =1,\ldots, k; \end{equation} note that $\alpha_k =1$. We claim that $U_0(Z)$ given by \eqref{mm0} and \eqref{alpha} satisfies conditions \eqref{mm2r}. Note that the denominator in \eqref{alpha} equals $\tilde f_{p+k,1}(Z)$, and hence the denominators of the entries of $U(Z)$ defined by \eqref{mm1} are powers of $\tilde f_{p+k,1}(Z)$.
Assume that the piercing set of ${\mathcal L}$ is $\{r_1,\dots,r_l\}$; additionally, set $s_{r_{l+1}}=1$. Recall that $Y$-blocks of the form $Z_{[1,q+k]}^{\bar J}$ do not appear in the columns $M_{r_1}$ in $\hat{\mathcal L}$, and hence \eqref{mm2r} is trivially satisfied for $ s \geq s_{r_1}$.
For $s_{r_2}\le s \le s_{r_1}-1$, we are in the situation covered by Lemma \ref{blockmatrix} below with ${\mathcal M}=\hat{\mathcal L}^{M_{r_2}}_{M_{r_2}}$, $\tilde{\mathcal M}=\tilde{\mathcal L}^{M_{r_2}}_{M_{r_2}}$, $N=\theta_{r_2}-r_2+1$, $N_2=\theta_{r_1}-r_1+1$, and $k_1=r_1-1$. Condition (iii) in the lemma is satisfied trivially, since in this case $B=\tilde B$. Consequently, \eqref{mm2r} is satisfied if the parameters $\alpha_\varkappa=\alpha_{\varkappa}(Z)$ satisfy equations \begin{equation} \label{eqs_r} \sum_{\varkappa \in S} (-1)^{\varepsilon_{\varkappa S}} \alpha_\varkappa \det (\tilde B^{r_1})_{(S\setminus\{\varkappa \})\cup [k+1, \theta_{r_1}]}=0 \end {equation} for any $(k-r_1+2)$-element subset $S$ in $[1, k]$ such that $k\in S$, where \[ \varepsilon_{\varkappa S} = \# \{ i\in S : i > \varkappa\}. \]
If $l=1$, there are no other conditions on the parameters $\alpha_\varkappa$, since $s_{r_2}=1$. Otherwise, let $s_{r_3}\le s \le s_{r_2}-1$ and consider the block decomposition \eqref{uglymatrix} for $r=r_2$. We claim that the situation is now covered by Lemma \ref{blockmatrix} with ${\mathcal M}=\hat{\mathcal L}^{M_{r_3}}_{M_{r_3}}$, $\tilde{\mathcal M}=\tilde{\mathcal L}^{M_{r_3}}_{M_{r_3}}$, $N=\theta_{r_3}-r_3+1$, $N_2=\theta_{r_2}-r_2+1$, and $k_1=r_2-1$. To check condition (iii) in the lemma, we pick an arbitrary subset $T\subset [s_{r_2}-r_2+1,s_{r_2}-r_2+k]$ of size $k-r_2+1$ and apply Lemma \ref{blockmatrix} to matrices ${\mathcal M}=\hat{\mathcal L}^{T\cup M_{r_2}\setminus[s_{r_2},s_{r_2}-r_2+k]}_{M_{r_2}}$ and $\tilde{\mathcal M}=\tilde{\mathcal L}^{T\cup M_{r_2}\setminus[s_{r_2},s_{r_2}-r_2+k]}_{M_{r_2}}$ with parameters $N=\theta_{r_2}-r_2+1$, $N_2=\theta_{r_1}-r_1+1$, and $k_1=r_1-1$. It follows that the condition in question is guaranteed by the same equations \eqref{eqs_r}. Consequently, by Lemma~\ref{blockmatrix}, equations \eqref{mm2r} for $s_{r_3}\le s \le s_{r_2}-1$ are guaranteed by equations \eqref{eqs_r} with $r_1$ replaced by $r_2$.
Continuing in the same fashion, we conclude that if conditions \begin{equation} \label{eqs_rr} \sum_{\varkappa \in S} (-1)^{\varepsilon_{\varkappa S}} \alpha_\varkappa \det (\tilde B^{r})_{(S\setminus\{\varkappa \})\cup [k+1, \theta_{r}]}=0 \end {equation} are satisfied for any $r\in \{r_1,\dots,r_l\}$ and any $(k-r+2)$-element subset $S$ in $[1, k]$ containing $k$, then \eqref{mm2r} holds for any $s\in [1, N({\mathcal L})]$. It remains to show that \eqref{eqs_rr} are valid with $\alpha_\varkappa$ defined in \eqref{alpha}.
Rewrite \eqref{alpha} as \begin{equation} \label{alphaB} \alpha_\varkappa (Z) = \frac{ \det (\tilde B^k)_{\{\varkappa\}\cup [k+1,\theta_k]}} { \det (\tilde B^k)_{ [k, \theta_k]} }, \quad \varkappa =1,\ldots, k. \end{equation} If $r=k$, and hence ${\mathcal L}={\mathcal L}^*$, then every $S$ in \eqref{eqs_rr} is a two-element set $\{ \varkappa, k\}$ with $\varkappa \in [1,k-1]$, $\varepsilon_{\varkappa S}=1$, $\varepsilon_{k S}=0$. Plugging \eqref{alphaB} into the left hand side of \eqref{eqs_rr} and clearing denominators, we obtain two terms that differ only by sign, and thus the claim follows.
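For the reader's convenience, this cancellation can be spelled out: for $S=\{\varkappa,k\}$ the left hand side of \eqref{eqs_rr} equals $-\alpha_\varkappa \det (\tilde B^k)_{[k,\theta_k]} + \alpha_k \det (\tilde B^k)_{\{\varkappa\}\cup [k+1,\theta_k]}$, and substituting \eqref{alphaB} while using $\{k\}\cup[k+1,\theta_k]=[k,\theta_k]$ and $\alpha_k=1$ yields
\[
-\det (\tilde B^k)_{\{\varkappa\}\cup [k+1,\theta_k]} + \det (\tilde B^k)_{\{\varkappa\}\cup [k+1,\theta_k]} = 0.
\]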
For $r<k$, we need to evaluate \begin{equation} \label{uzhas} \sum_{\varkappa \in S} (-1)^{\varepsilon_{\varkappa S}} \det (\tilde B^k)_{\{\varkappa\}\cup [k+1, \theta_k]} \det (\tilde B^r)_{(S\setminus\{\varkappa \})\cup [k+1, \theta_r]}. \end {equation} Note that the blocks $Z_{[p+1, n]}^{J^k}$ and $Z_{[p+1, n]}^{J^r}$ have the same row set, and the exit point of the former lies below the exit point of the latter. Consequently, $J^k\subseteq J^r$, and the first of the blocks is a submatrix of the second one. Therefore, we find ourselves in a situation similar to the one discussed in Section \ref{case1} above while analyzing sequences \eqref{blocks} of blocks. Reasoning along the same lines, we either arrive at the cases (ii) and (iii) in Section \ref{case1}, and then \begin{equation} \label{compareB1} \tilde B^k = \left [ \begin{array}{ccc } U_1 & U_2 & 0\\ 0 & V_1 & V_2 \end{array} \right ], \qquad \tilde B^r = \left [ \begin{array}{@{}ccccc@{} } U_1& U_2 & U_3 & U_4 & 0\\ 0 &0 & 0 & W_1 & W_3 \end{array} \right ], \end{equation} where odd block columns and the second block row of $\tilde B^k$ and $\tilde B^r$ might be empty, or at the cases (i) and (iv) in Section \ref{case1}, and then \begin{equation} \label{compareB2} \tilde B^k = \left [ \begin{array}{cc } U_1 & 0\\ U_2 & 0\\ U_3 & 0\\ U_4 & V_1\\ 0 & V_2 \end{array} \right ],\qquad \tilde B^r =\left [ \begin{array}{cc } U_1 & 0\\ U_2 & W_1\\ 0 & W_2 \end{array} \right ], \end{equation} where odd block rows and the second block column of $\tilde B^k$ and $\tilde B^r$ might be empty. In particular, if $\tilde B^k$ is a submatrix of $\tilde B^r$ (cf.~case (iv) in Section \ref{case1}) then \eqref{compareB1} applies with an empty second block row and third block column in the expression for $\tilde B^k$. 
Similarly, if $\tilde B^r$ is a submatrix of $\tilde B^k$ (cf.~case (iii) in Section \ref{case1}) then \eqref{compareB2} applies with an empty second block column and third block row in the expression for $\tilde B^r$.
Suppose \eqref{compareB1} is the case. Define $\tau_4>\tau_3\geq \tau_2>\tau_1\geq\tau_0=0$ and $\sigma>0$ so that the size of the block $U_i$ equals $\sigma \times ( \tau_i - \tau_{i-1})$ for $1\leq i\leq 4$. Note that $\sigma \geq n-p \geq k$ and $\sigma > \tau_3$. We will use the Laplace expansion of the minors in \eqref{uzhas} with respect to the first block row: \begin{equation}\label{decomp} \begin{aligned}
\det (\tilde B^k)_{\{\varkappa\}\cup [k+1, \theta_k]} &
=\sum\limits_{\Theta} (-1)^{\varepsilon_\Theta} \det (\tilde B^k)_{\{\varkappa\}\cup [k+1, \sigma]}^{[1, \tau_1]\cup \Theta} \det (\tilde B^k)_{[\sigma+1, \theta_k]}^{\bar\Theta\cup [\tau_2 +1, \theta_k-k+1]}, \\
\det (\tilde B^r)_{(S\setminus\{\varkappa \})\cup [k+1, \theta_r]}&
= \sum\limits_{\Xi} (-1)^{\varepsilon_\Xi} \det (\tilde B^r)_{(S\setminus\{\varkappa \})\cup [k+1, \sigma]}^{[1, \tau_3]\cup \Xi} \det (\tilde B^r)_{[\sigma+1, \theta_r]}^{\bar\Xi\cup [\tau_4 +1, \theta_r-r+1]}. \end{aligned} \end{equation}
Here the first sum runs over all $\Theta \subset [\tau_1+1, \tau_2]$ such that $|\Theta|=\sigma - \tau_1-k+1$, and $\bar\Theta$ is the complement of $\Theta$ in $[\tau_1+1, \tau_2]$; the second sum runs over all
$\Xi \subset [\tau_3+1, \tau_4]$ such that $|\Xi|=\sigma - \tau_3-r+1$, and $\bar\Xi$ is the complement of $\Xi$ in $[\tau_3+1, \tau_4]$; $\varepsilon_\Theta$ and $\varepsilon_\Xi$ depend only on $\Theta$ and $\Xi$, respectively, and $[k+1,\sigma]$ is empty if $\sigma = k$. Plug \eqref{decomp} into \eqref{uzhas} and note that for any fixed pair $\Theta$, $\Xi$, the coefficient at $$ \det (\tilde B^k)_{[\sigma+1, \theta_k]}^{\bar\Theta\cup [\tau_2 +1, \theta_k-k+1]} \det (\tilde B^r)_{[\sigma+1, \theta_r]}^{\bar\Xi\cup [\tau_4 +1, \theta_r-r+1]} $$ is equal to \begin{equation} \label{uzhas?} (-1)^{\varepsilon_\Theta+\varepsilon_\Xi}\sum_{\varkappa \in S} (-1)^{\varepsilon_{\varkappa S}} \det (\tilde B^r)_{\{\varkappa\}\cup [k+1, \sigma]}^{[1, \tau_1]\cup \Theta}
\det (\tilde B^r)_{(S\setminus\{\varkappa \})\cup [k+1, \sigma]}^{[1, \tau_3]\cup \Xi}, \end {equation} since the upper left $\sigma\times \tau_2$ blocks of $\tilde B^r$ and $\tilde B^k$ coincide. Observe that $[1, \tau_1]\cup\Theta \subset [1,\tau_3]$, and hence \eqref{uzhas?} is equal to the left-hand side of the Pl\"ucker relation \eqref{plu2} with $A=\tilde B^r$, $I=S$, $J=[k+1,\sigma]$, $L= [1, \tau_1]\cup\Theta$ and $M = \left ( [1, \tau_3]\cup \Xi\right ) \setminus \left ([1, \tau_1]\cup\Theta\right )$. Thus \eqref{uzhas?} vanishes for any $\Theta$, $\Xi$, and so \eqref{uzhas} is zero in the case \eqref{compareB1}. The case \eqref{compareB2} can be treated similarly: using the Laplace expansion with respect to the first block column, one concludes that \eqref{uzhas} is zero. This proves that with $\alpha_\varkappa$ defined by \eqref{alpha}, all conditions \eqref{eqs_rr} are satisfied, and therefore \eqref{mm2r} is valid, which completes the proof of the theorem.
\end{proof}
As explained in Section \ref{outline}, we also need a version of Theorem \ref{prototype} relating ${\mathcal C}={\mathcal C}_{\bfG^{{\rm r}},\bfG^{{\rm c}}}$ and $\tilde{\mathcal C}={\mathcal C}_{\tilde\bfG^{{\rm r}},\bfG^{{\rm c}}}$, where $\tilde\bfG^{{\rm r}}=\tilde\bfG^{{\rm r}}(\overleftarrow{\Delta}^{\rm r})$ is obtained by the deletion of the leftmost root in $\Delta^{\rm r}$ and its image in $\gamma(\Delta^{\rm r})$. The treatment of this case follows the same strategy as above. Once again, we assume that the non-trivial row $X$-run that corresponds to $\Delta^{{\rm r}}\subset\Gamma^{{\rm r}}_1$ is $[p+1, p+k]$, and the corresponding row $Y$-run is $[q+1, q+k]$. This time, in considering $(\tilde\bfG^{{\rm r}},\bfG^{{\rm c}})$, we replace the former one with $[p+2, p+k]$, and the latter one with $[q+2, q+k]$, and add a trivial row $X$-run $[p+1,p+1]$ and a trivial row $Y$-run $[q+1,q+1]$. The remaining nontrivial row $X$- and $Y$-runs as well as all column $X$- and $Y$-runs remain unchanged. In what follows, parameters $p$, $q$ and $k$ are assumed to be fixed.
Let $\tilde{\mathbf L}={\mathbf L}_{\tilde\bfG^{{\rm r}}(\overleftarrow{\Delta}^{\rm r}),\bfG^{{\rm c}}}$, $\tilde\mathcal J=\mathcal J_{\tilde\bfG^{{\rm r}}(\overleftarrow{\Delta}^{\rm r}),\bfG^{{\rm c}}}$, and let the functions ${\tt \tilde f}_{ij}(X,Y)$ and $\tilde f_{ij}(X)$ be defined via the same expressions as ${\tt f}_{ij}(X,Y)$ and $f_{ij}(X)$ with ${\mathbf L}$ and $\mathcal J$ replaced by $\tilde{\mathbf L}$ and $\tilde\mathcal J$. A suitable version of Theorem \ref{prototype} can be stated as follows.
\begin{theorem} \label{matrixmap2} Let $Z=(z_{ij})$ be an $n\times n$ matrix. Then there exists a unipotent upper triangular $n\times n$ matrix $U(Z)$ whose entries are rational functions in $z_{ij}$ with denominators equal to powers of $\tilde f_{p+2,1}(Z)$ such that for $X=U(Z) Z$ and for any $i,j\in [1,n]$, $$ f_{ij} (X) =\begin{cases} {\tilde f_{ij} (Z)}{\tilde f_{p+2,1} (Z)}\quad&\text{if $\mathcal J (i,j) = ({\mathcal L}^*, s)$ and $s< s_2$},\\ \tilde f_{ij} (Z)\quad &\text{otherwise}, \end{cases} $$ where ${\mathcal L}^*\in{\mathbf L}$ is the $2$-piercing matrix in ${\mathbf L}$. \end{theorem}
\begin{proof} Our approach is similar to that in the proof of Theorem \ref{matrixmap1}.
For any ${\mathcal L}(X,Y) \in {\mathbf L}$, define $\tilde{\mathcal L}(X,Y)$ to be the matrix obtained from ${\mathcal L}(X,Y)$ by removing the first row from every building block of the form $X_{[p+1,n]}^{J}$. In particular, if ${\mathcal L}(X,Y)$ has no building blocks of this form, then $\tilde{\mathcal L}(X,Y)={\mathcal L}(X,Y)$.
Similarly to the previous case, all matrices $\tilde {\mathcal L}$ defined above are irreducible except for the one obtained from the $2$-piercing matrix ${\mathcal L}^*$. The corresponding matrix $\tilde{\mathcal L}^*$ has two irreducible diagonal blocks $\tilde{\mathcal L}_{1}^*$, $\tilde{\mathcal L}_{2}^*$ of sizes $s_2-1$ and $N({\mathcal L}^*) - s_2+1$, respectively. As was already noted in Section \ref{outline}, all maximal alternating paths in $G_{{\Gamma^{\rm r}},{\Gamma^{\rm c}}}$ are preserved in $G_{\tilde\bfG^{{\rm r}} (\overleftarrow{\Delta}^{\rm r}),\bfG^{{\rm c}}}$ except for the path that goes through the directed inclined edge $(p+1)\to (q+1)$. The latter one is split into two: the initial segment up to the vertex $p+1$ and the closing segment starting with the vertex $q+1$. Consequently, $\tilde{{\mathbf L}}=\{\tilde {\mathcal L}{:\ } {\mathcal L}\in {\mathbf L}, {\mathcal L}\ne {\mathcal L}^*\}\cup \{\tilde{\mathcal L}_{1}^*,\tilde{\mathcal L}_{2}^*\}$.
As before, if $\mathcal J (i,j) = ({\mathcal L}, s)$ and ${\mathcal L}\ne {\mathcal L}^*$ then $\tilde{\mathcal J}(i,j) =(\tilde{\mathcal L}, s)$. Furthermore, if ${\mathcal L} \in {\mathbf L}_\varnothing$ then additionally ${\tt f}_{ij}(X,Y)$ and ${\tilde {\tt f}}_{ij}(X,Y)$ coincide. However, if $\mathcal J (i,j) = ({\mathcal L}^*, s)$ then \[ \tilde{\mathcal J}(i,j) = \begin{cases} (\tilde{\mathcal L}_{1}^*, s)\quad& \text{for $s=s(i,j) < s_2$},\\
(\tilde{\mathcal L}_{2}^*, s - s_2 +1)\quad &\text {for $s=s(i,j) \geq s_2$}. \end{cases} \]
It follows from the above discussion that the claim of the theorem is an immediate corollary of the equalities \eqref{formeri} for any ${\mathcal L} \in {\mathbf L}$ and $s\in [1, N({\mathcal L})]$.
Let \begin{equation}\label{mm02} U_0(Z) = \mathbf 1_n + \sum_{\varkappa=2}^{k} \alpha_\varkappa e_{q+1,q+\varkappa} \end{equation} and \begin{equation*} U(Z) ={\stackrel {\leftarrow} {\prod}_{t\geq 0}}\gamma^t (U_0(Z)). \end{equation*} As before, the invariance property \eqref{2.1} allows us to reduce the problem to selecting parameters $\alpha_\varkappa=\alpha_\varkappa(Z)$ such that the analog of \eqref{mm2} with $U_0(Z)$ given by \eqref{mm02} is satisfied for all ${\mathcal L} \in {\mathbf L}$ and $s\in [1, N({\mathcal L})]$.
Once again, this relation is satisfied for any choice of $\alpha_\varkappa$ if
${\mathcal L} \in {\mathbf L}_\varnothing$, that is, if ${\mathcal L}(X,Y)=\tilde {\mathcal L}(X,Y)$, while for matrices ${\mathcal L} \in {\mathbf L}\setminus{\mathbf L}_\varnothing$ one has to replace ${\mathcal L}(Z, U_0 Z)$ by the matrix $\hat{\mathcal L}(Z,U_0 Z)$ similar to the one defined in the proof of Theorem~\ref{matrixmap1}. Therefore, in what follows we aim at proving the analog of \eqref{mm2r}
for all ${\mathcal L} \in {\mathbf L}\setminus{\mathbf L}_{\varnothing}$ and $s\in [1, N({\mathcal L})]$.
We can again use decomposition \eqref{uglymatrix} for $\hat{\mathcal L}$ and $\tilde {\mathcal L}$, except that now $\tilde B^r_1$ is obtained from $\hat B^r_1$ by replacing the first row with zeros, whereas the last row of $\tilde A_2^r$ remains as is, unlike in the previous case. Consequently, for $s\geq s_{r_1}$ the analog of \eqref{mm2r} is satisfied trivially.
For $s_{r_2}\le s \le s_{r_1}-1$, we are in the situation covered by Lemma \ref{blockmatrix2} with ${\mathcal M}=\hat{\mathcal L}^{M_{r_2}}_{M_{r_2}}$, $\tilde{\mathcal M}=\tilde{\mathcal L}^{M_{r_2}}_{M_{r_2}}$, $N=\theta_{r_2}-r_2+1$, $N_2=\theta_{r_1} -{r_1}+1$, and $k_1=r_1-1$. Condition (iv) in the lemma is satisfied trivially, since in this case $B_{[N_1-k_1+2,N]}=\tilde B_{[N_1-k_1+2,N]}$. Consequently, the analog of \eqref{mm2r} holds true if the parameters $\alpha_\varkappa=\alpha_{\varkappa}(Z)$ satisfy equations \begin{equation} \label{eqs_r2} \sum_{\varkappa \in [1,k]\setminus S} (-1)^{\varepsilon_{\varkappa S}} \alpha_\varkappa \det (\hat B^{r_1})_{S\cup\{\varkappa \}\cup [k+1, \theta_{r_1}]}=0 \end {equation} for any $(k-r_1)$-element subset $S$ in $[2, k]$.
Continuing in the same way as in the proof of Theorem~\ref{matrixmap1} and using Lemma~\ref{blockmatrix2} instead of Lemma~\ref{blockmatrix}, we conclude that if conditions \begin{equation} \label{condition_lemma22} \sum_{\varkappa \in [1,k]\setminus S} (-1)^{\varepsilon_{\varkappa S}} \alpha_\varkappa \det (\hat B^r)_{S\cup\{\varkappa \}\cup [k+1, \theta_r]} = 0 \end {equation} are satisfied for any $r\in \{r_1,\dots,r_l\}$ and any $(k-r)$-element subset $S$ in $[2, k]$, then the analog of \eqref{mm2} holds for any $s\in [1, N({\mathcal L})]$.
In particular, when $r=2$, and hence ${\mathcal L}={\mathcal L}^*$, every $S$ in \eqref{condition_lemma22} is obtained by removing a single index $\varkappa$ from $[2,k]$. Therefore, the sum in the left hand side of \eqref{condition_lemma22} is taken over a two-element set $\{1,\varkappa\}$ with $\varkappa\in [2,k]$. Since $\varepsilon_{1S}=k-2$ and $\varepsilon_{\varkappa S}=k-\varkappa$, $\alpha_\varkappa$ is determined uniquely as \begin{equation} \label{alphaB2} \alpha_\varkappa (Z) = (-1)^{\varkappa-1} \frac{ \det (\hat B^2)_{[1,\theta_2]\setminus\{\varkappa\}}} { \det (\hat B^2)_{ [2,\theta_2]}}, \quad \varkappa =1,\ldots, k. \end{equation} Therefore \eqref{condition_lemma22} is equivalent to \begin{equation} \label{snovauzhas} \sum_{\varkappa \in [1,k]\setminus S} (-1)^{\varepsilon_{\varkappa S} +\varkappa} \det (\hat B^2)_{[1,\theta_2]\setminus\{\varkappa\}} \det (\hat B^r)_{S\cup\{\varkappa \}\cup [k+1, \theta_r]} = 0. \end {equation}
Denote $\bar S=[1,k]\setminus S$; then $\varepsilon_{\varkappa S}+\varepsilon_{\varkappa\bar S}=k-\varkappa$, and hence \eqref{snovauzhas} can be rewritten as \begin{equation*} (-1)^k\sum_{\varkappa \in \bar S} (-1)^{\varepsilon_{\varkappa\bar S}} \det (\hat B^2)_{(\bar S\setminus \{\varkappa\}) \cup S \cup [k+1,\theta_2]} \det (\hat B^r)_{\{\varkappa \}\cup S\cup [k+1, \theta_r]} = 0. \end {equation*} The latter equation is similar to \eqref{uzhas} in the proof of Theorem \ref{matrixmap1}, and the current proof can be completed in exactly the same way taking into account that the denominator in \eqref{alphaB2} equals $\tilde f_{p+2,1}(Z)$. \end{proof}
There are two more versions of Theorem \ref{prototype} relating the cluster structures ${\mathcal C}_{\bfG^{{\rm r}},\bfG^{{\rm c}}}$ and ${\mathcal C}_{\bfG^{{\rm r}},\tilde\bfG^{{\rm c}}}$, where $\tilde\bfG^{{\rm c}}=\tilde\bfG^{{\rm c}}(\overrightarrow{\Delta}^{\rm c})$ or $\tilde\bfG^{{\rm c}}=\tilde\bfG^{{\rm c}}(\overleftarrow{\Delta}^{\rm c})$ for a nontrivial column $X$-run $\Delta^{\rm c}$. They are obtained easily from Theorems \ref{matrixmap1} and~\ref{matrixmap2} via the involution \[ {\mathbf L}_{\bfG^{{\rm r}},\bfG^{{\rm c}}} \ni {\mathcal L}(X,Y) \mapsto {\mathcal L}(Y^T,X^T)^T\in {\mathbf L}_{\mathbf \Gamma^{\rm c}_{\rm opp},\mathbf \Gamma^{\rm r}_{\rm opp}}, \] where $\mathbf \Gamma_{\rm opp}=(\Gamma_2,\Gamma_1, \gamma^{-1}: \Gamma_2\to\Gamma_1)$ is the {\em opposite\/} BD triple to $\mathbf \Gamma=(\Gamma_1,\Gamma_2,\gamma:\Gamma_1\to\Gamma_2)$. Consequently, $X$ is obtained from $Z$ via multiplication by a lower triangular matrix, and the distinguished function $\tilde f_v(Z)$ equals $\tilde f_{1,q+k}(Z)$ for $\tilde\bfG^{{\rm c}}=\tilde\bfG^{{\rm c}}(\overrightarrow{\Delta}^{\rm c})$ and equals $\tilde f_{1,q+2}(Z)$ for $\tilde\bfG^{{\rm c}}=\tilde\bfG^{{\rm c}}(\overleftarrow{\Delta}^{\rm c})$.
\subsection{Handling adjacent clusters}\label{adjcl}
Let us continue the comparison of cluster structures ${\mathcal C}={\mathcal C}_{\bfG^{{\rm r}},\bfG^{{\rm c}}}$ and $\tilde{\mathcal C}={\mathcal C}_{\tilde\bfG^{{\rm r}},\bfG^{{\rm c}}}$, where $\tilde\bfG^{{\rm r}}=\tilde\bfG^{{\rm r}}(\overrightarrow{\Delta}^{\rm r})$. Recall that the corresponding initial quivers $Q$ and $\tilde Q$ differ as follows. The vertex $v=(p+k,1)$ is frozen in $\tilde Q$, but not in $Q$. Three of the edges incident to the vertex $(p+k,1)$ in $Q$---the one connecting it to the vertex $(p+k-1,1)$ and the two connecting it to the vertices $({\gamma^\er}(p+k-1),n)$ and $({\gamma^\er}(p+k-1)+1,n)$---are absent in $\tilde Q$ (in more detail, the neighborhood of $v$ in $Q$ looks as shown in Fig.~\ref{fig:i1nei}(b), Fig.~\ref{fig:n1nei}(a), or Fig.~\ref{fig:n1nei}(b), while the neighborhood of $v$ in $\tilde Q$ looks as shown in Fig.~\ref{fig:i1nei}(d), Fig.~\ref{fig:n1nei}(c), or Fig.~\ref{fig:n1nei}(d), respectively).
As explained in Section \ref{outline}, we have to establish an analog of Theorem~\ref{prototype} for the fields ${\mathcal F}'={\mathbb C}(\varphi_{11},\dots,\varphi'_u,\dots,\varphi_{nn})$ and $\tilde{\mathcal F}'={\mathbb C}(\tilde\fy_{11},\dots,\tilde\fy'_u,\dots,\tilde\fy_{nn})$ and the map $T': {\mathcal F}'\to\tilde{\mathcal F}'$ given by \begin{equation}\label{Tprime} T'(\varphi_{ij})=\begin{cases} T(\varphi_{ij}) \quad &\text{for $(i,j)\ne u$,}\\
\tilde\fy'_u\tilde\fy^{\lambda_u}_v \quad & \text{for $(i,j)=u$}
\end{cases} \end{equation} for some integer $\lambda_u$, where $T: {\mathcal F}\to \tilde {\mathcal F}$ is the map constructed in Theorem \ref{matrixmap1}. The map $U:{\mathcal X}\to{\mathcal Z}$ is also borrowed from Theorem \ref{matrixmap1}, so condition b) in Theorem~\ref{prototype} holds true. Condition c) follows immediately from \eqref{Tprime}. Condition a) reads $\tilde f'\circ T'=U\circ f'$.
Recall that cluster mutation formulas provide isomorphisms $\mu: {\mathcal F}'\to{\mathcal F}$ and $\tilde\mu:\tilde {\mathcal F}'\to\tilde{\mathcal F}$ such that $f'=f\circ\mu$ and $\tilde f'=\tilde f\circ\tilde\mu$. Consequently, condition a) above would follow from $\tilde\mu\circ T'=T\circ\mu$. The latter statement can be reformulated as follows.
\begin{proposition} \label{phi-tilde-phi} Let $\tilde\psi$ be the cluster variable in ${\mathcal C}(\tilde Q, \tilde\fy)$ obtained via a sequence of mutations at vertices $(i_1,j_1), \ldots, (i_N,j_N)$ in $\tilde Q$ avoiding $v$, and let $\psi$ be a cluster variable in ${\mathcal C}(Q, \varphi)$ obtained via the same sequence of mutations in $Q$. Then $\psi = \tilde \psi \tilde \varphi^{\lambda_u}_{v}$ for some integer $\lambda_u$. \end{proposition}
\begin{proof} Define a quiver $Q_v$ by freezing the vertex $v$ in $Q$ and retaining all the edges from $v$ to non-frozen vertices. Then any sequence of mutations in $Q$ avoiding $v$ translates into a sequence of mutations in $Q_v$, and all the resulting cluster variables in ${\mathcal C}(Q,\varphi)$ and ${\mathcal C}(Q_v,\varphi)$ coincide. We will use a statement that describes the relation between cluster variables in two cluster structures whose initial quivers are ``almost the same''. That is, there is a bijection between the vertices of these quivers that restricts to a bijection between the subsets of frozen vertices, and under this bijection the two quivers differ only in edges incident to one specified frozen vertex.
\begin{lemma}\cite[Lemma 8.4]{GSVMem} \label{MishaSha} Let ${\widetilde{B}}$ and $B$ be integer $n\times (n+m)$ matrices that differ in the last column only. Assume that there exist $\tilde w, w\in{\mathbb C}^{n+m}$ such that ${\widetilde{B}}\tilde w=Bw=0$ and $\tilde w_{n+m}=w_{n+m}=1$. Then for any cluster $(x_1',\dots,x_{n+m}')$ in ${\mathcal C}({\widetilde{B}})$ there exists a collection of numbers $\lambda_i'$, $i\in [1,n+m]$, such that $x_i' x_{n+m}^{\lambda_i'}$ satisfy exchange relations of the cluster structure ${\mathcal C}(B)$. In particular, for the initial cluster $\lambda_i=w_i-\tilde w_i$, $i\in [1,n+m]$. \end{lemma}
In our current situation, ${\widetilde{B}}$ and $B$ are adjacency matrices of quivers $\tilde Q$ and $Q_v$, respectively. The last columns of ${\widetilde{B}}$ and $B$ correspond to the frozen vertex $(p+k,1)$. To establish the claim of Proposition \ref{phi-tilde-phi}, we just need to define appropriate weights $\tilde w$ and $w$ and to show that for any non-frozen vertex $(i,j)$, $\lambda_{ij}= w_{ij}-\tilde w_{ij}$ coincides with the exponent of $\tilde f_{p+k,1}(Z)$ in the right-hand side of the expression for $f_{ij}(X)$ in Theorem \ref{matrixmap1}.
Put $\tilde d_{ij}={\operatorname{deg}}\tilde f_{ij}(Z)$ and $d_{ij}={\operatorname{deg}} f_{ij}(X)$. A direct check proves that the vectors $\tilde d=(\tilde d_{ij})$ and $d=(d_{ij})$ satisfy relations ${\widetilde{B}}\tilde d=Bd=0$. Besides, $\tilde d_v=d_v=\delta$, and hence vectors $\tilde w=\frac1\delta\tilde d$ and $w=\frac1\delta d$ satisfy the conditions of Lemma \ref{MishaSha}. Moreover, $\tilde d_{ij}$ and $d_{ij}$ coincide for any $f_{ij}$ that is a minor of ${\mathcal L}\ne{\mathcal L}^*$, or a minor of ${\mathcal L}^*$ with $s(i,j)\ge s_k$. If $f_{ij}$ is a minor of ${\mathcal L}^*$ with $s(i,j)< s_k$ then $d_{ij}-\tilde d_{ij}=\delta$. Consequently, $\lambda_{ij}$ satisfies the required condition. \end{proof}
\subsection{Base of induction: the case $|\Gamma^{\rm r}_1|+|\Gamma^{\rm c}_1|=1$}
It suffices to consider the case $|\Gamma^{\rm r}_1|=1$, $|\Gamma^{\rm c}_1|=0$; the other case can then be treated via taking the opposite BD triple. In this case all the reasoning exhibited in Sections \ref{matrixmaps} and \ref{adjcl} is still valid, so to complete the proof we only need to check that every matrix element $x_{\alpha\beta}$ can be expressed as a Laurent polynomial in terms of cluster variables in the cluster $\mu_v(F)$. We will do this directly.
Let $\bfG^{{\rm r}} =( \{p\}, \{q\}, p\mapsto q)$ with $q\ne p$ and $\bfG^{{\rm c}} =\varnothing$. The functions forming the initial cluster $F_{\bfG^{{\rm r}}, \varnothing}$ are $f_{ij}(X)= \det X_{[i,n]}^{[j, n-i +j]}$ for $i\geq j$, $f_{ij}(X)= \det X^{[j,n]}_{[i, n-j +i]}$ for $i < j$, $j-i \ne n-q$, and $f_{i,n-q+i}(X)= \det {\mathcal L}_{[i,N]}^{[i,N]}$ for $i\in [1, q]$, where $N=n-p+q$ and the $N\times N$ matrix ${\mathcal L}$ is given by \begin{equation} \label{easyL} {\mathcal L}=\begin{bmatrix} X_{[1,q-1]}^{[n-q+1,n]} & 0\\ X_{[q,q+1]}^{[n-q+1,n]} & X_{[p,p+1]}^{[1,n-p]} \\ 0 & X_{[p+2,n]}^{[1,n-p]} \end{bmatrix}. \end{equation} These last $q$ functions distinguish $F_{\bfG^{{\rm r}}, \varnothing}$ from $F_{\varnothing, \varnothing}$ that forms an initial cluster for the standard cluster structure on $GL_n$. Also, the function $f_{p+1,1}(X)= \det X_{[p+1,n]}^{[1, n-p]}$ is a frozen variable in ${\mathcal C}_{\varnothing, \varnothing}$, but is mutable in ${\mathcal C}_{\bfG^{{\rm r}}, \varnothing}$. The mutation at $v=(p+1,1)$ transforms $f_{p+1,1}(X)$ into \begin{equation}\label{newfv} \begin{aligned} f'_{p+1,1}(X)&= \frac { f_{p1}(X) f_{p+2,2 }(X) f_{q+1,n}(X) + f_{p+1,2 }(X) f_{qn}(X)}{f_{p+1,1}(X)}\\ &= \det \begin{bmatrix} X_{[q,q+1]}^{[n]} & X_{[p,p+1]}^{[2,n-p+1]} \\ 0 & X_{[p+2,n]}^{[2,n-p+1]} \end{bmatrix} \end{aligned} \end{equation} with $f_{p+2,2}(X)=1$ in case $p=n-1$, see Fig.~\ref{fig:i1nei}(b) and~\ref{fig:n1nei}(b). The last equality follows from the short Pl\"ucker relation based on columns $1,2,3, n-p+3$ applied to the $(n-p+1)\times (n-p+3)$ matrix \[ \begin{bmatrix} \begin{array}{c} 1\\ 0 \end{array} &X_{[q,q+1]}^{[n]} & X_{[p,p+1]}^{[1,n-p+1]} \\ 0 &0 & X_{[p+2,n]}^{[1,n-p+1]} \end{bmatrix}. \]
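The second equality in \eqref{newfv} can be sanity-checked numerically in the smallest nontrivial case. The following script is an illustrative addition, not part of the original argument; the parameters $n=3$, $p=2$, $q=1$ are an assumed sample choice, for which $f_{31}(X)=x_{31}$, $f_{21}(X)=\det X_{[2,3]}^{[1,2]}$, $f_{23}(X)=x_{23}$, $f_{32}(X)=x_{32}$, $f_{13}(X)=\det{\mathcal L}=x_{13}x_{31}-x_{21}x_{23}$ with the $2\times 2$ matrix ${\mathcal L}$ from \eqref{easyL}, and $f_{p+2,2}=f_{42}=1$ since $p=n-1$. Exact rational arithmetic is used throughout.

```python
# Spot-check of (newfv) for the assumed smallest case n=3, p=2, q=1.
# Initial cluster values (from the text): f_{31} = x_{31},
# f_{21} = det X_{[2,3]}^{[1,2]}, f_{23} = x_{23}, f_{32} = x_{32},
# f_{13} = det L = x_{13} x_{31} - x_{21} x_{23}, and the empty minor
# f_{p+2,2} = f_{42} is taken to be 1 since p = n-1.
from fractions import Fraction
import random

def det2(a, b, c, d):
    # determinant of [[a, b], [c, d]]
    return a * d - b * c

random.seed(1)
x = {(i, j): Fraction(random.randint(1, 9)) for i in range(1, 4) for j in range(1, 4)}

f31 = x[3, 1]
f21 = det2(x[2, 1], x[2, 2], x[3, 1], x[3, 2])
f23 = x[2, 3]
f32 = x[3, 2]
f13 = det2(x[1, 3], x[2, 1], x[2, 3], x[3, 1])  # det of the matrix L in (easyL)

# exchange relation at v = (p+1, 1) = (3, 1), with f_{42} = 1
lhs = (f21 * 1 * f23 + f32 * f13) / f31
# claimed 2x2 determinant on the right-hand side of (newfv)
rhs = det2(x[1, 3], x[2, 2], x[2, 3], x[3, 2])
assert lhs == rhs
```

Any integer matrix with $x_{31}\ne 0$ works here; the equality of the two sides is exactly the short Pl\"ucker relation invoked in the text.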
Observe that $\{f_{ij}(X) = f_{ij}\left(X_{[q+1,n]}^{[1,n]}\right): i\in [q+1,n], j\in [1,n]\}$ together with the restriction of $Q_{\varnothing,\varnothing}$ to its lower $n-q$ rows and freezing row $q+1$ form an initial cluster for the standard cluster structure ${\mathcal C}_q$ on $(n-q)\times n$ matrices. It follows immediately from \cite[Prop.~4.15]{GSVb} that every minor of $X$ with the row set in $[q+1,n]$ is a cluster variable in ${\mathcal C}_q$, and hence can be written as a Laurent polynomial in any cluster of ${\mathcal C}_q$. Note that for $p>q-2$ the variable $f_{p+1,1}(X)$ is frozen in ${\mathcal C}_q$, therefore, by~\cite[Prop.~3.20]{GSVb}, it does not enter the denominator of this Laurent polynomial; for $p\le q-2$ this variable does not exist in ${\mathcal C}_q$. Consequently, all such minors remain Laurent polynomials in the cluster adjacent to the initial one in ${\mathcal C}_{\bfG^{{\rm r}},\varnothing}$ after the mutation at $(p+1,1)$. In particular, for any $i\in [q+1,n]$, $j\in [1,n]$, $x_{ij}$ can be written as a Laurent polynomial in this cluster.
For $s\le q-1$, consider the sequence of consecutive mutations at $(s+1,n), \ldots, (s+1,s), (s+1,s+1), \ldots, (s+1,2)$ starting with the initial cluster in ${\mathcal C}_{\bfG^{{\rm r}},\varnothing}$ and denote the obtained cluster variables $ f'_{s+1,n-t+1}(X)$, $t\in [1,n-1]$. The same sequence of mutations in ${\mathcal C}_{\varnothing,\varnothing}$ produces cluster variables \begin{equation}\label{tfasminors} \begin{aligned} \tilde f'_{s+1,n-t+1}(Z)&=\det Z_{\{s\}\cup [s+2,s+t+1]}^{[n-t,n]}, \quad t\in [1,n-s -1],\\ \tilde f'_{s+1,n -t+1}(Z)&=\det Z_{\{s\}\cup [s+2,n]}^{[n-t,2n-t -s-1]}, \quad t\in [n-s,n -1]. \end{aligned} \end{equation} Indeed, every mutation in the sequence is applied to a four-valent vertex, and we obtain consecutively \[ \tilde f'_{s+1,n}(Z)= \frac{\tilde f_{s,n-1}(Z) \tilde f_{s+2,n}(Z) + \tilde f_{s+1,n-1}(Z) \tilde f_{sn}(Z)}{\tilde f_{s+1,n}(Z)} \] and \[ \tilde f'_{s+1,n-t}(Z)= \frac{\tilde f_{s,n-t-1}(Z) \tilde f_{s+2,n-t}(Z) + \tilde f_{s+1,n-t-1}(Z) \tilde f'_{s+1,n-t+1}(Z)}{\tilde f_{s+1,n-t}(Z)} \] for $t\in [1,n-2]$. Explicit formulas \eqref{tfasminors} now follow by applying an appropriate version of the short Pl\"ucker relation.
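As a small added illustration of the first exchange relation above (the parameters $n=4$, $s=1$, $t=1$ are an arbitrary sample choice, not taken from the paper), one can verify numerically that $\tilde f'_{2,4}=(\tilde f_{1,3}\tilde f_{3,4}+\tilde f_{2,3}\tilde f_{1,4})/\tilde f_{2,4}$ indeed produces the minor $\det Z_{\{1,3\}}^{[3,4]}$ predicted by \eqref{tfasminors}, with the initial cluster variables given by the standard minors described earlier:

```python
# Spot-check of the first exchange relation in (tfasminors) for the
# sample parameters n=4, s=1, t=1. The initial cluster variables of the
# standard structure are the minors described in the text: here
# f_{13} = det Z_{[1,2]}^{[3,4]}, f_{34} = z_{34},
# f_{23} = det Z_{[2,3]}^{[3,4]}, f_{14} = z_{14}, f_{24} = z_{24}.
from fractions import Fraction
import random

def det(M):
    # exact determinant via cofactor expansion along the first row
    if len(M) == 1:
        return M[0][0]
    total = Fraction(0)
    for j in range(len(M)):
        minor = [row[:j] + row[j + 1:] for row in M[1:]]
        total += (-1) ** j * M[0][j] * det(minor)
    return total

def sub(Z, rows, cols):
    # submatrix with 1-based row and column index lists
    return [[Z[i - 1][j - 1] for j in cols] for i in rows]

random.seed(2)
Z = [[Fraction(random.randint(1, 9)) for _ in range(4)] for _ in range(4)]

f13 = det(sub(Z, [1, 2], [3, 4]))
f34 = det(sub(Z, [3], [4]))
f23 = det(sub(Z, [2, 3], [3, 4]))
f14 = det(sub(Z, [1], [4]))
f24 = det(sub(Z, [2], [4]))

# exchange relation at the four-valent vertex (s+1, n) = (2, 4)
mutated = (f13 * f34 + f23 * f14) / f24
# claimed minor det Z_{{1,3}}^{[3,4]} from (tfasminors) with t = 1
claimed = det(sub(Z, [1, 3], [3, 4]))
assert mutated == claimed
```

The agreement is again an instance of the short Pl\"ucker relation, here applied to columns of a $3\times 4$ submatrix of $Z$.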
Recall that by Theorem \ref{matrixmap1}, $X$ and $Z$ differ only in the $q$-th row. Moreover, every minor of $X$ whose row set either does not contain $q$ or contains both $q$ and $q+1$ is equal to the corresponding minor of $Z$. Let $\tilde\psi(Z)$ be such a minor; invoking once again \cite[Prop.~4.15]{GSVb}, one can obtain it by a sequence of mutations in ${\mathcal C}_{\varnothing,\varnothing}$. Let $\psi(X)$ be the cluster variable obtained by applying the same sequence of mutations to the initial seed of ${\mathcal C}_{\bfG^{{\rm r}},\varnothing}$. By Proposition~\ref{phi-tilde-phi}, $\psi(X) = \tilde\psi(Z) \left (f_{p+1,1}(Z)\right )^\lambda=\tilde\psi(X) \left (f_{p+1,1}(X)\right )^\lambda$ for some integer $\lambda$. Clearly, minors in \eqref{tfasminors} satisfy the above condition unless $s+t+1=q$, and hence \[ f'_{s+1,n-t+1}(X) = \tilde f'_{s+1,n-t+1}(X) \left (f_{p+1,1}(X)\right )^{\lambda_{s+1,n-t+1}} \] for $t\ne q-s-1$. However, the exponents $\lambda_{s+1,n-t+1}$ are easily computed to be all zero. Thus, we conclude that \begin{equation}\label{firsttype} \det X_{\{s\}\cup [s+2,s+t+1]}^{[n-t,n]}= f'_{s+1,n-t+1}(X), \quad t\in [1,n-s -1]\setminus \{q-s-1\}, \end{equation} and \begin{equation}\label{sectype}
\det X_{\{s\}\cup [s+2,n]}^{[n-t,2n-t -s-1]}=f'_{s+1,n-t+1}(X), \quad t\in [n-s,n -1],
\end{equation}
are cluster variables in ${\mathcal C}_{\bfG^{{\rm r}},\varnothing}$.
Now we are ready to deal with the entries in the $q$-th row of $X$. First, expand $f'_{p+1,1}(X)$ in \eqref{newfv} by the first column as \[ f'_{p+1,1}(X)= x_{qn} f_{p+1,2}(X) + x_{q+1,n} \det X_{\{p\}\cup [p+2,n]}^{[2, n-p+1]}. \] For $p>q$, the row set of $\det X_{\{p\}\cup [p+2,n]}^{[2, n-p+1]}$ lies completely within the last $n-q$ rows of $X$, and hence, as explained above, it is a Laurent polynomial in the cluster we are interested in. For $p<q$, this determinant is a cluster variable in ${\mathcal C}_{\bfG^{{\rm r}},\varnothing}$ by \eqref{sectype} with $t=n-2$, and hence it is a Laurent polynomial in any cluster in ${\mathcal C}_{\bfG^{{\rm r}},\varnothing}$. Consequently, in both cases $x_{qn}$ is a Laurent polynomial in the cluster we are interested in. Further, this claim can be established inductively for $x_{q,n-1}, x_{q,n-2},\ldots, x_{q1}$ by expanding first the minors $f_{q,n-t}(X)=\det X_{[q,q+t]}^{[n-t,n]}$, $t\in [1,n-q]$, and then the minors $f_{q,n-t}(X)=\det X_{[q,n]}^{[n-t,2n-t-q]}$, $t\in [n-q+1, n-1]$, by the first row as $f_{q,n-t}(X) = x_{q,n-t} f_{q+1,n-t+1}(X) + P(x_{q, n-t+1},\ldots, x_{qn}, x_{ij}\ : i > q)$, where $P$ is a polynomial.
Finally, for $s < q$, $x_{sn}$ is a cluster variable in ${\mathcal C}_{\bfG^{{\rm r}},\varnothing}$, and hence is a Laurent polynomial in any cluster. For $t=1, \ldots, q-s-1$, Laurent polynomial expressions for $x_{s,n-t}$ can be obtained recursively using expansions of the cluster variable $f_{s,n-t}(X)=\det X_{[s,s+t]}^{[n-t,n]}$ by the first row exactly as above. For $t=q-s,\dots,n-s-1$, such expressions are obtained recursively by expanding the cluster variable $f'_{s+1,n-t+1}(X)$ given by \eqref{firsttype} by the first row as $f'_{s+1,n-t+1}(X) = x_{s,n-t} f_{s+2,n-t+1}(X) + P'(x_{s, n-t+1},\ldots, x_{sn}, x_{ij}\ : i > s)$, where $P'$ is a polynomial. For $t=n-s,\dots,n-1$ we use the same expansion for $f'_{s+1,n-t+1}(X)$ given by \eqref{sectype}. This completes the proof.
\begin{remark} In fact, one can show that every minor of $X$ whose row set either does not contain $q$ or contains both $q$ and $q+1$ is a cluster variable in ${\mathcal C}_{\bfG^{{\rm r}},\varnothing}$. \end{remark}
\subsection{Auxiliary statements} In this section we collect several technical statements that were used above.
\begin{lemma} \label{blockmatrix} Let $N=N_1 + N_2$, $k = k_1 + k_2$, and let ${\mathcal M}$, $\tilde {\mathcal M}$ be two $N\times N$ matrices \begin{equation}\label{twomatrices} {\mathcal M} = \left [ \begin{array}{@{}cc@{} } A_1 & 0\\ A_2 & B_1\\ 0 & B_2 \end{array} \right ],\qquad \tilde{\mathcal M} = \left [ \begin{array}{@{}cc@{} } \tilde A_1 & 0\\ \tilde A_2 & \tilde B_1\\ 0 & \tilde B_2 \end{array} \right ], \end{equation} with block rows of sizes $N_1-k_1$, $k$ and $N_2-k_2$ and block columns of sizes $N_1$ and $N_2$. Assume that
{\rm (i)} $A_1=\tilde A_1$;
{\rm (ii)} there exists $A'_2$ such that $A_2=\left (\mathbf 1_k + \sum_{i=1}^{k-1} \alpha_i e_{ik} \right )A'_2$ and $\tilde A_2$ is obtained from $A'_2$ by replacing the last row with zeros;
{\rm (iii)} every maximal minor of $B=\begin{bmatrix} B_1\\ B_2\end{bmatrix}$ that contains the last $N_2 - k_2$ rows coincides with the corresponding minor of $\tilde B=\begin{bmatrix} \tilde B_1\\ \tilde B_2\end{bmatrix}$.
Then conditions \begin{equation} \label{condition_lemma} \sum_{\varkappa \in S} (-1)^{\varepsilon_{\varkappa S}} \alpha_\varkappa \det B_{S\setminus\{\varkappa \}\cup [k+1, N_2+k_1]}=0 \end {equation}
for any $S\subset [1,k]$ such that $|S|=k_2+1$ and $k\in S$ guarantee that \begin{equation} \label{principal} \det {\mathcal M}_{[s,N]}^{[s,N]} = \det \tilde {\mathcal M}_{[s,N]}^{[s,N]} \end{equation} for all $s\in [1,N]$; here $\varepsilon_{\varkappa S} = \# \{ i\in S : i > \varkappa\}$ and $\alpha_k=1$. \end{lemma}
\begin{proof} Denote \[ \xi_s=\det {\mathcal M}_{[s, N]}^{[s, N]},\qquad \tilde\xi_s=\det \tilde{\mathcal M}_{[s, N]}^{[s, N]}. \] By condition (iii), we only need to consider $s \leq N_1$. First, fix $s \in [N_1 - k_1 +1, N_1]$, which means that ${\mathcal M}_{ss}$ is in the block $A_2$. We use the Laplace expansion of $\xi_s$ and $\tilde\xi_s$ with respect to the second block column. Define $t=s-N_1+k_1$, then \begin{equation} \label{laplace1} \begin{aligned} \xi_s &= \sum_{T} (-1)^{\varepsilon_T} \det( A_2)_{T}^{\Theta} \det B_{\bar T\cup[k+1, N_2+k_1]},\\ \tilde\xi_s &= \sum_{T} (-1)^{\varepsilon_T} \det( \tilde A_2)_{T}^{\Theta} \det \tilde B_{\bar T\cup [k+1, N_2+k_1]}, \end{aligned} \end{equation} where the sum is taken over all $(N_1 -s +1)$-element subsets $T$ in $[t, k]$, $\bar T=[t, k]\setminus T$, $\Theta=[s, N_1]$ and $\varepsilon_T=\sum_{i\in T}i+\varepsilon_s$ with $\varepsilon_s$ depending only on $s$.
By condition (ii), \begin{equation} \label{Ztheta1} \det (A_2)_{T}^{\Theta} = \begin{cases} \det ( A_2')_{T}^{\Theta} &\ \mbox{if}\ k \in T,\\ \det ( A_2')_{T}^{\Theta} + \sum\limits_{\varkappa \in T} (-1)^{\varepsilon_{\varkappa T}}\alpha_\varkappa\det (A_2')_{\left( T\setminus \{\varkappa\}\right ) \cup \{k\}}^{\Theta} &\ \mbox{if}\ k \notin T, \end{cases} \end{equation} and \begin{equation} \label{Ztheta2} \det (\tilde A_2)_{T}^{\Theta} =\begin{cases} 0 &\ \mbox{if}\ k \in T,\\ \det ( A_2')_{T}^{\Theta} &\ \mbox{if}\ k \notin T. \end{cases} \end{equation} Besides, $\det B_{\bar T\cup[k+1, N_2+k_1]}=\det \tilde B_{\bar T\cup [k+1, N_2+k_1]}$ by condition (iii). Therefore, the difference $\xi_s - \tilde\xi_s$ can be written as a linear combination of $\det(A_2')_{T}^{\Theta}$
such that $k \in T$. Let $T = T'\cup \{k\}$; define $S=\bar T'=\bar T\cup\{k\}$, then $|S|=k_2+1$ and $k\in S$. The coefficient at $\det(A_2')_{T}^{\Theta}$ equals, up to a sign, \begin{multline}
\sum_{\varkappa\in [t,k]\setminus T'} (-1)^{\varepsilon_{\varkappa, T'\cup\{k\}}+\varkappa} \alpha_\varkappa \det B_{(S\setminus\{\varkappa\})\cup [k+1, N_2+k_1]} \\ =(-1)^{k} \sum_{\varkappa \in S} (-1)^{\varepsilon_{\varkappa S}} \alpha_\varkappa \det B_{(S\setminus\{\varkappa \})\cup [k+1, N_2+k_1]}, \end {multline} since $\varepsilon_{\varkappa, T'\cup\{k\}}+\varepsilon_{\varkappa S}=k-\varkappa$. Thus for \eqref{principal} to be valid for $s \in [N_1 - k_1 +1, N_1]$ it is sufficient that \eqref{condition_lemma} be satisfied for any $S\subset [t,k]$,
$|S|=k_2+1$, $k\in S$. In fact, since \eqref{Ztheta1} and \eqref{Ztheta2} remain valid for any set $\Theta \subset [1,N_1]$ of size $|\Theta|=N_1-s+1$, similar considerations show that \eqref{condition_lemma} implies \begin{equation} \label{Thetas} \det {\mathcal M}_{[s,N]}^{\Theta\cup [N_1+1,N]} = \det \tilde {\mathcal M}_{[s,N]}^{\Theta\cup [N_1+1,N]} \end{equation} for any such $\Theta$ and $s \in [N_1 - k_1 +1, N_1]$. This, in turn, results in \eqref{principal} being valid for all $s\in [1, N_1 - k_1]$. To see this, one has to use the Laplace expansion of $\xi_s$ and $\tilde\xi_s$ with respect to the block row $[s,N_1-k_1]$: \begin{align*} \xi_s &= \sum_{\Theta} (-1)^{\varepsilon_{\bar\Theta}} \det( A_1)_{[s,N_1-k_1]}^{\bar\Theta} \det {\mathcal M}_{[N_1-k_1+1, N]}^{\Theta\cup [N_1+1,N]},\\ \tilde\xi_s &= \sum_{\Theta} (-1)^{\varepsilon_{\bar\Theta}} \det(\tilde A_1)_{[s,N_1-k_1]}^{\bar\Theta} \det \tilde{\mathcal M}_{[N_1-k_1+1, N]}^{\Theta\cup [N_1+1,N]}, \end{align*} where $\bar\Theta=[s,N_1]\setminus\Theta$, and the sums are taken over all subsets $\Theta$ in $[s,N_1]$ of size
$|\Theta|=k_1$. It remains to note that $\det( A_1)_{[s,N_1-k_1]}^{\bar\Theta} =\det(\tilde A_1)_{[s,N_1-k_1]}^{\bar\Theta}$ by condition (i), and $ \det {\mathcal M}_{[N_1-k_1+1, N]}^{\Theta\cup [N_1+1,N]}=\det \tilde{\mathcal M}_{[N_1-k_1+1, N]}^{\Theta\cup [N_1+1,N]}$ is a particular case of \eqref{Thetas} for $s=N_1-k_1 +1$. \end{proof}
\begin{lemma} \label{blockmatrix2} Let ${\mathcal M}$ and $\tilde{\mathcal M}$ be two $N\times N$ matrices given by \eqref{twomatrices} with the same sizes of block rows and block columns. Assume that
{\rm (i)} $A_1=\tilde A_1$;
{\rm (ii)} $A_2=\left (\mathbf 1_k + \sum_{i=2}^{k} \alpha_i e_{1i} \right )\tilde A_2$;
{\rm (iii)} $\tilde B_1$ is obtained from $B_1$ by replacing the first row with zeros;
{\rm (iv)} every maximal minor of $B=\begin{bmatrix} B_1\\ B_2\end{bmatrix}$ that contains the last $N_2 - k_2$ rows and does not contain the first row coincides with the corresponding minor of $\tilde B=\begin{bmatrix} \tilde B_1\\ \tilde B_2\end{bmatrix}$.
Then conditions \begin{equation} \label{condition_lemma2} \sum_{\varkappa \in [1,k]\setminus S} (-1)^{\varepsilon_{\varkappa S}} \alpha_\varkappa \det B_{S\cup\{\varkappa \}\cup [k+1, N_2+k_1]}=0 \end {equation}
for any $S\subset [2,k]$ such that $|S|=k_2-1$ guarantee that \begin{equation} \label{principal2} \det {\mathcal M}_{[s,N]}^{[s,N]} = \det \tilde {\mathcal M}_{[s,N]}^{[s,N]} \end{equation} for all $s\in [1,N]$; here $\alpha_1=1$. \end{lemma}
\begin{proof} The proof is a straightforward modification of the proof of Lemma \ref{blockmatrix}. For $s \in [N_1-k_1+2, N_1]$, Laplace expansions of $\xi_s$ and $\tilde\xi_s$ with respect to the second block column are given by \eqref{laplace1}. By condition (ii), $\det (A_2)_T^\Theta=\det(\tilde A_2)_T^\Theta$, while by condition (iv), $\det B_{\bar T\cup[k+1, N_2+k_1]}=\det \tilde B_{\bar T\cup [k+1, N_2+k_1]}$. Consequently, $\xi_s-\tilde\xi_s$ vanishes, and hence \eqref{principal2} holds true.
For $s\in [1,N_1-k_1+1]$, the corresponding Laplace expansions are given by \begin{equation*} \begin{aligned} \xi_s &= \sum_{T} (-1)^{\varepsilon_T} \det A_{[s,N_1-k_1]\cup T}^{[s,N_1]} \det B_{\overleftarrow{T}\cup[k+1, N_2+k_1]},\\ \tilde\xi_s &= \sum_{T} (-1)^{\varepsilon_T}\det \tilde A_{[s,N_1-k_1]\cup T}^{[s,N_1]} \det \tilde B_{\overleftarrow{T}\cup [k+1, N_2+k_1]}, \end{aligned} \end{equation*} where $T$ runs over all $k_1$-element subsets in $[N_1-k_1+1, N_1 + k_2]$ and $\overleftarrow{T}=\{ i-N_1+k_1 {:\ } i\in\bar T\}$ for $\bar T=[N_1-k_1+1, N_1 + k_2]\setminus T$.
Next, by conditions (i) and (ii), \begin{equation*} \det A_{\Xi\cup T}^{[s,N_1]} = \begin{cases} \det \tilde A_{\Xi\cup T}^{[s,N_1]} &\ \mbox{if}\ t \notin T, \\ \det \tilde A_{\Xi\cup T}^{[s,N_1]} + \sum\limits_{\chi \notin T} (-1)^{k_1-1-\varepsilon_{\chi T}}\alpha_\varkappa\det \tilde A_{\Xi\cup\left( T\setminus \{t\}\right ) \cup \{\chi\}}^{[s,N_1]} &\ \mbox{if}\ t \in T, \end{cases} \end{equation*} where $\Xi=[s,N_1-k_1]$, $t=N_1-k_1+1$ and $\varkappa=\chi-N_1+k_1\in[1, k]$. Further, by conditions (iii) and (iv), \begin{equation*} \det \tilde B_{\overleftarrow{T}\cup [k+1, N_2+k_1]}=\begin{cases} 0 &\ \mbox{if}\ t \notin T, \\ \det B_{\overleftarrow{T}\cup[k+1, N_2+k_1]} &\ \mbox{if}\ t \in T. \end{cases} \end{equation*} Therefore, the difference $\xi_s - \tilde\xi_s$ can be written as a linear combination of $\det \tilde A_{\Xi\cup T}^{[s,N_1]}$ such that $t \notin T$. Let $\bar T=\{t\}\cup\bar T'$; define $S=\overleftarrow{T}'=\overleftarrow{T}\setminus\{1\}$, then
$S\subset[2, k]$ and $|S|=k_2-1$. Consequently, the coefficient at $\det \tilde A_{\Xi\cup T}^{[s,N_1]}$ equals, up to a sign, \begin{equation*} \sum_{\varkappa \in [1,k]\setminus S} (-1)^{\varepsilon_{\varkappa S} } \alpha_\varkappa \det B_{S\cup\{\varkappa \}\cup [k+1, N_2+k_1]}, \end{equation*} and the claim follows. \end{proof}
\begin{lemma} Let $A$ be a rectangular matrix, $I=(i_1,\ldots, i_k)$ and $J$ be disjoint row sets, $L$ and $M$ be disjoint column sets,
and $|L|=|J| +1$, $|M|=|I|-2$. Then \begin{equation}\label{plu2}
\sum_{\lambda =1}^k (-1)^\lambda\det A_{\{i_\lambda\}\cup J}^L \det A_{(I\setminus \{i_\lambda\})\cup J}^{L\cup M}=0. \end{equation} \end{lemma}
\begin{proof} The formula can be obtained from standard Pl\"ucker relations via a natural interpretation of minors of $A$ as Pl\"ucker coordinates for $\left [ \mathbf 1 \ A\right]$. \end{proof}
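For readers who want to double-check \eqref{plu2} numerically, the following script is an added illustration; the particular sets $I=(1,2,3)$, $J=\{4\}$, $L=\{1,2\}$, $M=\{3\}$ are a sample choice satisfying $|L|=|J|+1$ and $|M|=|I|-2$, and submatrices are taken with row indices in increasing order.

```python
# Numerical spot-check of the Pluecker-type relation (plu2) with
# I=(1,2,3), J={4}, L={1,2}, M={3}; all indices are 1-based and
# submatrix rows/columns are taken in increasing order.
from fractions import Fraction
import random

def det(M):
    # exact determinant via cofactor expansion along the first row
    if len(M) == 1:
        return M[0][0]
    total = Fraction(0)
    for j in range(len(M)):
        minor = [row[:j] + row[j + 1:] for row in M[1:]]
        total += (-1) ** j * M[0][j] * det(minor)
    return total

def sub(A, rows, cols):
    # submatrix with 1-based row and column index lists
    return [[A[i - 1][j - 1] for j in cols] for i in rows]

random.seed(0)
A = [[Fraction(random.randint(-9, 9)) for _ in range(3)] for _ in range(4)]

I, J, L, M_cols = [1, 2, 3], [4], [1, 2], [3]
total = Fraction(0)
for lam, i_lam in enumerate(I, start=1):
    rows1 = sorted([i_lam] + J)
    rows2 = sorted([i for i in I if i != i_lam] + J)
    total += (-1) ** lam * det(sub(A, rows1, L)) * det(sub(A, rows2, L + M_cols))

# the alternating sum in (plu2) vanishes identically
assert total == 0
```

Since the sum vanishes identically as a polynomial in the entries of $A$, any random integer matrix gives the value $0$.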
\end{document}
July 2012, 32(7): 2453-2484. doi: 10.3934/dcds.2012.32.2453
The transfer operator for the Hecke triangle groups
Dieter Mayer 1, Tobias Mühlenbruch 2, and Fredrik Strömberg 3
Lower Saxony Professorship, Institute for Theoretical Physics, TU Clausthal, D-38678 Clausthal-Zellerfeld, Germany
Department of Mathematics and Computer Science, FernUniversität in Hagen, D-58084 Hagen, Germany
Department of Mathematics, TU Darmstadt, D-64289 Darmstadt, Germany
Received December 2009 Revised March 2010 Published March 2012
In this paper we extend the transfer operator approach to Selberg's zeta function for cofinite Fuchsian groups to the Hecke triangle groups $G_q,\, q=3,4,\ldots$, which are non-arithmetic for $q\not= 3,4,6$. For this we make use of a Poincar\'e map for the geodesic flow on the corresponding Hecke surfaces, which has been constructed in [13], and which is closely related to the natural extension of the generating map for the so-called Hurwitz-Nakada continued fractions. We also derive functional equations for the eigenfunctions of the transfer operator which for eigenvalues $\rho =1$ are expected to be closely related to the period functions of Lewis and Zagier for these Hecke triangle groups.
Keywords: $\lambda_q$-continued fractions, Ruelle and Selberg zeta function, Hecke triangle groups, transfer operator.
Mathematics Subject Classification: Primary: 11M36, 37C30; Secondary: 37B10, 37D35, 37D40, 37D2.
Citation: Dieter Mayer, Tobias Mühlenbruch, Fredrik Strömberg. The transfer operator for the Hecke triangle groups. Discrete & Continuous Dynamical Systems - A, 2012, 32 (7) : 2453-2484. doi: 10.3934/dcds.2012.32.2453
R. W. Bruggeman, J. Lewis and D. Zagier, Period functions for Maaß wave forms. II: Cohomology, preprint.
R. W. Bruggeman and T. Mühlenbruch, Eigenfunctions of transfer operators and cohomology, Journal of Number Theory, 129 (2009), 158. doi: 10.1016/j.jnt.2008.08.003.
C.-H. Chang and D. Mayer, Thermodynamic formalism and Selberg's zeta function for modular groups, Regul. Chaotic Dyn., 5 (2000), 281. doi: 10.1070/rd2000v005n03ABEH000150.
C.-H. Chang and D. Mayer, Eigenfunctions of the transfer operators and the period functions for modular groups, in, 290 (2001), 1.
M. Fraczek, D. Mayer and T. Mühlenbruch, A realization of the Hecke algebra on the space of period functions for $\Gamma_0(n)$, J. Reine Angew. Math., 603 (2007), 133. doi: 10.1515/CRELLE.2007.014.
D. Hejhal, "The Selberg Trace Formula for $\mathrm{PSL}(2,\mathbb{R})$," Vol. 2, Lecture Notes in Mathematics, 1001 (1983).
J. Hilgert, D. Mayer and H. Movasati, Transfer operators for $\Gamma_0(n)$ and the Hecke operators for the period functions of $\mathrm{PSL}(2,\mathbb{Z})$, Math. Proc. Camb. Phil. Soc., 139 (2005), 81. doi: 10.1017/S0305004105008480.
A. Hurwitz, Über eine besondere Art der Kettenbruch-Entwickelung reeller Grössen, Acta Math., 12 (1889), 367. doi: 10.1007/BF02391885.
J. Lewis and D. Zagier, Period functions for Maass wave forms. I, Ann. of Math., 153 (2001), 191. doi: 10.2307/2661374.
D. Mayer, On the thermodynamic formalism for the Gauss map, Comm. Math. Phys., 130 (1990), 311. doi: 10.1007/BF02473355.
D. Mayer, On composition operators on Banach spaces of holomorphic functions, Journal of Functional Analysis, 35 (1980), 191. doi: 10.1016/0022-1236(80)90004-X.
D. Mayer and T. Mühlenbruch, Nearest $\lambda_q$-multiple fractions, in, 52 (2010), 147.
D. Mayer and F. Strömberg, Symbolic dynamics for the geodesic flow on Hecke surfaces, Journal of Modern Dynamics, 2 (2008), 581. doi: 10.3934/jmd.2008.2.581.
H. Nakada, Continued fractions, geodesic flows and Ford circles, in, (1995), 179.
R. Phillips and P. Sarnak, On cusp forms for co-finite subgroups of $\mathrm{PSL}(2,\mathbb{R})$, Invent. Math., 80 (1985), 339. doi: 10.1007/BF01388610.
D. Rosen, A class of continued fractions associated with certain properly discontinuous groups, Duke Math. J., 21 (1954), 549. doi: 10.1215/S0012-7094-54-02154-7.
D. Rosen and T. A. Schmidt, Hecke groups and continued fractions, Bull. Austral. Math. Soc., 46 (1992), 459. doi: 10.1017/S0004972700012120.
D. Ruelle, "Dynamical Zeta Functions for Piecewise Monotone Maps of the Interval," CRM Monograph Series, 4 (1994).
T. A. Schmidt and M. Sheingorn, Length spectra of the Hecke triangle groups, Mathematische Zeitschrift, 220 (1995), 369. doi: 10.1007/BF02572621.
A. Selberg, Remarks on the distribution of poles of Eisenstein series, in, 3 (1990), 251.
F. Strömberg, Computation of Selberg's zeta functions on Hecke triangle groups, arXiv:0804.4837.
Antonín Václav Šourek
Antonín Václav Šourek (June 3, 1857, Písek – February 20, 1926, Sofia) was a Czech mathematician, noteworthy as one of the founders of modern mathematics in Bulgaria (which modernized after the Treaty of San Stefano).
Antonín Šourek graduated in 1876 from a Realschule in Písek. From 1876 to 1878 he studied at TU Wien, where he attended lectures on mathematics, physics, and descriptive geometry. He was a student of Emil Weyr. Šourek then went to Czech Technical University in Prague, where he furthered his knowledge of mathematics, physics, and descriptive geometry. In 1880 he passed the examination certifying teaching competence in mathematics and descriptive geometry and went to Bulgaria. There in September 1880 he became a mathematics teacher at the Realschule in Slivna. A year later he was transferred from Slivna to Plovdiv. From there, in 1890, he moved to the Realschule in Sofia, where he was almost simultaneously appointed professor extraordinarius at Sofia University (founded in October 1888). In 1893 he resigned from the Realschule and completely transferred to Sofia University, where he was appointed professor ordinarius in 1898. From 1893 to 1902, while continuing his professorial duties at Sofia University, he lectured on descriptive geometry at a school for teacher training.[1] He was an invited speaker at the International Congress of Mathematicians in 1904 at Heidelberg.[2] Beginning in 1893 he also became a professor of descriptive geometry at the Military Academy in Sofia, where he taught for nine years. From 1895 to 1912 he lectured on perspective at the Academy of Painting in Sofia. In 1914 his poor health compelled him to resign his professorship at Sofia University and to move to Rome, where he became an unsalaried secretary of the military attaché. At the beginning of 1916 Šourek went to Bern, where he helped care for Bulgarian war prisoners. At the request of university administrators, after the end of WWI he returned to Sofia University and taught there from 1921 until his death in 1926.[3]
Šourek wrote Bulgarian textbooks on plane trigonometry, solid geometry, analytic geometry, spherical trigonometry, and descriptive geometry. He published his Bulgarian mathematical lectures on projective geometry (1909), differential geometry (1911), analytical geometry (1912, 1914), and descriptive geometry (1914). Perhaps his two most important translations into Bulgarian are Alois Strnad's Geometrie pro vyšší třídy reálných gymnázií (Bulgarian title: Геометрия за висшите класове на реалните гимназии, Geometry for upper classes of state gymnasia) and Emanuel Taftl’s textbook Algebra pro vyšší třídy středních škol (Bulgarian title: Алгебра за горните класове на гимназиалните училища, Algebra for upper classes of secondary schools).[3]
A. V. Šourek is also considered to be the founder of the Bulgarian terminology in descriptive geometry. Thanks to his good knowledge of Bulgarian and other languages (Czech, German, French, Italian), his deep knowledge of syntax, close cooperation with philologists and above all to his perfect knowledge of descriptive geometry itself, he developed a very successful system of the essential terms with wide possibilities of a more detailed evolution. Thanks to his method and prestige among the members of the Bulgarian community, most of his terms are used without any change or at most with only small modifications.[3]
References
1. Folta, Jaroslava; Šišma, Pavel. "Antonín Václav Šourek". Department of Mathematics and Statistics, Masaryk University (web.math.muni.cz).
2. "Über den mathematischen Unterricht in Bulgarien von A. Šourek". Verhandlungen des dritten Mathematiker-Kongresses in Heidelberg vom 8. bis 13. August 1904 (PDF). Leipzig: B. G. Teubner. 1905. pp. 651–666.
3. Bečvářová, Martina (2014). "The role of Czech mathematicians in the Balkans (1850‒1900)" (PDF). Czasopismo Techniczne. (See pp. 22–24.)
ISSN 1088-9485(online) ISSN 0273-0979(print)
A survey of integral representation theory
Author: Irving Reiner
Journal: Bull. Amer. Math. Soc. 76 (1970), 159-227
MSC (1970): Primary 1075, 1548, 2080, 1640; Secondary 1069, 1620
DOI: https://doi.org/10.1090/S0002-9904-1970-12441-7
MathSciNet review: 0254092
Dock Sang Rim, Modules over finite groups, Ann. of Math. (2) 69 (1959), 700–712. MR 104721, DOI https://doi.org/10.2307/1970033
Dock Sang Rim, On projective class groups, Trans. Amer. Math. Soc. 98 (1961), 459–467. MR 124378, DOI https://doi.org/10.1090/S0002-9947-1961-0124378-1
Klaus W. Roggenkamp, Gruppenringe von unendlichem Darstellungstyp, Math. Z. 96 (1967), 393–398 (German). MR 206123, DOI https://doi.org/10.1007/BF01117098
Klaus W. Roggenkamp, Darstellungen endlicher Gruppen in Polynomringen, Math. Z. 96 (1967), 399–407 (German). MR 206124, DOI https://doi.org/10.1007/BF01117099
Klaus W. Roggenkamp, Grothendieck groups of hereditary orders, J. Reine Angew. Math. 235 (1969), 29–40. MR 254101, DOI https://doi.org/10.1515/crll.1969.235.29
Klaus W. Roggenkamp, On the irreducible lattices of orders, Canadian J. Math. 21 (1969), 970–976. MR 248247, DOI https://doi.org/10.4153/CJM-1969-106-x
206. K. W. Roggenkamp, Das Krull-Schmidt Theorem für projektive Gitter in Ordnungen über lokalen Ringen, Math. Seminar (Giessen, 1969).
K. W. Roggenkamp, Projective modules over clean orders, Compositio Math. 21 (1969), 185–194. MR 248170
K. W. Roggenkamp, A necessary and sufficient condition for orders in direct sums of complete skewfields to have only finitely many nonisomorphic indecomposable integral representations, Bull. Amer. Math. Soc. 76 (1970), 130–134. MR 284466, DOI https://doi.org/10.1090/S0002-9904-1970-12398-9
Klaus W. Roggenkamp, Projective homomorphisms and extensions of lattices, J. Reine Angew. Math. 246 (1971), 41–45. MR 274485, DOI https://doi.org/10.1515/crll.1971.246.41
A. V. Roĭter, On the representations of the cyclic group of fourth order by integral matrices, Vestnik Leningrad. Univ. 15 (1960), no. 19, 65–74 (Russian, with English summary). MR 0124418
A. V. Roĭter, Categories with division and integral representations, Soviet Math. Dokl. 4 (1963), 1621–1623. MR 0194494
A. V. Roĭter, On a category of representations, Ukrain. Mat. Ž. 15 (1963), 448–452 (Russian). MR 0159856
A. V. Roĭter, Integer-valued representations belonging to one genus, Izv. Akad. Nauk SSSR Ser. Mat. 30 (1966), 1315–1324 (Russian). MR 0213391
A. V. Roĭter, Divisibility in the category of representations over a complete local Dedekind ring, Ukrain. Mat. Ž. 17 (1965), no. 4, 124–129 (Russian). MR 0197534
A. V. Roĭter, $E$-systems of representations, Ukrain. Mat. Ž. 17 (1965), no. 2, 88–96 (Russian). MR 0190206
A. V. Roĭter, An analog of the theorem of Bass for modules of representations of noncommutative orders, Dokl. Akad. Nauk SSSR 168 (1966), 1261–1264 (Russian). MR 0202772
A. V. Roĭter, Unboundedness of the dimensions of the indecomposable representations of an algebra which has infinitely many indecomposable representations, Izv. Akad. Nauk SSSR Ser. Mat. 32 (1968), 1275–1282 (Russian). MR 0238893
A. V. Roĭter, On the theory of integral representations of rings, Mat. Zametki 3 (1968), 361–366 (Russian). MR 231859
Joseph J. Rotman, Notes on homological algebras, Van Nostrand Reinhold Co., New York-Toronto, Ont.-London, 1970. Van Nostrand Reinhold Mathematical Studies, No. 26. MR 0409590
V. P. Rud′ko, Tensor algebra of integral representations of a cyclic group of order $p^{2}$, Dopovīdī Akad. Nauk Ukraïn. RSR Ser. A 1967 (1967), 35–39 (Ukrainian, with Russian and English summaries). MR 0209370
A. I. Saksonov, On group rings of finite $p$-groups over certain integral domains, Dokl. Akad. Nauk BSSR 11 (1967), 204–207 (Russian). MR 0209372
A. I. Saksonov, Group-algebras of finite groups over a number field, Dokl. Akad. Nauk BSSR 11 (1967), 302–305 (Russian). MR 0210795
O. F. G. Schilling, The Theory of Valuations, Mathematical Surveys, No. 4, American Mathematical Society, New York, N. Y., 1950. MR 0043776
Hans Schneider and Julian Weissglass, Group rings, semigroup rings and their radicals, J. Algebra 5 (1967), 1–15. MR 213453, DOI https://doi.org/10.1016/0021-8693%2867%2990021-X
Sudarshan K. Sehgal, On the isomorphism of integral group rings. I, Canadian J. Math. 21 (1969), 410–413. MR 255706, DOI https://doi.org/10.4153/CJM-1969-044-9
C. S. Seshadri, Triviality of vector bundles over the affine space $K^{2}$, Proc. Nat. Acad. Sci. U.S.A. 44 (1958), 456–458. MR 102527, DOI https://doi.org/10.1073/pnas.44.5.456
C. S. Seshadri, Algebraic vector bundles over the product of an affine curve and the affine line, Proc. Amer. Math. Soc. 10 (1959), 670–673. MR 164972, DOI https://doi.org/10.1090/S0002-9939-1959-0164972-1
Michael Singer, Invertible powers of ideals over orders in commutative separable algebras, Proc. Cambridge Philos. Soc. 67 (1970), 237–242. MR 252378, DOI https://doi.org/10.1017/s0305004100045503
D. L. Stancl, Multiplication in Grothendieck rings of integral group rings, J. Algebra 7 (1967), 77–90. MR 223428, DOI https://doi.org/10.1016/0021-8693%2867%2990068-3
228. E. Steinitz, Rechteckige Systeme und Moduln in algebraischen Zahlenkörpern. I, II, Math. Ann. 71 (1911), 328-354; 72 (1912), 297-345.
Jan Rustom Strooker, Faithfully projective modules and clean algebras, J. J. Groen & Zoon, N.V., Leiden, 1965. Dissertation, University of Utrecht, Utrecht, 1965. MR 0217115
Richard G. Swan, Projective modules over finite groups, Bull. Amer. Math. Soc. 65 (1959), 365–367. MR 114842, DOI https://doi.org/10.1090/S0002-9904-1959-10376-1
Richard G. Swan, The $p$-period of a finite group, Illinois J. Math. 4 (1960), 341–346. MR 122856
Richard G. Swan, Induced representations and projective modules, Ann. of Math. (2) 71 (1960), 552–578. MR 138688, DOI https://doi.org/10.2307/1969944
Richard G. Swan, Projective modules over group rings and maximal orders, Ann. of Math. (2) 76 (1962), 55–61. MR 139635, DOI https://doi.org/10.2307/1970264
Richard G. Swan, The Grothendieck ring of a finite group, Topology 2 (1963), 85–110. MR 153722, DOI https://doi.org/10.1016/0040-9383%2863%2990025-9
R. G. Swan, Algebraic $K$-theory, Lecture Notes in Mathematics, No. 76, Springer-Verlag, Berlin-New York, 1968. MR 0245634
Richard G. Swan, Invariant rational functions and a problem of Steenrod, Invent. Math. 7 (1969), 148–158. MR 244215, DOI https://doi.org/10.1007/BF01389798
Richard G. Swan, The number of generators of a module, Math. Z. 102 (1967), 318–322. MR 218347, DOI https://doi.org/10.1007/BF01110912
Shuichi Takahashi, Arithmetic of group representations, Tohoku Math. J. (2) 11 (1959), 216–246. MR 109848, DOI https://doi.org/10.2748/tmj/1178244583
Shuichi Takahashi, A characterization of group rings as a special class of Hopf algebras, Canad. Math. Bull. 8 (1965), 465–475. MR 184988, DOI https://doi.org/10.4153/CMB-1965-033-5
Olga Taussky, On a theorem of Latimer and MacDuffee, Canad. J. Math. 1 (1949), 300–302. MR 30491, DOI https://doi.org/10.4153/cjm-1949-026-1
Olga Taussky, Classes of matrices and quadratic fields, Pacific J. Math. 1 (1951), 127–132. MR 43064
Olga Taussky, Classes of matrices and quadratic fields. II, J. London Math. Soc. 27 (1952), 237–239. MR 46335, DOI https://doi.org/10.1112/jlms/s1-27.2.237
Olga Taussky, Unimodular integral circulants, Math. Z. 63 (1955), 286–289. MR 72890, DOI https://doi.org/10.1007/BF01187938
Olga Taussky, On matrix classes corresponding to an ideal and its inverse, Illinois J. Math. 1 (1957), 108–113. MR 94326
Olga Taussky, Matrices of rational integers, Bull. Amer. Math. Soc. 66 (1960), 327–345. MR 120237, DOI https://doi.org/10.1090/S0002-9904-1960-10439-9
Olga Taussky, Ideal matrices. I, Arch. Math. 13 (1962), 275–282. MR 150165, DOI https://doi.org/10.1007/BF01650074
Olga Taussky, Ideal matrices. II, Math. Ann. 150 (1963), 218–225. MR 156862, DOI https://doi.org/10.1007/BF01396991
Olga Taussky, On the similarity transformation between an integral matrix with irreducible characteristic polynomial and its transpose, Math. Ann. 166 (1966), 60–63. MR 199206, DOI https://doi.org/10.1007/BF01361438
Olga Taussky, The discriminant matrices of an algebraic number field, J. London Math. Soc. 43 (1968), 152–154. MR 228473, DOI https://doi.org/10.1112/jlms/s1-43.1.152
Olga Taussky and John Todd, Matrices with finite period, Proc. Edinburgh Math. Soc. (2) 6 (1940), 128–134. MR 2829, DOI https://doi.org/10.1017/s0013091500024627
Olga Taussky and John Todd, Matrices of finite period, Proc. Roy. Irish Acad. Sect. A 46 (1941), 113–121. MR 0003607
Olga Taussky and Hans Zassenhaus, On the similarity transformation between a matrix and its transpose, Pacific J. Math. 9 (1959), 893–896. MR 108500
John G. Thompson, Vertices and sources, J. Algebra 6 (1967), 1–6. MR 207863, DOI https://doi.org/10.1016/0021-8693%2867%2990009-9
254. A. Troy, Integral representations of cyclic groups of order p, Ph.D. Thesis, University of Illinois, Urbana, Ill., 1961.
Kôji Uchida, Remarks on Grothendieck rings, Tohoku Math. J. (2) 19 (1967), 341–348. MR 227253, DOI https://doi.org/10.2748/tmj/1178243284
S. Ullom, Normal bases in Galois extensions of number fields, Nagoya Math. J. 34 (1969), 153–167. MR 240082
S. Ullom, Galois cohomology of ambiguous ideals, J. Number Theory 1 (1969), 11–15. MR 237473, DOI https://doi.org/10.1016/0022-314X%2869%2990022-5
Yutaka Watanabe, The Dedekind different and the homological different, Osaka Math. J. 4 (1967), 227–231. MR 227210
André Weil, Basic number theory, Die Grundlehren der mathematischen Wissenschaften, Band 144, Springer-Verlag New York, Inc., New York, 1967. MR 0234930
261. A. R. Whitcomb, The group ring problem, Ph.D. thesis, University of Chicago, Chicago, Ill., 1968.
Oscar Zariski and Pierre Samuel, Commutative algebra, Volume I, The University Series in Higher Mathematics, D. Van Nostrand Company, Inc., Princeton, New Jersey, 1958. With the cooperation of I. S. Cohen. MR 0090581
263. H. Zassenhaus, Neuer Beweis der Endlichkeit der Klassenzahl bei unimodularer Aquivalenz endlicher ganzzahliger Substitutionsgruppen, Abh. Math. Sem. Univ. Hamburg 12 (1938), 276-288.
Hans Zassenhaus, Über die Äquivalenz ganzzahliger Darstellungen, Nachr. Akad. Wiss. Göttingen Math.-Phys. Kl. II 1967 (1967), 167–193 (German). MR 230759
Janice Zemanek, On the semisimplicity of integral representation rings, Bull. Amer. Math. Soc. 76 (1970), 778–779. MR 269757, DOI https://doi.org/10.1090/S0002-9904-1970-12547-2
1. M. Auslander and O. Goldman, Maximal orders, Trans. Amer. Math. Soc. 97 (1960), 1-24. MR 22 #8034. 2. M. Auslander and O. Goldman, The Brauer group of a commutative ring, Trans. Amer. Math. Soc. 97 (1960), 367-409. MR 22 #12130. 3. G. Azumaya, Corrections and supplementaries to my paper concerning Krull-Remak-Schmidt's theorem, Nagoya Math. J. 1 (1950), 117-124. MR 12, 314. 4. D. Ballew, The module index, projective modules and invertible ideals, Ph.D. Thesis, University of Illinois, Urbana, Ill., 1969. 4a. D. Ballew, The module index and invertible ideals, Trans. Amer. Math. Soc. 148 (1970), (to appear). 5. B. Banaschewski, Integral group rings of finite groups, Canad. Math. Bull. 10 (1967), 635-642. MR 38 #1187. 6. L. F. Barannik and P. M. Gudivok, Projective representations of finite groups over rings, Dopovīdī Akad. Nauk Ukraïn. RSR Ser. A 1968, 294-297. (Ukrainian) MR 37 #4177. 7. L. F. Barannik and P. M. Gudivok, On indecomposable projective representations of finite groups, Dopovīdī Akad. Nauk Ukraïn. RSR Ser. A 1969, 391-393. (Ukrainian) 8. H. Bass, Finitistic dimension and a homological generalization of semi-primary rings, Trans. Amer. Math. Soc. 95 (1960), 466-488. MR 28 #1212, 9. H. Bass, Projective modules over algebras, Ann. of Math. (2) 73 (1961), 532-542. MR 31 #1278. 10. H. Bass, Torsion free and projective modules, Trans. Amer. Math. Soc. 102 (1962), 319-327. MR 25 #3960. 11. H. Bass, On the ubiquity of Gorenstein rings, Math. Z. 82 (1963), 8-28. MR 27 #3669. 12. H. Bass, K-theory and stable algebra, Inst. Hautes Études Sci. Publ. Math. No. 22 (1964), 5-60. MR 30 #4805. 13. H. Bass, The Dirichlet unit theorem, induced characters, and Whitehead groups of finite groups, Topology 4 (1966), 391-410. MR 33 #1341. 14. H. Bass, Algebraic K-theory, Math. Lecture Note Series, Benjamin, New York, 1968. 15. E. A. Bender, Classes of matrices and quadratic fields, Linear Algebra and Appl. 1 (1968), 195-201. MR 37 #6301. 16. S. D. 
Berman, On certain properties of integral group rings, Dokl. Akad. Nauk SSSR 91 (1953), 7-9. (Russian) MR 15, 99. 17. S. D. Berman, On isomorphism of the centers of group rings of p-groups, Dokl. Akad. Nauk SSSR 91 (1953), 185-187. (Russian) MR 15, 99. 18. S. D. Berman, On a necessary condition for isomorphism of integral group rings, Dopovīdī Akad. Nauk Ukraïn. RSR 1953, 313-316. (Ukrainian) MR 15, 599. 19. S. D. Berman, On the equation x, Ukrain. Mat. Ž. 7 (1955), 253-261. (Russian) MR 17, 1048. 20. S. D. Berman, On certain properties of group rings over the field of rational numbers, Užgorod. Gos. Univ. Naučn. Zap. Him. Fiz. Mat. 12 (1955), 88-110. (Russian) MR 20 #3920. 21. S. D. Berman, On automorphisms of the center of an integral group ring, Užgorod. Gos. Univ. Naučn. Zap. Him. Fiz. Mat. (1960), no. 3, 55. (Russian) 22. S. D. Berman, Integral representations of finite groups, Dokl. Akad. Nauk SSSR 152 (1963), 1286-1287 = Soviet Math. Dokl. 4 (1963), 1533-1535. MR 27 #4854. 23. S. D. Berman, On the theory of integral representations of finite groups, Dokl. Akad. Nauk SSSR 157 (1964), 506-508 = Soviet Math. Dokl. 5 (1964), 954-956. MR 29 #2308. 24. S. D. Berman, Integral representations of a cyclic group containing two irreducible rational components, In Memoriam: N. G. čebotarev, Izdat. Kazan Univ., Kazan, 1964, pp. 18-29. (Russian) MR 33 #4154. 25. S. D. Berman, On integral monomial representations of finite groups, Uspehi Mat. Nauk 20 (1965), no. 4 (124), 133-134. (Russian) MR 33 #4155. 26. S. D. Berman, Representations of finite groups over an arbitrary field and over rings of integers, Izv. Akad. Nauk SSSR Ser. Mat. 30 (1966), 69-132; English transl., Amer. Math. Soc. Transl. (2) 64 (1967), 147-215. MR 33 #5747. 27. S. D. Berman and P. M. Gudivok, Integral representations of finite groups, Dokl. Akad. Nauk SSSR 145 (1962), 1199-1201 =Soviet Math. Dokl. 3 (1962), 1172-1174. MR 25 #3095. 28. S. D. Berman and P. M. 
Gudivok, Integral representations of finite groups, Užgorod. Gos. Univ. Naučn. Zap. Him. Fiz. Mat. (1962), no. 5, 74-76. (Russian) 29. S. D. Berman and P. M. Gudivok, Indecomposable representations of finite groups over the ring of p-adic integers, Izv. Akad. Nauk SSSR Ser. Mat. 28 (1964), 875-910; English transl., Amer. Math. Soc. Transl. (2) 50 (1966), 77-113. MR 29 #3550. 30. S. D. Berman and A. I. Lihtman, On integral representations of finite nilpotent groups, Uspehi Mat. Nauk 20 (1965), no. 5 (125), 186-188. (Russian) MR 34 #7673. 31. S. D. Berman and A. R. Rossa, On integral group-rings of finite and periodic groups, Algebra and Math. Logic: Studies in Algebra, Izdat. Kiev Univ., Kiev, 1966, pp. 44-53. (Russian) MR 35 #265. 31a. A. Bialnicki-Birula, On the equivalence of integral representations of finite groups, Proc. Amer. Math. Soc. (to appear). 32. Z. I. Borevič and D. K. Faddeev, Theory of homology in groups. I, II, Vestnik Leningrad. Univ. 11 (1956), no. 7, 3-39; 14 (1959), no. 7, 72-87. (Russian) MR 18, 188; MR 21 #4968. 33. Z. I. Borevič and D. K. Faddeev, Integral representations of quadratic rings, Vestnik Leningrad. Univ. 15 (1960), no. 19, 52-64. (Russian) MR 27 #3668. 34. Z. I. Borevič and D. K. Faddeev, Representations of orders with cyclic index, Trudy Mat. Inst. Steklov. 80 (1965), 51-65. Proc. Steklov Inst. Math. 80 (1965), 56-72. MR 34 #5805. 35. Z. I. Borevič and D. K. Faddeev, Remarks on orders with a cyclic index, Dokl. Akad. Nauk SSSR 164 (1965), 727-728 = Soviet Math. Dokl. 6 (1965), 1273-1274. MR 32 #7601. 36. N. Bourbaki, Algèbre commutative, Actualités Sci. Indust., no. 1293, Hermann, Paris, 1961. MR 30 #2027. 37. A. A. Bovdi, Periodic normal divisors of the multiplicative group of a group ring, Sibirsk Mat. Ž. 9 (1968), 495-498 = Siberian Math. J. 9 (1968), 374-376. MR 37 #2853. 38. J. O. Brooks, Classification of representation modules over quadratic orders, Ph.D. Thesis, University of Michigan, Ann Arbor, Mich., 1964. 39. A. 
Brumer, Structure of hereditary orders, Bull. Amer. Math. Soc. 69 (1963), 721-724; Addendum, ibid. 70 (1964), 185. MR 27 #2543. 40. H. Cartan and S. Eilenberg, Homological algebra, Princeton Univ. Press, Princeton, N. J., 1956. MR 17, 1040. 41. C. Chevalley, L'arithmétique dans les algèbres de matrices, Actualités Sci. Indust., no. 323, Hermann, Paris, 1936. 42. J. A. Cohn and D. Livingstone, On groups of order p, Canad. J. Math. 15 (1963), 622-624. MR 27 #3700. 43. J. A. Cohn and D. Livingstone, On the structure of group algebras, Canad. J. Math. 17 (1965), 583-593. MR 31 #3514. 44. D. B. Coleman, Idempotents in group rings, Proc. Amer. Math. Soc. 17 (1966), 962. MR 33 #1379. 44a. S. B. Conlon, Structure in representation algebras, J. Algebra 5 (1967), 274-279. MR 34 #2719. 44b. S. B. Conlon, Relative components of representations, J. Algebra 8 (1968), 478-501. 44c. S. B. Conlon, Decompositions induced from the Burnside algebra, J. Algebra 10 (1968), 102-122. MR 38 #5945. 44d. S. B. Conlon, Monomial representations under integral similarity, J. Algebra 13 (1969), 496-508. 45. I. G. Connell, On the group ring, Canad. J. Math. 15 (1963), 650-685. MR 27 #3666. 46. C. W. Curtis and I. Reiner, Representation theory of finite groups and associative algebras, Pure and Appl. Math., vol. XI, Interscience, New York, 1962; 2nd ed., 1966. MR 26 #2519. 47. E. C. Dade, Rings in which no fixed power of ideal classes becomes invertible, Math. Ann. 148 (1962), 65-66. MR 25 #3963. 48. E. C. Dade, Some indecomposable group representations, Ann. of Math. (2) 77 (1963), 406-412. MR 26 #2521. 49. E. C. Dade, The maximal finite groups of 4X4 integral matrices, Illinois J. Math. 9 (1965), 99-122. MR 30 #1192. 50. E. C. Dade, D. W. Robinson, O. Taussky, and M. Ward, Divisors of recurrent sequences, J. Reine Angew. Math. 214/215 (1964), 180-183. MR 28 #5079. 51. E. C. Dade and O. Taussky, Some new results connected with matrices of rational integers, Proc. Sympos. Pure Math., vol. 8, Amer. 
Math. Soc. Providence, R. I., 1965, pp. 78-88. MR 32 #2395. 51a. E. C. Dade and O. Taussky, On the different in orders in an algebraic number field and special units connected with it, Acta Arith. 9 (1964), 47-51. 52. E. C. Dade, O. Taussky and H. Zassenhaus, On the semigroup of ideal classes in an order of an algebraic number field, Bull. Amer. Math. Soc. 67 (1961), 305-308. MR 25 #65. 53. E. C. Dade, O. Taussky and H. Zassenhaus, On the theory of orders, in particular on the semigroup of ideal classes and genera of an order in an algebraic number field, Math. Ann. 148 (1962), 31-64. MR 25 #3962. 54. K. deLeeuw, Some applications of cohomology to algebraic number theory and group representations (unpublished). 55. F. R. DeMeyer, The trace map and separable algebras, Osaka J. Math. 3 (1966), 7-11. MR 37 #4122. 56. M. Deuring, Algebren, Springer-Verlag, Berlin, 1935; rev. ed., Ergebnisse der Mathematik und ihrer Grenzgebiete, Band 41, 1968. MR 37 #4106. 57. F. E. Diederichsen, Über die Ausreduktion ganzzahliger Gruppendarstellungen bei arithmetischer Aquivalenz, Abh. Math. Sem. Univ. Hamburg 14 (1938), 357-412. 58. A. Dress, A remark on the Krull-Schmidt theorem for integral group representations of rank 1 (to appear). 59. A. Dress, An intertwining number theorem for integral representations and applications (to appear). 60. A. Dress, On the decomposition of modules, Bull. Amer. Math. Soc. 75 (1969), 984-986. 61. A. Dress, On integral representations, Bull. Amer. Math. Soc. 75 (1969), 1031-1034. 62. A. Dress, The ring of monomial representations, I: Structure Theory (to appear). 63. A. Dress, Vertices of integral representations, Math. Z. (to appear). 64. A. Dress, On relative Grothendieck rings, Bull. Amer. Math. Soc. 75 (1969), 955-958. 65. V. S. Drobotenko, Integral representations of primary abelian groups, Algebra and Math. Logic: Studies in Algebra, Izdat. Kiev. Univ., Kiev, 1966, pp. 111-121. (Russian) MR 34 #4375. 66. V. S. Drobotenko, E. S. Drobotenko, Z. 
P. Žilinskaja and E. Y. Pogoriljak, Representations of the cyclic group of prime order p over residue classes mod p, Ukrain. Mat. Ž. 17 (1965), no. 5, 28-42; English transl., Amer. Math. Soc. Transl. (2) 69 (1968), 241-256. MR 32 #5743. 67. V. S. Drobotenko and A. I. Lihtman, Representations of finite groups over the ring of residue classes mod p, Dokl. Užgorod Univ. 3 (1960), 63. (Russian) 68. V. S. Drobotenko, P. M. Gudivok and A. I. Lihtman, On representations of finite groups over the ring of residue classes mod m, Ukrain. Mat. Ž. 16 (1964), 82-89. (Russian) MR 29 #4810. 69. V. S. Drobotenko and V. P. Rud'ko, Representations of a cyclic group by groups of automorphisms of a certain class of modules, Dopovīdī Akad. Nauk Ukraïn. RSR Ser. A. 1968, 302-304. (Ukrainian) MR 37 #2873. 70. Ju. A. Drozd, Representations of cubic Z-rings, Dokl. Akad. Nauk SSSR 174 (1967), 16-18 = Soviet Math. Dokl. 8 (1967), 572-574. MR 35 #6659. 70a. Ju. A. Drozd, On the distribution of maximal sublattices, Mat. Zametki 6 (1969), 19-24. 70b. Ju. A. Drozd, Adèles and integral representations, Izv. Akad. Nauk SSSR Ser. Mat. 33 (1969), 1080-1088. 71. Ju. A. Drozd, and V. V. Kiričenko, Representation of rings in a second order matrix algebra, Ukrain. Mat. Ž. 19 (1967), no. 3, 107-112. (Russian) MR 35 #1632. 72. Ju. A. Drozd, and V. V. Kiričenko, Hereditary orders, Ukrain. Mat. Ž. 20 (1967), 246-248. (Russian). 73. Ju. A. Drozd, V. V. Kiričenko and A. V. Roĭter, On hereditary and Bass orders, Izv. Akad. Nauk SSSR Ser. Mat. 31 (1967), 1415-1436 = Math. USSR Izv. 1 (1967), 1357-1376. MR 36 #2608. 74. Ju. A. Drozd and A. V. Roĭter, Commutative rings with a finite number of indecomposable integral representations, Izv. Akad. Nauk SSSR Ser. Mat. 31 (1967), 783-798 = Math. USSR Izv. 1 (1967), 757-772. MR 36 #3768. 75. Ju. A. Drozd and V. M. Turčin, Number of representation modules in a genus for integral second order matrix rings, Mat. Zametki 2 (1967), 133-138 = Math. Notes 2 (1967), 564-566. 
MR 37 #5253. 76. V. H. Dyson and K. W. Roggenkamp, Modules over orders, Springer Lecture Notes (to appear). 77. M. Eichler, Über die Idealklassenzahl total definiter Quaternionenalgebren, Math. Z. 43 (1938), 102-109. 78. M. Eichler, Über die Idealklassenzahl hyperkomplexer Zahlen, Math. Z. 43 (1938), 481-494. 79. D. K. Faddeev, On the semigroup of genera in the theory of integer representations, Izv. Akad. Nauk SSSR Ser. Mat. 28 (1964), 475-478; English transl., Amer. Math. Soc. Transl. (2) 64 (1967), 97-101. MR 28 #5089. 80. D. K. Faddeev, An introduction to multiplicative theory of modules of integral representations, Trudy Mat. Inst. Steklov. 80 (1965), 145-182 = Proc. Steklov Inst. Math. 80 (1965), 164-210. MR 34 #5873. 81. D. K. Faddeev, On the theory of cubic Z-rings, Trudy Mat. Inst. Steklov. 80 (1965), 183-187 = Proc. Steklov Inst. Math. 80 (1965), 211-215. MR 33 #4083. 82. D. K. Faddeevv, Equivalence of systems of integer matrices, Izv. Akad. Nauk SSSR Ser. Mat. 30 (1966), 449-454; English transl., Amer. Math. Soc. Transl. (2) 71 (1968), 43-48. MR 33 #2642. 83. D. K. Faddeev, The number of classes of exact ideals for Z-rings, Mat. Zametki 1 (1967), 625-632 = Math. Notes 1 (1967), 415-419. MR 35 #5466. 84. R. Fossum, The Noetherian different of projective orders, J. Reine Angew. Math. 224 (1966), 207-218. MR 36 #5119. 84a. R. Fossum, Maximal orders over Krull domains, J. Algebra 10 (1968), 321-332. MR 38 #2130. 85. A. Fröhlich, Ideals in an extension field as modules over the algebraic integers in a finite number field, Math. Z. 74 (1960), 29-38. MR 22 #4708. 86. A. Fröhlich, The module structure of Kummer extensions over Dedekind domains, J. Reine Angew. Math. 209 (1962), 39-53. MR 28 #3988; p. 1247. 87. A. Fröhlich, Invariants for modules over commutative separable orders, Quart. J. Math. Oxford Ser. (2) 16 (1965), 193-232. MR 35 #1583. 88. A. Fröhlich, Resolvents, discriminants, and trace invariants, J. Algebra 4 (1966), 173-198. MR 34 #7499. 89. I. 
Giorgiutti, Modules projectifs sur les algèbres de groupes finis, C.R. Acad. Sci. Paris 250 (1960), 1419-1420. MR 23 #A1691. 90. J. A. Green, On the indecomposable representations of a finite group, Math. Z. 70 (1958/59), 430-445. MR 24 #A1304. 91. J. A. Green, Blocks of modular representations, Math. Z. 79 (1962), 100-115. MR 25 #5114. 92. J. A. Green, The modular representation algebra of a finite group, Illinois J. Math. 6 (1962), 607-619. MR 25 #5106. 93. J. A. Green, A transfer theorem for modular representations, J. Algebra 1 (1964), 73-84. MR 29 #147. 94. K. Gruenberg, Residual properties of infinite soluble groups, Proc. London Math. Soc. (3) 7 (1957), 29-62. MR 19, 386. 95. P. M. Gudivok, Integral representations of a finite group with a noncyclic Sylow p-subgroup, Uspehi Mat. Nauk 16 (1961), 229-230. 96. P. M. Gudivok, Integral representations of groups of type (p, p), Dokl. Užgorod Univ. Ser. Phys.-Mat. Nauk (1962), no. 5, 73. 97. P. M. Gudivok, On p-adic integral representations of finite groups, Dokl. Užgorod Univ. Ser. Phys.-Mat. Nauk (1962), no. 5, 81-82. 98. P. M. Gudivok, Representations of finite groups over certain local rings, Dopovīdī Akad. Nauk Ukraïn. RSR 1964, 173-176. (Ukrainian) MR 29 #3551. 99. P. M. Gudivok, Representations of finite groups over quadratic rings, Dokl. Akad. Nauk SSSR 159 (1964), 1210-1213 =Soviet Math. Dokl. 5 (1964), 1669-1672. MR 30 #174. 100. P. M. Gudivok, Representations of finite groups over local number rings, Dopovīdī Akad. Nauk Ukraïn. RSR 1966, 979-981. (Ukrainian) MR 34 #1407. 101. P. M. Gudivok, Representations of finite groups over number rings, Izv. Akad. Nauk SSSR Ser. Mat. 31 (1967), 799-834 = Mat. USSR Izv. 1 (1967), 773-805. MR 36 #1554. 102. P. M. Gudivok and V. P. Rud'ko, On p-adic integer-valued representations of a cyclic p-group, Dopovīdī Akad. Nauk Ukraïn. RSR 1966, 1111-1113. (Ukrainian) MR 34 #1409. 103. T. Hannula, The integral representation ring a(R, Trans. Amer. Math. Soc. 
BMC Medical Research Methodology
Bayesian alternatives to null hypothesis significance testing in biomedical research: a non-technical introduction to Bayesian inference with JASP
Riko Kelter
BMC Medical Research Methodology volume 20, Article number: 142 (2020)
Although null hypothesis significance testing (NHST) is the agreed gold standard in medical decision making and the most widespread inferential framework used in medical research, it has several drawbacks. Bayesian methods can complement or even replace frequentist NHST, but these methods have been underutilised, mainly due to a lack of easy-to-use software. JASP is open-source software for common operating systems which has recently been developed to make Bayesian inference more accessible to researchers; it includes the most common tests, an intuitive graphical user interface and publication-ready output plots. This article provides a non-technical introduction to Bayesian hypothesis testing in JASP by comparing traditional tests and statistical methods with their Bayesian counterparts.
The comparison shows the strengths and limitations of JASP for frequentist NHST and Bayesian inference. Specifically, Bayesian hypothesis testing via Bayes factors can complement and even replace NHST in most situations in JASP. While p-values can only reject the null hypothesis, the Bayes factor can state evidence for both the null and the alternative hypothesis, making confirmation of hypotheses possible. Also, effect sizes can be precisely estimated in the Bayesian paradigm via JASP.
Bayesian inference has so far not been widely used, mainly due to the dearth of accessible software. Medical decision making can be complemented by Bayesian hypothesis testing in JASP, providing richer information than single p-values and thus strengthening the credibility of an analysis. Through an easy point-and-click interface, researchers used to other graphical statistical packages like SPSS can seamlessly transition to JASP and benefit from the listed advantages with only a few limitations.
Null hypothesis significance testing (NHST) remains the dominant inferential approach in medical research [1–4]. The results of medical research therefore stand on the shoulders of the frequentist statistical philosophy, which dates back to the early days of Fisher [5] and Neyman-Pearson [6]. The centerpiece of frequentist inference is a test statistic T, which can be computed from the raw data, and which is known to have a specific distribution F under the null hypothesis H0. If the observed value of the test statistic passes a given threshold, which is located in the tails of F, then the null hypothesis H0 is rejected, because observing such a value would be quite implausible if H0 were true. The well-known p-value states exactly the probability of observing a result as extreme as the one observed, or even more extreme, if the null hypothesis H0 were true.
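As a minimal sketch of this procedure (the data and the 0.05 significance threshold below are assumptions for illustration, not taken from the article), a one-sample t-test computes the statistic T and its p-value under H0:

```python
# Hedged sketch of NHST: a one-sample t-test on made-up data.
# Under H0 (mean = 0) the statistic T follows a t-distribution with
# n - 1 degrees of freedom; the p-value is the probability of a value
# at least as extreme as the one observed.
from scipy import stats

x = [2.1, 3.4, 1.8, 2.9, 3.1, 2.5, 1.9, 3.0]  # hypothetical measurements

t_stat, p_value = stats.ttest_1samp(x, popmean=0.0)
reject_h0 = p_value < 0.05  # conventional threshold; here H0 is rejected
```

Note that a small p-value only licenses rejecting H0; it cannot state evidence in favour of H0, which is one motivation for the Bayesian alternatives discussed in this article.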
While NHST is the agreed standard in medical decision making, in the last few years more and more problems inherent in it have been revealed [7–9]. The misuse and abuse of p-values, in particular in medical research, have been criticised in countless venues, and the official American Statistical Association (ASA) statements of 2016 and 2019 by Wasserstein and Lazar [10] and Wasserstein et al. [11] show that most problems of NHST have not been solved to date. The ongoing use of NHST also indicates that the p-value as a measure of significance is still widely accepted despite its drawbacks and remains resilient to the repeated criticism [12]. As the limitations of p-values have been discussed widely, only three important problems are listed here, which are especially harmful in medical decision making and research: (1) p-values are prone to overestimating effects [13]; (2) they inevitably state effects, with a fixed probability, even if none exist [14]; (3) they are prone to false interpretation by researchers [15]. The latter is particularly problematic in clinical decision making, with possibly devastating consequences for patients and the progress of medical science, see Ioannidis [9, 16]. Point (2) is especially crucial, as not only for medical science but in much more generality, McElreath and Smaldino [17] stress that "the most important factors in improving the reliability of research are the rate of false positives".
To solve the above problems inherent to NHST, researchers from the University of Amsterdam have developed the open-source statistical software JASP [18], an acronym for Jeffreys Awesome Statistics Package, referring to the pioneer of Bayesian statistics who invented the Bayes factor, Sir Harold Jeffreys [19]. JASP is available for all common operating systems and provides both frequentist NHST and Bayesian tests and methods. Installation is straightforward and there is rich documentation in the form of tutorials and videos on the project website. A strength of JASP is its spreadsheet design similar to SPSS, making it possible to conduct state-of-the-art analyses with a single click instead of programming complicated routines in statistical programming languages like R [20]. Also, to foster reproducible medical research, JASP offers seamless integration with the Open Science Framework [21] as well as shareable JASP files which include all data and analyses, to promote collaboration and transparency. In addition, JASP benefits from rich annotations and information to enhance understanding of the applied procedures. To understand how JASP tackles the problems of NHST, it is important to understand how the proposed Bayesian methods differ, which are therefore briefly reviewed in the following.
NHST with its p-values is located in the frequentist school of statistics and was created to control the type I error rate in the long run, that is to limit the number of false positives in a large succession of repeatable experiments or studies. The Bayesian school of thought was not designed with type I error control in mind and proceeds via allocating relative evidence to a hypothesis H given the data x [22]. In the Bayesian paradigm, available prior information is combined with the model likelihood to obtain the posterior distribution of the parameters of interest [23]. Bayesian hypothesis testing is then often done via the Bayes factor BF10, the predictive updating factor which measures the change in relative beliefs about hypothesis H1 relative to hypothesis H0 given the data x:
$$ \begin{aligned} \underbrace{\frac{p\left(x|H_{1}\right)}{p\left(x|H_{0}\right)}}_{BF_{10}(x)} =\underbrace{\frac{\mathbb{P}\left(H_{1}|x\right)}{\mathbb{P}\left(H_{0}|x\right)}}_{\text{Posterior odds}} \Bigg/ \underbrace{\frac{\mathbb{P}\left(H_{1}\right)}{\mathbb{P}\left(H_{0}\right)}}_{\text{Prior odds}} \end{aligned} $$
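As a toy illustration of this ratio (the binomial data and the uniform prior below are assumptions made here, not part of the article's examples), the Bayes factor can be computed directly as a ratio of marginal likelihoods:

```python
# Toy Bayes factor: H0: theta = 0.5 versus H1: theta ~ Uniform(0, 1)
# for assumed binomial data. BF10 = p(x|H1) / p(x|H0), where each term
# is the (marginal) likelihood of the data under the respective model.
from scipy import stats, integrate

k, n = 8, 10  # hypothetical data: 8 successes in 10 trials

m0 = stats.binom.pmf(k, n, 0.5)  # p(x|H0), a point hypothesis

# p(x|H1): average the likelihood over the uniform prior on theta
m1, _ = integrate.quad(lambda t: stats.binom.pmf(k, n, t), 0.0, 1.0)

bf10 = m1 / m0  # here roughly 2, i.e. only weak evidence for H1 over H0
```

This makes explicit that the Bayes factor compares how well each model predicted the data, averaged over each model's prior.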
The Bayes factor BF10 therefore quantifies the evidence by indicating how much more likely the observed data are under the rival models. Note that the Bayes factor critically depends on the prior distributions assigned to the parameters in each of the models, as the parameter values determine the models' predictions. It can also be rewritten as the ratio of posterior and prior odds. Bayesian parameter estimation for an unknown parameter θ in general is achieved by considering the posterior distribution p(θ|x) of the parameter after observing the data x:
$$ \begin{aligned} p\left(\theta|x\right)=\frac{p\left(x|\theta\right)\cdot p\left(\theta\right)}{p(x)} \end{aligned} $$
where p(x|θ) is the likelihood function and p(θ) the prior; in most realistic settings, the marginal likelihood p(x) in the denominator cannot be calculated in closed form or is prohibitively costly to compute. Therefore, Markov chain Monte Carlo (MCMC) algorithms have been developed over the last decades, relieving practitioners of the need to compute p(x), because most MCMC algorithms only need a function proportional to the posterior to work, so that
$$ \begin{aligned} {p(\theta|x)\propto p(x|\theta)\cdot p(\theta)} \end{aligned} $$
suffices. Equation (3) also implies that specifying the prior p(θ) and the likelihood p(x|θ) allows researchers to numerically obtain the posterior via MCMC.
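The following sketch (model and data are assumed purely for illustration) shows the core of such an MCMC scheme, a random-walk Metropolis sampler, which indeed only ever evaluates the unnormalised posterior p(x|θ)·p(θ):

```python
# Random-walk Metropolis sampler for a normal-mean model with a normal
# prior; only the log of the UNNORMALISED posterior is needed.
# Assumed model: x_i ~ N(theta, 1), prior theta ~ N(0, 10).
import math, random

data = [1.2, 0.8, 1.5, 0.9, 1.1]  # made-up observations

def log_unnorm_post(theta):
    log_lik = sum(-0.5 * (xi - theta) ** 2 for xi in data)
    log_prior = -0.5 * (theta / 10.0) ** 2
    return log_lik + log_prior  # log p(x|theta) + log p(theta), up to constants

random.seed(1)
theta, samples = 0.0, []
for _ in range(20000):
    proposal = theta + random.gauss(0.0, 0.5)  # symmetric proposal
    log_alpha = log_unnorm_post(proposal) - log_unnorm_post(theta)
    if random.random() < math.exp(min(0.0, log_alpha)):  # accept/reject
        theta = proposal
    samples.append(theta)

burned = samples[5000:]  # discard burn-in
posterior_mean = sum(burned) / len(burned)  # close to the analytic value 1.10
```

For this conjugate toy model the posterior is available in closed form, which makes it easy to check that the sampler recovers the correct posterior mean without ever computing p(x).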
In both the hypothesis testing and the parameter estimation perspective of Bayesian inference, the role of the prior is crucial. The prior distribution quantifies the prior information about any parameters in the model before the data x are actually observed. In contrast, the classical frequentist philosophy proceeds without any prior information, obtaining the same results no matter whether there is much evidence in the form of a large number of previous studies which all yielded identical results, or no evidence due to no available prior studies at all. While this may bring a subjective flavour with it, selecting an appropriate prior is a topic of huge relevance in the Bayesian literature, as extreme priors can pull the posterior estimates of a parameter, or the obtained Bayes factor, in a desired direction specified by the prior's shape. Luckily, there is an unspoken agreement to use uninformative priors in most cases [22, 24], especially when no prior information is available (for example in the form of results of pilot studies). This makes it easy to use a suitable prior in most standard tests and methods. For example, in medical research the effect size d of Cohen [25] is most often important. The effect size is used to quantify the effect of a treatment, or the difference between a treatment and a control group, and a priori it is reasonable to assume that very large effects |d|>1 are less probable than small effects |d|≤1, as small to medium effect sizes (0.2≤|d|<0.5) are often observed in biomedical research. Common choices of prior distributions for the effect size are the normal distribution [26], the t-distribution and the Cauchy distribution [27]. A common approach is also to use uniform priors or priors with extremely large scale parameters like \(\mathcal {N}(0,500)\) if no information is available for the parameter of interest [24].
It should be noted that this approach is problematic and should be avoided, as it can be shown that the a priori assumption then often degenerates to statements which place much more probability mass in the tails than in the center of the distribution, essentially making the prior distributional assumption questionable. For example, a \(\mathcal {N}(0,500)\) prior will tend to put much more probability mass on unreasonable parameter values than on reasonable ones. To be more specific, this prior implies that one believes a priori that \(\mathbb {P}(|\theta |<250)<\mathbb {P}(|\theta |>250)\), which is easily shown by calculating \(\mathbb {P}(-250<\theta <250)\approx 0.38\). Even worse, pioneers of Bayesian inference like Jeffreys [27] already noticed that such unrealistically overdispersed priors can lead to situations in which the Bayes factor always signals evidence for the null hypothesis H0, even if the data x are indeed generated by the alternative H1. To prevent such problems, slightly or weakly informative priors are often used, which span a realistic range of values of the parameter a priori but are not completely flat [28].
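The numbers in this argument are easy to verify; the snippet below checks the claim for a normal prior with standard deviation 500 (interpreting the second parameter of \(\mathcal {N}(0,500)\) as the scale, as the text does):

```python
# Verifying that a N(0, 500) prior (scale 500) puts more mass on
# |theta| > 250 than on |theta| < 250.
from scipy import stats

prior = stats.norm(loc=0.0, scale=500.0)
p_center = prior.cdf(250.0) - prior.cdf(-250.0)  # P(-250 < theta < 250) ≈ 0.38
p_tails = 1.0 - p_center                         # P(|theta| > 250) ≈ 0.62
```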
If a reasonable weakly informative prior is selected, Bayes factors between 1/100 and 100 are typically observed in medical research, and the reporting guidelines for JASP are therefore built on this scale [29]. While there are multiple scales for translating a Bayes factor into a qualitative statement about the evidence it represents [27, 29, 30], these proposals do not differ drastically. One benefit is that by reporting the actual Bayes factor instead of "moderate evidence" or "strong evidence", researchers can quantify the evidence based on the reported Bayes factor themselves if desired. The oldest classification or labeling scheme goes back to Jeffreys [27], and the reporting guidelines of JASP are an adaptation of the original Jeffreys scale. The JASP guidelines separate between "anecdotal", "moderate", "strong", "very strong" and "extreme" relative evidence for a hypothesis based on the size of the Bayes factor obtained.
Figure 1 shows the classification scheme proposed for reporting results obtained in JASP. While the scale chosen is arbitrary, the scheme offers a good starting point for judging the relative evidence for the alternative hypothesis compared to the null hypothesis in light of the observed data x. Note that not all circumstances and research contexts require the same scaling: the obtained Bayes factor depends on the prior selected, so that heavily unrealistic hypotheses should require much larger Bayes factors to confirm the a priori improbable statement, in contrast to highly likely hypotheses which have already been confirmed in multiple previous studies. A research hypothesis with low prior probability will therefore require a convincing Bayes factor such that the evidence overcomes the initial skepticism and the model attains considerable posterior credibility. Therefore, it is important to consider the prior odds carefully when performing such analyses instead of using isolated Bayes factors only. Nevertheless, the scheme provides a consensus which researchers can use for orientation when reporting results. In particular, it is a good starting point when a weakly informative prior is used. Such priors are prebuilt into JASP and can be selected there.
JASP classification scheme for the Bayes factor BF10
A more severe problem with Bayes factors than their dependence on the prior is that, no matter what scale is used, they only state relative evidence instead of absolute evidence. This means that even a BF10=100, which states extreme evidence for the alternative over the null hypothesis, only indicates that a strong change in beliefs about the two hypotheses under consideration is necessitated. Even then, both hypotheses can be bad descriptions of the real underlying situation. Therefore, it is recommended to always report the labels with the prefix "relative": in the above case one can state extreme evidence for H1 relative to H0, but not for H1 relative to any other hypothesis, let alone the set of all other hypotheses.
When the prior modelling is considered and it is kept in mind that the BF only states relative evidence for a hypothesis, the BF can safely be used to gauge the relative evidence for a hypothesis.
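The relationship used throughout this section, namely that the posterior odds are the product of the Bayes factor and the prior odds, can be sketched in a few lines. The function below is an illustration of this identity, not part of JASP; the assumption that H0 and H1 exhaust the possibilities is ours.

```python
# Turn a reported BF10 into a posterior probability for H1, assuming
# that H0 and H1 are the only hypotheses under consideration.
def posterior_prob_h1(bf10, prior_odds=1.0):
    post_odds = bf10 * prior_odds          # posterior odds = BF10 * prior odds
    return post_odds / (1.0 + post_odds)   # convert odds to a probability

# With equal prior odds, BF10 = 100 ("extreme" relative evidence)
# corresponds to a posterior probability of roughly 0.99 for H1.
print(round(posterior_prob_h1(100), 3))

# A skeptical prior (prior odds 1/10) tempers the same Bayes factor.
print(round(posterior_prob_h1(100, prior_odds=0.1), 3))
```

This also illustrates the point made above: an isolated Bayes factor is not a posterior probability, because the prior odds enter the calculation as well.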
JASP is written in C++, using the Qt toolkit [18]. The analyses themselves are written in either R or C++ to improve the speed of especially simulation-based methods. The display layer, in which the data are rendered in the form of tables, is written in JavaScript and is built on top of jQuery UI and WebKit.
Regarding the future, JASP is currently supported by several long-term grants that fund the JASP team of software developers, academics and students. The team includes four main software developers as well as several core members who hold tenured positions. Of particular importance is the psychological methods group at the University of Amsterdam, which is dedicated to long-term support of JASP [31].
Documentation and manual
Documentation of the implemented methods can be found at the official JASP site. There are both written and video tutorials which show how to conduct a given method. Also, the JASP reporting guidelines [29] offer an overview of some of the most important tests and methods available and of how to report the results of an analysis. The official JASP site offers a textbook for students [32], and additionally there is a textbook for learning statistics with JASP [33]; both of these are free. Also, there are additional teaching materials and a user forum to support exchange and the development of new features.
A particularly nice feature of JASP is its included data library, consisting of over 50 data sets that illustrate a variety of analyses.
In summary, documentation is rich and provides easy access and a flat learning curve.
Flexibility and ease-of-use
JASP includes both frequentist and Bayesian methods, and this is a particular strength, as few competitors include that broad a palette of Bayesian methods. Next to this flexibility, ease-of-use is supported through an interactive live view where analyses are done in real time and added to the results page. The interface of JASP is intuitive and consists of a data page displaying the loaded data set, an analysis page, displaying the analyses which are carried out on this data set, and a results page which includes all results and plots of conducted analyses. In summary therefore, JASP can be judged as flexible and easy to use.
To study the behaviour of Bayesian methods in JASP, three typical questions arising in medical research are used as a scaffold: (1) Do multiple groups (treatment one, treatment two, control) differ on an observed metric variable, and if so, how large is the effect size? (2) Do two groups (treatment, control) differ on an observed metric variable, and if so, how large is the effect size between both groups? (3) How strong is the relationship between two observed variables? Usually, NHST in form of (1) an analysis of variance (ANOVA), (2) a two-sample t-test and (3) linear regression is used to reject a null hypothesis via the use of p-values. In the following, it will be shown that Bayesian versions of these statistical procedures can complement NHST and provide even richer information for medical research. A compelling feature here is that both traditional and Bayesian methods can be run in JASP seamlessly [31, 34], so that methodological flexibility is guaranteed.
The aim of this paper therefore is to demonstrate JASP's ability to conduct Bayesian hypothesis testing and parameter estimation as well as NHST via p-values. It is argued that richer information is provided when shifting to the Bayesian paradigm, allowing for better medical decision making than the currently widespread practice of frequentist rejection of null hypotheses. Also, the results show that the transition can be achieved almost effortlessly, as JASP offers an intuitive graphical interface and covers a wide range of Bayesian counterparts for commonly used tests in medical research, with rich annotations for correct interpretation and reporting.
Three datasets from medical research were used to compare NHST and Bayesian tests in JASP. The first dataset is from Moore and colleagues [35] and consists of 800 patients who had to exercise for six minutes. After the six minutes, heart rates of male and female patients were recorded. All patients were additionally classified as runners or as sedentary, depending on whether they averaged more than 15 miles per week, so that in total two treatment and two control groups of 200 participants each make up the 800 participants.
Question (1) – analysis of variance (ANOVA)
A typical question in medical research would be to find out whether there are differences between genders as well as between both groups, leading to the setting of a 2×2 between-subjects ANOVA for the variables group and gender. More specifically, a test of the hypothesis of differing average heart rates between genders and between control and treatment groups is desired. The results of the frequentist ANOVA conducted in JASP are shown in Table 1. The output shows that both gender and group are significant variables, as is the interaction term for gender and group. All quantities of the ANOVA calculations (sum of squares, degrees of freedom, mean square, F-statistic, η2 and the p-value) are given. Also, the Vovk-Sellke Maximum Ratio (VS-MPR*) is given based on the p-value, which is the maximum possible odds in favor of H1 over H0.
Table 1 ANOVA - Heart Rate
One nice feature of JASP is that it offers the option to include assumption checks for the tests conducted: for the ANOVA, homogeneity of variance is required, and the included assumption check in form of Levene's test is given in Table 2, showing that the assumption is violated. Still, investigating the provided Q-Q-plot in JASP (see Fig. 2a) shows that due to the balanced design of 200 participants in each sample and the high power resulting from 800 participants in total, the ANOVA will be relatively robust to the violation. Conducting a Bayesian ANOVA on the same data in JASP yields the results given in Table 3. There are five distinct models, for each of which the prior probability P(M), the posterior probability P(M|data), the change from prior odds to posterior odds BFM, and the Bayes factor BF10 for the relative evidence of the alternative hypothesis H1 compared to the null hypothesis H0 as well as the error in percent are given. The error is reported because for some analyses the results are based on numerical algorithms such as Markov chain Monte Carlo (MCMC), which yield an error percentage (for more details on the computation see [29]). The error percentage thus is an estimate of the numerical error in the computation of the Bayes factor via Gaussian quadrature in the BayesFactor R package [36], which JASP uses internally; values below 20% are deemed acceptable [37]. If the error percentage is deemed too high, the number of samples can be increased to reduce it at the cost of longer computation time. Also, the BFM column shows the change from prior odds to posterior odds for each model. For example, for the full model including both main effects as well as their interaction effect, the prior odds are 0.2/(1−0.2)=0.25, while the posterior odds are 0.790/(1−0.790)=3.761905, leading to a ratio of 3.761905/0.25=15.04762, as shown in the BFM column.
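The BFM arithmetic just quoted for the full model can be checked directly. The numbers are taken from Table 3; the snippet is an illustration of the definition, not JASP's internal code.

```python
# Reproduce the BFM entry for the full model in Table 3:
# BFM = (posterior odds of the model) / (prior odds of the model).
p_prior = 0.2        # P(M): five candidate models with equal prior probability
p_post = 0.790       # P(M|data) for the full model

prior_odds = p_prior / (1 - p_prior)   # 0.2 / 0.8 = 0.25
post_odds = p_post / (1 - p_post)      # 0.790 / 0.210 ~= 3.762
bf_m = post_odds / prior_odds
print(round(bf_m, 3))  # -> 15.048, matching the BFM column
```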
All models are compared to the null model here, where the null model includes no predictor variables at all and the full model includes both variables gender and group as well as their interaction term. It is clear that the BF10 of 3.463e+125 is largest for this most complex model, indicating extreme evidence for it according to Fig. 1 and the reporting guidelines for JASP [29]. The BF10 column contains the Bayes factor that quantifies evidence for each model relative to the null model with no variables included; it is therefore 1 in the null model row. While the BFM column thus states that the most complex model is the most probable a posteriori (because the prior odds were identical for all models, so that BFM is largest iff P(M|data) is largest), the BF10 column also shows that the most complex model predicts the data best. Therefore, the Bayes factor indicates extreme evidence for the full model. It may be of interest to obtain a Bayes factor \(BF_{10}(\mathcal {M}_{\text {main effects vs. full}})\) for comparing the full model, which includes the interaction effect, with the model containing both main effects. This is straightforward, as due to the transitivity of the Bayes factor it is clear that
$$\begin{array}{*{20}l} &\frac{BF_{10}\left(\mathcal{M}_{\text{main effects}}\right)}{BF_{10}\left(\mathcal{M}_{\text{full}}\right)}=\frac{\frac{p\left(x|H_{1}^{\mathcal{M}_{\text{main effects}}}\right)}{p\left(x|H_{0}^{\mathcal{M}_{\text{null}}}\right)}}{\frac{p\left(x|H_{1}^{\mathcal{M}_{\text{full}}}\right)}{p\left(x|H_{0}^{\mathcal{M}_{\text{null}}}\right)}}\\ &=\frac{p\left(x|H_{1}^{\mathcal{M}_{\text{main effects}}}\right)}{p\left(x|H_{1}^{\mathcal{M}_{\text{full}}}\right)}=BF_{10}\left(\mathcal{M}_{\text{main effects vs. full}}\right) \end{array} $$
Q-Q-plots for the traditional and Bayesian ANOVA for the heart rate dataset of Moore and colleagues produced by JASP
Table 2 Test for Equality of Variances (Levene's)
Table 3 Model comparison
because the denominators \(p\left (x|H_{0}^{\mathcal {M}_{\text {null}}}\right)\) cancel each other out, so that dividing the main-effects model's Bayes factor \(BF_{10}\left (\mathcal {M}_{\text {main effects}}\right)=9.207e+124\) by the full model's Bayes factor \(BF_{10}\left (\mathcal {M}_{\text {full}}\right)=3.463e+125\) yields a Bayes factor \(BF_{10}\left (\mathcal {M}_{\text {main effects vs. full}}\right)\approx 0.2658677\) for comparing the main-effects model to the full model, which also indicates that the full model is to be preferred. This Bayes factor can also be calculated in JASP by selecting "compare to best model" instead of "compare to null model" in the user interface. Figure 2b shows a Q-Q-plot for the residuals of the Bayesian ANOVA, showing that it is quite robust to the deviations from normality.
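The transitivity argument can be verified numerically with the two null-referenced Bayes factors reported in Table 3 (an illustrative check of the cancellation, using the values as printed):

```python
# Bayes factor for the main-effects model vs. the full model, obtained by
# dividing the two Bayes factors that share the null model as reference.
bf_main = 9.207e124   # BF10(main-effects model vs. null model)
bf_full = 3.463e125   # BF10(full model vs. null model)

bf_main_vs_full = bf_main / bf_full
print(bf_main_vs_full)  # ~= 0.2658677, as reported in the text
```

A value below 1 means the data favor the full model over the main-effects model, consistent with the conclusion above.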
A compelling feature of the Bayesian statistical philosophy now is that posterior credible intervals on all variables of interest are easily obtained. While frequentist confidence intervals are often (incorrectly) interpreted as containing the true parameter θ with 95% probability, this is actually the correct interpretation of a Bayesian credible interval after observing the data x. Table 4 shows the model-averaged posterior summaries of the full model for both variables and the interaction term.
Table 4 Model averaged posterior summary
From the table, one can easily see that females have a posterior mean of 7.448, that is, an increased heart rate of 7.448 beats per minute, while males have a posterior mean of −7.448, indicating a decreased heart rate of the same magnitude compared to the global mean. Thus, the heart rate seems to differ between males and females. Specifically, for females, with 95% probability after observing the data x the average heart rate lies in the range [6.339,8.553], so we can be 95% sure that females have an increased heart rate of at least 6.339≈6 beats per minute after exercising 6 minutes compared to the global mean. The 95% credible intervals of males and females do not overlap, so we can be quite confident that there is a true difference.
Other inferences are obtained in identical manner from Table 4. Note that the frequentist MLE estimates and confidence intervals cannot offer this flexibility. The values in Table 4 can also be obtained as plots in JASP, showing the posterior densities, see Fig. 3a-c.
Posterior plots for all variables and interaction terms for the heart rate data of Moore and colleagues produced by JASP
Question (2) – paired samples t-test
Another common situation in medical research is the paired samples t-test, which compares the means μ1 and μ2 of the same population at two different timepoints (pre-treatment vs. post-treatment). The dataset used is again from Moore and colleagues [35] and provides the number of disruptive behaviours by dementia patients during two different phases of the lunar cycle. The hypothesis tested is H0: "The average number of disruptive behaviours in patients with dementia does not differ between full moon days and other days" against the alternative H1 of a differing average number of disruptive behaviours. Table 5 shows the results of the frequentist paired-samples t-test, indicating with p<.001 that H0 can be rejected. The paired samples t-test therefore suggests that the data (or more extreme data) are unlikely to be observed if the average number of disruptive behaviours were identical during full moon days and other days in patients with dementia. Note that this is not what researchers actually want to know: the desired answer is which hypothesis is more probable after observing the data, which is exactly quantified by the posterior odds \(\mathbb {P}(H_{1}|x)/\mathbb {P}(H_{0}|x)\), of which the BF10 is a key ingredient (remember that the posterior odds are the product of the Bayes factor and the prior odds). A large BF10 therefore necessitates a change in beliefs towards H1. Assumption checks include a Shapiro-Wilk test of normality, which is not significant with p=.148. Now, the Bayesian paired-samples t-test shown in Table 6 yields BF10=1521.058, indicating extreme evidence for H1. JASP also produces a plot of the prior and posterior distribution of the effect size δ according to Cohen [25], which is of interest in most medical research settings [29].
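To make the "key ingredient" remark concrete: given the reported BF10 = 1521.058, the posterior probability of H1 follows directly once prior odds are fixed. The equal-prior-odds assumption below is ours, chosen for illustration; a skeptical reader would plug in smaller prior odds.

```python
# Posterior probability of H1 from the reported paired-samples Bayes factor,
# assuming prior odds of 1 (both hypotheses equally likely a priori).
bf10 = 1521.058      # Bayesian paired-samples t-test, Table 6
prior_odds = 1.0     # our assumption, not part of the JASP output

post_odds = bf10 * prior_odds
p_h1 = post_odds / (1 + post_odds)
print(round(p_h1, 4))  # -> 0.9993
```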
Table 5 Paired samples T-Test
Table 6 Bayesian Paired Samples T-Test
Figure 4a shows this prior and posterior plot of the effect size δ as well as the produced BF10. A large advantage of the Bayesian paradigm reveals itself here: The posterior of the effect size δ precisely estimates which effect size is most probable after observing the data x. The frequentist paired-samples t-test did not yield any information about the effect size. Although the test was significant, it did not state anything about whether the observed effect is small, medium or large. The prior-posterior plot shows how the prior probability mass is reallocated to the posterior via observing the data and shows that with 95% probability, the true effect size δ is in [0.818,2.345] and the posterior median is 1.527, indicating a large effect. Another benefit is given by the robustness check plot given in Fig. 4b: Different prior distribution widths are used for the effect size δ and the Bayes factor BF10 is computed. Specifically, the prior width of the Cauchy prior C(0,γ) on the effect size δ is increased gradually, showing how the prior shape influences the resulting BF10. Figure 4b shows that even when changing the prior from the user prior, which equals a medium \(C(0,\sqrt {2}/2)\) prior, to a wide C(0,1) or even ultrawide \(C(0,\sqrt {2})\) prior, the Bayes factor for H1 stays above 1000. Thus, the influence of the prior is negligible here, so that only an inconsequential amount of subjectivity goes into the analysis.
Prior and posterior plot and robustness check for the heart dementia data of Moore and colleagues produced by JASP
Question (3) – linear regression
One of the most widespread methods in biomedical research and clinical trials is linear regression [4]. The dataset used here is from Mestek, Plaisance and Grandjean [38] published in the Journal of American College Health. The study provided 100 participants' Body Mass Index (BMI) and average daily number of steps, investigating this relationship with linear regression models.
A traditional linear regression with the BMI as dependent variable and the average number of daily steps (in thousands) of participants as explanatory variable yields the results given in Table 7. The table shows that physical activity (PA) is a significant predictor of the BMI of participants, as p<.001. While JASP also offers to provide confidence intervals, these are counterintuitive to interpret, and therefore the Bayesian linear regression given in Table 8 is preferred. Again, the change from prior to posterior odds for the model BFM and the Bayes factor for the alternative BF10 are given, as well as the model's prior probability P(M) and the posterior model probability P(M|data) after observing the data. One can conclude from the results that the BFM=284.327 of the physical activity model shows extreme evidence for the model including the variable. Also, the identical BF10 for the alternative H1 relative to H0, where H1 states that the regression coefficient for the PA variable differs from zero, shows that the coefficient for the variable is most probably non-zero. The null hypothesis H0 of a regression coefficient of size zero for the PA variable can thus be rejected based on this result, and even better, the alternative H1 can be regarded as confirmed, which would not be allowed when using p-values, because accepting hypotheses is generally not allowed in frequentist NHST when interpreted in the sense of Ronald Fisher's significance testing. Note that when interpreted from the Neyman-Pearson theory of hypothesis testing, accepting a hypothesis is allowed; but as the Neyman-Pearson theory is only concerned with long-term type I error control, nothing can be said about the hypothesis tested in the performed study or experiment. As Neyman and Pearson (see [39], p. 291) state explicitly, their theory "tells us nothing as to whether in a particular case H is true". Also, the PA model explains 15% of the variance observed in the data, as can be seen from Table 8.
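The reported BFM can again be translated into a posterior model probability. The sketch below assumes only two candidate models (null and PA) with equal prior probability, so that the prior odds are 1 and the posterior odds equal BFM; the actual model space used by JASP may differ, so this is an illustration of the odds arithmetic only.

```python
# Posterior probability of the physical-activity (PA) model from its BFM,
# assuming two candidate models (null, PA) with equal prior probability.
bf_m = 284.327       # change from prior odds to posterior odds, Table 8
prior_odds = 1.0     # assumption: P(null) = P(PA) = 0.5

post_odds = bf_m * prior_odds
p_model = post_odds / (1 + post_odds)
print(round(p_model, 4))  # -> 0.9965
```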
Again in this situation, Table 9 shows the posterior summary of coefficients for the Bayesian linear regression, yielding 95% credible intervals so that inference about the most probable range of coefficient values given the data x can be made. Figure 5a shows a plot of the posterior coefficients obtained from the Bayesian linear regression for the BMI data produced by JASP. The Mean and 95% credible intervals are shown, indicating that the PA coefficient is with 95% probability smaller than −0.326, compare Table 9. Figure 5b shows a residual plot to check the assumption of normally distributed residuals, which seems fine for the Bayesian linear regression model. Note that JASP internally uses the BAS package for R [40] for the computations.
Posterior coefficients with credible intervals and residual plot for the BMI data of Mestek et al. produced by JASP
Table 7 Frequentist linear regression for the BMI data set
Table 8 Bayesian linear regression for the BMI data set
Table 9 Posterior summaries of coefficients
The comparison of NHST and Bayesian methods conducted reveals that the Bayesian approach complements the traditional frequentist tests and provides even richer information for hypothesis testing and parameter estimation. Also, both of these benefits can be achieved with JASP easily.
Not only can Bayes factors be used to quantify the relative evidence for the alternative hypothesis H1 compared to H0 in JASP, but additional parameter estimation with easy to interpret credible intervals makes inference more seamless compared to traditional methods. Also, model comparisons and robustness checks can be included into the main analysis to assess the degree to which the conclusions change with background assumptions like the chosen priors, no matter if a t-test, an analysis of variance or a linear regression model is the method of choice.
Also, detailed plots and visualisations of results are obtained quickly, allowing easier interpretation and communication of analysis results. What is more, a complete analysis in JASP can be saved in a single JASP-file, making it possible to send a conducted analysis to a colleague or even share it publicly. This fosters reproducibility and makes checking results easier for colleagues and reviewers of journals. In contrast, SPSS, Stata or R are less transparent as they often depend on the used libraries and version or require detailed programming knowledge, making reanalysing an original dataset much more complicated and time-consuming.
Bayesian inference in JASP also profits from credible intervals and posterior estimates which are more interpretable than traditional MLE estimates with confidence intervals, and allows for a unified judgement of evidence for a model or hypothesis in form of the Bayes factor. Note that there is a large palette of more options for each method (like prior specification, descriptive statistics, providing BF01 instead of BF10, inclusion probability for coefficients, and so on) not described here due to space reasons. Thus, JASP provides many desirable features for the methods implemented, making it a full-grown alternative to statistics packages like SPSS or Stata while also providing an equally intuitive user interface. A definite advantage of JASP is its ability to conduct a multitude of Bayesian tests in comparison to SPSS or Stata, as well as being free for everyone.
Still, although a good spectrum of statistical tests and methods is available in JASP, there are also limitations. Especially for medical research there are some important methods missing. For example, JASP offers no options for survival analysis, which is essential in clinical trials [41, 42]. Also, more complex generalized linear models are missing, for example there is no Bayesian logistic regression available, a method of large importance for medical research [43]. On the other hand, recently, machine learning algorithms like clustering, penalized regression models, linear discriminant analysis and classification and regression trees have been added in form of a machine learning module.
To review JASP, three worked-out examples of common situations in biomedical research were provided in this paper, consisting of an ANOVA, a paired t-test and a linear regression model. Conducting and interpreting an analysis in JASP is straightforward and guided by an intuitive interface with plenty of explanatory buttons, while the assumptions of a wide variety of tests can be checked within the main analysis via a single mouse click. This is a large benefit over competitors like SPSS or Stata, as these do not offer such a wide range of Bayesian methods and are more complicated, with a steeper learning curve and long manuals.
The program interface, documentation and manuals are intuitive and allow the user to quickly accommodate to JASP. The flexibility gained by including NHST and Bayesian methods is a key advantage of JASP compared to other software, and the performance is flawless as shown by the worked out examples.
In summary, the results show that JASP provides easy access to advanced (Bayesian) statistical methods, and NHST is easily complemented by Bayesian methods. Also, the effect size, often of large relevance in medical research, can be easily estimated in JASP via Bayesian methods for a variety of tests, and this offers another advantage compared to frequentist methods.
Overall, in its current state JASP offers a wide range of suitable tests routinely used in medical research and allows a seamless transition from NHST to Bayesian inference. This shift towards Bayesian alternatives for null hypothesis significance testing could substantially improve the reproducibility and validity of biomedical research.
Project name: JASP
Project home page: https://jasp-stats.org/
Operating system(s): Platform independent
Programming language: C++, R
Other requirements: None
License: Free and open source (FOSS)
Any restrictions to use by non-academics: None
All datasets analysed are available in the JASP standard installation as demonstration data sets, so these can easily be obtained via installing JASP. All results and analyses have been appended as Supplementary files, too.
BF:
Bayes factor
NHST:
Null hypothesis significance testing
JASP:
Jeffreys' awesome statistics package (software)
ANOVA:
Analysis of variance
SPSS:
Statistical Package for the Social Sciences (software)
PA:
Physical activity
Altman DG. Statistics in medical journals. Stat Med. 1982; 1(1):59–71. https://doi.org/10.1002/sim.4780010109.
Altman DG, Gore SM, Gardner MJ, Pocock SJ. Statistical guidelines for contributors to medical journals. Br Med J (Clin Res ed.) 1983; 286(6376):1489–93. https://doi.org/10.1136/bmj.286.6376.1489.
Altman DG. Statistics in medical journals: Developments in the 1980s. Stat Med. 1991; 10(12):1897–913. https://doi.org/10.1002/sim.4780101206.
Altman DG. Practical Statistics for Medical Research. Boca Raton: Chapman and Hall; 1991, p. 611.
Fisher RA. Statistical Methods for Research Workers. Edinburgh: Oliver and Boyd; 1925.
Neyman J, Pearson ES. Contributions to the theory of testing statistical hypotheses. Stat Res Mem. 1936; 1:1–37.
Colquhoun D. An investigation of the false discovery rate and the misinterpretation of p-values. R Soc Open Sci. 2014; 1(3):140216. https://doi.org/10.1098/rsos.140216, http://arxiv.org/abs/1407.5296.
Benjamin DJ, Berger JO. Three recommendations for improving the use of p-values. The Am Stat. 2019; 73(sup1):186–91. https://doi.org/10.1080/00031305.2018.1543135.
Ioannidis JPA. Why most published research findings are false. PLoS Med. 2005; 2(8):e124. https://doi.org/10.1371/journal.pmed.0020124.
Wasserstein RL, Lazar NA. The ASA's statement on p-values: context, process, and purpose. The Am Stat. 2016; 70(2):129–33. https://doi.org/10.1080/00031305.2016.1154108, http://arxiv.org/abs/1011.1669.
Wasserstein RL, Schirm AL, Lazar NA. Moving to a world beyond "p<0.05". The Am Stat. 2019; 73(sup1):1–19. https://doi.org/10.1080/00031305.2019.1583913.
Matthews R, Wasserstein R, Spiegelhalter D. The ASA's p-value statement, one year on. Significance. 2017; 14(2):38–41. https://doi.org/10.1111/j.1740-9713.2017.01021.x.
Colquhoun D. The problem with p-values. 2016. https://aeon.co/essays/it-s-time-for-science-to-abandonthe-term-statistically-significant. Accessed 11 Oct 2016.
Ioannidis JPA. What have we (not) learnt from millions of scientific papers with p-values? The Am Stat. 2019; 73:20–5. https://doi.org/10.1080/00031305.2018.1447512.
Colquhoun D. The reproducibility of research and the misinterpretation of p-values. R Soc Open Sci. 2017; 4(12):171085. https://doi.org/10.1098/rsos.171085.
Ioannidis JPA. Why most clinical research is not useful. PLoS Med. 2016; 13(6):1002049. https://doi.org/10.1371/journal.pmed.1002049.
McElreath R, Smaldino PE. Replication, communication, and the population dynamics of scientific discovery. PLoS ONE. 2015; 10(8):1–16. https://doi.org/10.1371/journal.pone.0136088.
JASP Team. JASP (Version 0.12)[Computer software]. 2020. https://jasp-stats.org/.
Jeffreys H. Scientific Inference. Cambridge: Cambridge University Press; 1931.
R Core Team. R: A language and environment for statistical computing. R Found Stat Comput. 2019. https://www.r-project.org/.
Open Science Foundation. OSF - Open Science Foundation. https://osf.io/. Accessed 25 Oct 2019.
McElreath R. Statistical Rethinking: A Bayesian Course With Examples in R and Stan. Boca Raton: Chapman & Hall, CRC Press; 2016. http://jeb.sagepub.com/cgi/doi/10.3102/1076998616659752.
Robert C, Casella G. Monte Carlo Statistical Methods. New York: Springer; 2004, p. 645.
Kruschke JK. Doing Bayesian Data Analysis: A Tutorial with R, JAGS, and Stan, Second Edition. Oxford: Academic Press; 2015, pp. 1–759. https://doi.org/10.1016/B978-0-12-405888-0.09999-2, http://arxiv.org/abs/arXiv:1011.1669v3.
Cohen J. Statistical Power Analysis for the Behavioral Sciences, 2nd edn. Hillsdale, N.J: Routledge; 1988.
Rouder JN, Speckman PL, Sun D, Morey RD, Iverson G. Bayesian t tests for accepting and rejecting the null hypothesis. Psychon Bull Rev. 2009; 16(2):225–37. https://doi.org/10.3758/PBR.16.2.225.
Jeffreys H. Theory of Probability, 3rd edn. Oxford: Oxford University Press; 1961.
Gelman A, Lee D, Guo J. Stan: A probabilistic programming language for Bayesian inference. J Educ Behav Stat. 2015; 40(5):530–43. https://doi.org/10.3102/1076998615606113.
van Doorn J, van den Bergh D, Bohm U, Dablander F, Derks K, Draws T, Evans NJ, Gronau QF, Hinne M, Kucharský Š, Ly A, Marsman M, Matzke D, Raj A, Sarafoglou A, Stefan A, Voelkel JG, Wagenmakers E-J. The JASP guidelines for conducting and reporting a Bayesian analysis. PsyArxiv Preprint. 2019. https://doi.org/10.31234/osf.io/yqxfr.
Good IJ. Probability and the Weighing of Evidence. London: Charles Griffin; 1950.
Wagenmakers EJ, Love J, Marsman M, Jamil T, Ly A, Verhagen J, Selker R, Gronau QF, Dropmann D, Boutin B, Meerhoff F, Knight P, Raj A, van Kesteren EJ, van Doorn J, Šmíra M, Epskamp S, Etz A, Matzke D, de Jong T, van den Bergh D, Sarafoglou A, Steingroever H, Derks K, Rouder JN, Morey RD. Bayesian inference for psychology. Part II: Example applications with JASP. Psychon Bull Rev. 2018; 25(1):58–76. https://doi.org/10.3758/s13423-017-1323-7.
Goss-Sampson MA. Statistical analysis in JASP 0.10.2: A guide for students; 2019.
Navarro DJ, Foxcroft DR, Faulkenberry TJ. Learning statistics with JASP: A tutorial for psychology students and other beginners; 2019. https://learnstatswithjasp.com/.
Etz A, Vandekerckhove J. A Bayesian perspective on the reproducibility project: Psychology. PLoS ONE. 2016; 11(2):0149794. https://doi.org/10.1371/journal.pone.0149794.
Moore DS, McCabe GP, Craig BA. Introduction to the Practice of Statistics, 9th edn. New York: Freeman, WH; 2012.
Morey RD, Rouder JN. BayesFactor: Computation of Bayes factors for common designs. 2018. https://cran.r-project.org/package=BayesFactor.
van den Bergh D, van Doorn J, Marsman M, Draws T, van Kesteren E, Derks K, Wagenmakers E. A Tutorial on Conducting and Interpreting a Bayesian ANOVA in JASP. 2019. https://doi.org/10.31234/osf.io/spreb.
Mestek ML, Plaisance E, Grandjean P. The relationship between pedometer-determined and self-reported physical activity and body composition variables in college-aged men and women. J Am Coll Health. 2008; 57(1):39–44. https://doi.org/10.3200/JACH.57.1.39-44.
Neyman J, Pearson ES. On the problem of the most efficient tests of statistical hypotheses. Phil Trans R Soc Lond. A. 1933; 231(694-706):289–337. https://doi.org/10.1098/RSTA.1933.0009.
Clyde M. Bayesian variable selection and model averaging using Bayesian adaptive sampling. R Package Version 1.5.5. 2018.
Klein JP, van Houwelingen HC, Ibrahim JG, Scheike TH. Handbook of survival analysis. Boca Raton: Taylor & Francis; 2014. https://doi.org/10.1201/b16248.
Ibrahim JG, Chen M-H, Sinha D. Bayesian Survival Analysis. New York: Springer; 2001, p. 481.
Faraway JJ. Extending the Linear Model with R : Generalized Linear, Mixed Effects and Nonparametric Regression Models, 2nd edn. New York: Chapman and Hall/CRC; 2016, p. 399. https://doi.org/10.1201/9781315382722.
The author is thankful for the reviewer comments provided by Eric-Jan Wagenmakers and Lynn Kuo on a first version of the manuscript, which helped improve the overall quality of the paper.
Department of Mathematics, University of Siegen, Walter-Flex-Str. 3, Siegen, 57072, Germany
Riko Kelter
The author(s) read and approved the final manuscript.
Correspondence to Riko Kelter.
PDF export of JASP-file for the heart rate data analysis
PDF export of JASP-file for the dementia patient data analysis
PDF export of JASP-file for the physical activity and BMI data analysis
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated in a credit line to the data.
Kelter, R. Bayesian alternatives to null hypothesis significance testing in biomedical research: a non-technical introduction to Bayesian inference with JASP. BMC Med Res Methodol 20, 142 (2020). https://doi.org/10.1186/s12874-020-00980-6
Bayesian hypothesis testing
Medical decision making
Replication crisis
Data analysis, statistics and modelling
Submission enquiries: [email protected] | CommonCrawl |
\begin{document}
\begin{abstract}
Hex-trees are identified as a particular instance of weighted unary-binary trees. The Horton-Strahler
numbers of these objects are revisited, and, thanks to a substitution that is not immediately intuitive, explicit
results are possible. They are augmented by asymptotic evaluations as well. Furthermore, marked ordered trees
(in bijection to skew Dyck paths) are investigated, followed by 3-Motzkin paths and multi-edge trees. The underlying
theme is sequence A002212 in the On-Line Encyclopedia of Integer Sequences. \end{abstract}
\maketitle
\section{Introduction}
This paper gives some (mostly new) results about the sequence \begin{equation*} 1, 1, 3, 10, 36, 137, 543, 2219, 9285, 39587, 171369, 751236, 3328218, 14878455,\dots, \end{equation*} which is A002212 in \cite{OEIS}.
Here is the plan for the structures enumerated by this sequence:
Hex-trees \cite{KimStanley}; they are identified as weighted unary-binary trees, with weight one. Apart from left and right branches, as in binary trees, there are also unary branches, and they can come in different colours, here in just one colour. Unary-binary trees played a role in the present author's scientific development, as documented in \cite{FlPr86}, a paper written with the late and great Philippe Flajolet, about the register function (Horton-Strahler numbers) of unary-binary trees. Here, we can offer an improvement, using a ``better'' substitution than in \cite{FlPr86}. The results can now be made fully explicit. As a by-product, this provides a definition and analysis of the Horton-Strahler numbers of Hex-trees. An introductory section (about binary trees) provides all the basics.
Then we move to skew Dyck paths \cite{Deutsch-italy}. They are like Dyck paths, but allow for an extra step $(-1,-1)$, provided that the path does not intersect itself. An equivalent model, defined and described via a bijection in \cite{Deutsch-italy}, is that of marked ordered trees. They are like ordered trees, with an additional feature: each rightmost edge (except those leading to a leaf) can be coloured with one of two colours. Since we find this class of trees interesting, we analyze two of its parameters: the number of leaves and the height. While the number of leaves for ordered trees is about $n/2$, it is only about $2n/5$ in the new model. For the height, the leading term $\sqrt{\pi n}$ drops to $\frac{2}{\sqrt 5}\sqrt{\pi n}$. Of course, many more parameters of this new class of trees could be investigated, and we encourage the reader to do so.
The last two classes of structures are multi-edge trees and 3-Motzkin paths. Our interest in multi-edge trees was already triggered in an earlier publication \cite{HPW}. They may be seen as ordered trees, but with weighted edges; the weights are integers $\ge1$, and a weight $a$ may be interpreted as $a$ parallel edges. 3-Motzkin paths are like Motzkin paths (Dyck paths plus horizontal steps); however, the horizontal steps come in three different colours. A bijection between the two classes is described. Since 3-Motzkin paths and multi-edge trees
are very much alike (via a variation of the classical rotation correspondence), all the structures discussed in this paper can be linked by bijections.
\section{Binary trees and Horton-Strahler numbers}
Binary trees may be expressed by the following symbolic equation, which says that they include the empty tree and trees recursively built from a root followed by two subtrees, which are binary trees: \begin{center}\small
\begin{tikzpicture}
[inner sep=1.3mm,
s1/.style={circle=10pt,draw=black!90,thick},
s2/.style={rectangle,draw=black!50,thick},scale=0.5]
\node at ( -4.8,0) { $\mathscr{B}$};
\node at (-3,0) { $=$};
\node(c) at (-1.5,0){ $\qed$};
\node at (0.7,0) {$+$};
\node(d) at (3,1)[s1]{};
\node(e) at (2,-1){ $\mathscr{B}$};
\node(f) at (4,-1){ $\mathscr{B}$};
\path [draw,-,black!90] (d) -- (e) node{};
\path [draw,-,black!90] (d) -- (f) node{};
\end{tikzpicture} \end{center}
Binary trees are counted by Catalan numbers, and there is an important parameter \textsf{reg}, which in Computer Science is called the register function. It associates to each binary tree (which is used to code an arithmetic expression, with data in the leaves and operators in the internal nodes) the minimal number of extra registers needed to evaluate the tree. The optimal strategy is to evaluate the more difficult subtree first and use one register to keep its value, which does not hurt if the other subtree requires fewer registers. If both subtrees are equally difficult, then one more register is used, compared to the requirements of the subtrees. This natural parameter is known among combinatorialists as the Horton-Strahler numbers, and we will adopt this name throughout this paper.
There is a recursive description of this function: $\textsf{reg}(\square)=0$, and if tree $t$ has subtrees $t_1$ and $t_2$, then \begin{equation*} \textsf{reg}(t)= \begin{cases} \max\{\textsf{reg}(t_1),\textsf{reg}(t_2)\}&\text{ if } \textsf{reg}(t_1)\ne\textsf{reg}(t_2),\\ 1+\textsf{reg}(t_1)&\text{ otherwise}. \end{cases} \end{equation*}
The recursive description attaches numbers to the nodes, starting with 0's at the leaves and then going up; the number appearing at the root is the Horton-Strahler number of the tree. \begin{center}\tiny \begin{tikzpicture} [scale=0.4,inner sep=0.7mm, s1/.style={circle,draw=black!90,thick}, s2/.style={rectangle,draw=black!90,thick}] \node(a) at ( 0,8) [s1] [text=black]{$\boldsymbol{2}$}; \node(b) at ( -4,6) [s1] [text=black]{$1$}; \node(c) at ( 4,6) [s1] [text=black]{$2$}; \node(d) at ( -6,4) [s2] [text=black]{$0$}; \node(e) at ( -2,4) [s1] [text=black]{$1$}; \node(f) at ( 2,4) [s1] [text=black]{$1$}; \node(g) at ( 6,4) [s1] [text=black]{$1$}; \node(h) at ( -3,2) [s2] [text=black]{$0$}; \node(i) at ( -1,2) [s2] [text=black]{$0$}; \node(j) at ( 1,2) [s2] [text=black]{$0$}; \node(k) at ( 3,2) [s2] [text=black]{$0$}; \node(l) at ( 5,2) [s2] [text=black]{$0$}; \node(m) at ( 7,2) [s2] [text=black]{$0$}; \path [draw,-,black!90] (a) -- (b) node{}; \path [draw,-,black!90] (a) -- (c) node{}; \path [draw,-,black!90] (b) -- (d) node{}; \path [draw,-,black!90] (b) -- (e) node{}; \path [draw,-,black!90] (c) -- (f) node{}; \path [draw,-,black!90] (c) -- (g) node{}; \path [draw,-,black!90] (e) -- (h) node{}; \path [draw,-,black!90] (e) -- (i) node{}; \path [draw,-,black!90] (f) -- (j) node{}; \path [draw,-,black!90] (f) -- (k) node{}; \path [draw,-,black!90] (g) -- (l) node{}; \path [draw,-,black!90] (g) -- (m) node{}; \end{tikzpicture} \end{center}
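The recursive rule translates directly into a few lines of code. As a sketch (the encoding of a binary tree as `None` for the empty tree or as a pair of subtrees is our own choice, not from the paper):

```python
def strahler(t):
    """Horton-Strahler number of a binary tree.

    None encodes the empty tree (value 0); otherwise t = (left, right)."""
    if t is None:
        return 0
    a, b = strahler(t[0]), strahler(t[1])
    return max(a, b) if a != b else a + 1

# The example tree pictured above (root labelled 2):
cherry = (None, None)                        # one internal node, value 1
example = ((None, cherry), (cherry, cherry))
assert strahler(example) == 2
```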
Let $\mathscr{R}_{p}$ denote the family of trees with Horton-Strahler number $=p$, then one gets immediately from the recursive definition: \begin{center}\small \begin{tikzpicture} [inner sep=1.3mm, s1/.style={circle=10pt,draw=black!90,thick}, s2/.style={rectangle,draw=black!50,thick},scale=0.5]
\node at ( -5,0) { $\mathscr{R}_p$};
\node at (-4,0) { $=$}; \node(a) at (-2,1)[s1]{}; \node(b) at (-3,-1){ $\mathscr{R}_{p-1}$}; \node(c) at (-1,-1){ $\mathscr{R}_{p-1}$}; \path [draw,-,black!90] (a) -- (b) node{}; \path [draw,-,black!90] (a) -- (c) node{}; \node at (0.7,0) {$+$};
\node(d) at (3,1)[s1]{}; \node(e) at (2,-1){ $\mathscr{R}_{p}$}; \node(f) at (4,-1.2){ $\sum\limits_{j<p}\mathscr{R}_{j} $}; \path [draw,-,black!90] (d) -- (e) node{}; \path [draw,-,black!90] (d) -- (f) node{}; \node at (5+0.7,0) {$+$};
\node(dd) at (5.5+3,1)[s1]{}; \node(ee) at (5.5+2,-1.2){ $\sum\limits_{j<p}\mathscr{R}_{j}$}; \node(ff) at (5.5+4,-1){ $\mathscr{R}_{p}$}; \path [draw,-,black!90] (dd) -- (ee) node{}; \path [draw,-,black!90] (dd) -- (ff) node{}; \end{tikzpicture} \end{center}
In terms of generating functions, these equations read as \begin{equation*} R_p(z)=zR_{p-1}^2(z)+2zR_p(z)\sum_{j<p}R_j(z); \end{equation*} the variable $z$ is used to mark the size (i.~e., the number of internal nodes) of the binary tree.
A historic account of these concepts, from the angle of Philippe Flajolet, who was one of the pioneers, is \cite{register-introduction}; compare also \cite{ECA-historic}.
Amazingly, the recursion for the generating functions $R_p(z)$ can be solved explicitly! The substitution \begin{equation*} z=\frac{u}{(1+u)^2} \end{equation*} which de Bruijn, Knuth, and Rice~\cite{BrKnRi72} also used, produces the nice expression \begin{equation*} R_p(z)=\frac{1-u^2}{u}\frac{u^{2^p}}{1-u^{2^{p+1}}}. \end{equation*} Of course, once this is \emph{known}, it can be proved by induction, using the recursive formula. For the reader's benefit, this will be sketched now.
We start with the auxiliary formula \begin{equation*} \sum_{0\le j<p}\frac{u^{2^j}}{1-u^{2^{j+1}}}=\frac{u}{1-u}-\frac{u^{2^p}}{1-u^{2^{p}}}, \end{equation*} which is easy to prove by induction: For $p=0$, the formula $0=\frac{u}{1-u}-\frac{u}{1-u}$ is correct, and then \begin{align*} \sum_{0\le j<p+1} \frac{u^{2^j}}{1-u^{2^{j+1}}}
&=\frac{u}{1-u}-\frac{u^{2^p}}{1-u^{2^{p}}}+\frac{u^{2^p}}{1-u^{2^{p+1}}}\\
&=\frac{u}{1-u}-\frac{u^{2^p}(1+u^{2^p})}{1-u^{2^{p+1}}}+\frac{u^{2^p}}{1-u^{2^{p+1}}}
=\frac{u}{1-u}-\frac{u^{2^{p+1}}}{1-u^{2^{p+1}}}.
\end{align*} Now the formula for $R_p(z)$ can also be proved by induction. First, $R_0(z)=\frac{1-u^2}{u}\frac{u}{1-u^{2}}=1$, as it should, and \begin{align*} zR_{p-1}^2(z)&+2zR_p(z)\sum_{j<p}R_j(z)\\ &=\frac{u}{(1+u)^2}\frac{(1-u^2)^2}{u^2}\frac{u^{2^{p}}}{(1-u^{2^{p}})^2} +\frac{2u}{(1+u)^2}R_p(z)\sum_{j<p}\frac{1-u^2}{u}\frac{u^{2^j}}{1-u^{2^{j+1}}}\\ &= \frac{u^{2^{p}-1}(1-u)^2}{(1-u^{2^{p}})^2} +\frac{2(1-u)}{(1+u)}R_p(z)\sum_{j<p}\frac{u^{2^j}}{1-u^{2^{j+1}}}. \end{align*} Solving \begin{align*} R_p(z)= \frac{u^{2^{p}-1}(1-u)^2}{(1-u^{2^{p}})^2} +\frac{2(1-u)}{(1+u)}R_p(z)\bigg[\frac{u}{1-u}-\frac{u^{2^p}}{1-u^{2^{p}}}\bigg] \end{align*} leads to \begin{align*} R_p(z)\frac{1-u}{1+u}\bigg[1+2\frac{u^{2^p}}{1-u^{2^{p}}}\bigg]= \frac{u^{2^{p}-1}(1-u)^2}{(1-u^{2^{p}})^2}, \end{align*} or, simplified \begin{align*} R_p(z)= \frac{u^{2^{p}-1}(1-u^2)}{(1-u^{2^{p}})(1+u^{2^{p}})} =\frac{1-u^2}{u}\frac{u^{2^{p}}}{(1-u^{2^{p+1}})}, \end{align*} which is the formula that we needed to prove. \qed
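Since the closed form is fully explicit, it can also be checked numerically at a sample point of the substitution $z=u/(1+u)^2$ (a small sanity check; the value $u=0.3$ is arbitrary):

```python
from math import isclose

u = 0.3
z = u / (1 + u) ** 2

def R(p):
    """Closed form R_p(z) = (1-u^2)/u * u^(2^p) / (1 - u^(2^(p+1)))."""
    return (1 - u * u) / u * u ** (2 ** p) / (1 - u ** (2 ** (p + 1)))

# R_0 = 1, and the recursion R_p = z R_{p-1}^2 + 2 z R_p sum_{j<p} R_j holds:
assert isclose(R(0), 1.0)
for p in range(1, 7):
    rhs = z * R(p - 1) ** 2 + 2 * z * R(p) * sum(R(j) for j in range(p))
    assert isclose(R(p), rhs, rel_tol=1e-9)
```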
\section{Unary-binary trees and Hex-trees} The family of unary-binary trees ${\mathscr{M}}$ might be defined by the symbolic equation \begin{center}
\begin{tikzpicture}
[inner sep=1.3mm,
s1/.style={circle=10pt,draw=black!90,thick},
s2/.style={rectangle,draw=black!50,thick},scale=0.5]
\node at ( -12.8,0.1) { ${\mathscr{M}}$};
\node at (-11.2,0) { $=$};
\node at (-4.5,0) { $+$};
\node(a) at (-2,1)[s1]{};
\node(b) at (-3,-1){ ${\mathscr{M}}$};
\node(c) at (-1,-1){ ${\mathscr{M}}$};
\path [draw,-,black!90] (a) -- (b) node{};
\path [draw,-,black!90] (a) -- (c) node{};
\node(a1) at (-6.5,1)[s1]{};
\node(b1) at (-6.5,-1){ $ {\mathscr{M}}\setminus\{\square\}$};
\path [draw,-,black!90] (a1) -- (b1) node{};
\node at (-9.0,0) { $\square\ \ +$};
\end{tikzpicture} \end{center} The equation for the generating function is \begin{equation*} M=1+z(M-1)+zM^2 \end{equation*}
with the solution
\begin{equation*} M=M(z)=\frac{1-z-\sqrt{1-6z+5z^2}}{2z}=1+z+3{z}^{2}+10{z}^{3}+36{z}^{4}+\cdots;
\end{equation*} the coefficients form again sequence A002212 in \cite{OEIS} and enumerate Schr\"oder paths, among many other things. We will come to equivalent structures a bit later.
In the instance of unary-binary trees, we can also work with a substitution: Set $z=\frac{u}{1+3u+u^2}$, then $M(z)=1+u$. Unary-binary trees and the register function were investigated in \cite{FlPr86}, but the present favourable substitution was not used. Therefore, in this previous paper, asymptotic results were available but no explicit formulae.
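Both facts are easy to check by machine. The following sketch (helper code is ours) computes the series of $M$ from the functional equation $M=1+z(M-1)+zM^2$ by fixed-point iteration, and tests the substitution $z=u/(1+3u+u^2)$ at an arbitrary sample value:

```python
from math import sqrt, isclose

# 1) Coefficients of M(z); each iteration gains one correct coefficient.
N = 8
M = [0] * (N + 1)
for _ in range(N + 1):
    sq = [sum(M[i] * M[n - i] for i in range(n + 1)) for n in range(N + 1)]
    M = [1] + [M[n - 1] - (n == 1) + sq[n - 1] for n in range(1, N + 1)]
assert M == [1, 1, 3, 10, 36, 137, 543, 2219, 9285]   # A002212

# 2) The substitution linearizes the closed form: M(z) = 1 + u.
u = 0.17
z = u / (1 + 3 * u + u * u)
closed = (1 - z - sqrt(1 - 6 * z + 5 * z * z)) / (2 * z)
assert isclose(closed, 1 + u, rel_tol=1e-12)
```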
This works also with a weighted version, where we allow unary edges with $a$ different colours. Then \begin{center}
\begin{tikzpicture}
[inner sep=1.3mm,
s1/.style={circle=10pt,draw=black!90,thick},
s2/.style={rectangle,draw=black!50,thick},scale=0.5]
\node at ( -12.8,0.1) { ${\mathscr{N}}$};
\node at (-11.5,0) { $=$};
\node at (-4.5,0) { $+$};
\node(a) at (-2,1)[s1]{};
\node(b) at (-3,-1){ ${\mathscr{N}}$};
\node(c) at (-1,-1){ ${\mathscr{N}}$};
\path [draw,-,black!90] (a) -- (b) node{};
\path [draw,-,black!90] (a) -- (c) node{}; \node at (-7.5,0){$a\ \cdot$};
\node(a1) at (-6.5,1)[s1]{};
\node(b1) at (-6.5,-1){ $ {\mathscr{N}}\setminus\{\square\}$};
\path [draw,-,black!90] (a1) -- (b1) node{};
\node at (-9.5,0) { $\square\ \ +$};
\end{tikzpicture} \end{center} and with the substitution $z=\frac{u}{1+(a+2)u+u^2}$, the generating function is beautifully expressed as $N(z)=1+u$. For $a=0$, this covers also binary trees.
We will consider the Horton-Strahler numbers of unary-binary trees in the sequel. The definition is naturally extended by
\begin{center}
\begin{tikzpicture}
[inner sep=0.6mm, s1/.style={circle=1pt,draw=black!90,thick}] \node[] at ( -0.300,-0.10) {\textsf{reg}\bigg(}; \node[] at ( 0.6500,-0.10) {\bigg)}; \node[] at ( 1.5500,-0.10) {=\ \textsf{reg}(t).}; \path [draw,-,black!90 ] (0.3,0.34) -- (0.3,-0.350) ; \node [s1]at ( 0.300,0.4) { }; \node[] at ( 0.300,-0.60) {$t$ };
\end{tikzpicture} \end{center}
Now we can move again to $R_p(z)$, the generating function of (generalized) unary-binary trees with Horton-Strahler number $=p$. The recursion (for $p\ge1$) is \begin{center}\small
\begin{tikzpicture}
[inner sep=1.3mm,
s1/.style={circle=10pt,draw=black!90,thick},
s2/.style={rectangle,draw=black!50,thick},scale=0.5]
\node at ( -5,0) { $\mathscr{R}_p$};
\node at (-4,0) { $=$};
\node(a) at (-2,1)[s1]{};
\node(b) at (-3,-1){ $\mathscr{R}_{p-1}$};
\node(c) at (-1,-1){ $\mathscr{R}_{p-1}$};
\path [draw,-,black!90] (a) -- (b) node{};
\path [draw,-,black!90] (a) -- (c) node{};
\node at (0.7,0) {$+$};
\node(d) at (3,1)[s1]{};
\node(e) at (2,-1){ $\mathscr{R}_{p}$};
\node(f) at (4,-1.2){ $\sum\limits_{j<p}\mathscr{R}_{j} $};
\path [draw,-,black!90] (d) -- (e) node{};
\path [draw,-,black!90] (d) -- (f) node{};
\node at (5+0.7,0) {$+$};
\node(dd) at (5.5+3,1)[s1]{};
\node(ee) at (5.5+2,-1.2){ $\sum\limits_{j<p}\mathscr{R}_{j}$};
\node(ff) at (5.5+4,-1){ $\mathscr{R}_{p}$};
\path [draw,-,black!90] (dd) -- (ee) node{};
\path [draw,-,black!90] (dd) -- (ff) node{};
\node(dd) at (13,1)[s1]{};
\node(ee) at (13,-1){ $\mathscr{R}_{p}$};
\path [draw,-,black!90] (dd) -- (ee) node{};
\node at (11.5,0) {$+\ \ a\cdot$};
\end{tikzpicture} \end{center} In terms of generating functions, these equations read as \begin{equation*} R_p(z)=zR_{p-1}^2(z)+2zR_p(z)\sum_{j<p}R_j(z)+azR_p(z), \quad p\ge1;\quad R_0(z)=1. \end{equation*} Amazingly, with the substitution $z=\frac{u}{1+(a+2)u+u^2}$, formally we get the \emph{same} solution as in the binary case: \begin{equation*} R_p(z)=\frac{1-u^2}{u}\frac{u^{2^p}}{1-u^{2^{p+1}}}. \end{equation*} The proof by induction is as before. One sees another advantage of the substitution: On a formal level, many manipulations do not need to be repeated. Only when one switches back to the $z$-world, things become different.
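As a consistency check, one can compute the $R_p$ from this recursion with truncated power series and verify that summing over all $p$ recovers the Catalan numbers for $a=0$ and sequence A002212 for $a=1$ (a sketch; the hand-rolled helpers are ours):

```python
from math import comb

N = 8  # truncation order: coefficients of z^0 .. z^N

def series_div(a, b):
    """Truncated power-series quotient a/b, assuming b[0] == 1."""
    q = [0] * (N + 1)
    for n in range(N + 1):
        q[n] = a[n] - sum(q[k] * b[n - k] for k in range(n))
    return q

def totals(a):
    """Sum over p of [z^n] R_p(z), for unary edges in a colours."""
    Rprev = [1] + [0] * N           # R_0 = 1
    S = list(Rprev)                 # running sum  R_0 + ... + R_{p-1}
    tot = list(Rprev)
    for p in range(1, 5):           # R_p starts at z^(2^p - 1)
        sq = [sum(Rprev[i] * Rprev[n - i] for i in range(n + 1))
              for n in range(N + 1)]
        num = [0] + sq[:N]          # z * R_{p-1}^2
        den = [1, -a - 2 * S[0]] + [-2 * s for s in S[1:N]]   # 1 - az - 2zS
        Rp = series_div(num, den)
        tot = [x + y for x, y in zip(tot, Rp)]
        S = [x + y for x, y in zip(S, Rp)]
        Rprev = Rp
    return tot

assert totals(0) == [comb(2 * n, n) // (n + 1) for n in range(N + 1)]  # Catalan
assert totals(1) == [1, 1, 3, 10, 36, 137, 543, 2219, 9285]            # A002212
```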
Now we move to Hex-trees. \begin{center}
\begin{tikzpicture}
[inner sep=1.3mm,
s1/.style={circle=10pt,draw=black!90,thick},
s2/.style={rectangle,draw=black!50,thick},scale=0.5]
\node at ( -12.4,0.1) { ${\mathscr{H}}$};
\node at (-11.0,0) { $=$};
\node(a) at (-1,1)[s1]{};
\node(b) at (-3,-1){ ${\mathscr{H}\setminus\{\square\}}$};
\node(c) at (1,-1){ ${\mathscr{H}\setminus\{\square\}}$};
\path [draw,-,black!90] (a) -- (b) node{};
\path [draw,-,black!90] (a) -- (c) node{};
\begin{scope}[xshift=13cm] \node at (-10.0,0) { $+$}; \node at (-4.5,0) { $+$}; \node(a1) at (-6.5,1)[s1]{}; \node(b1) at (-7.5,-1){ $ {\mathscr{H}}\setminus\{\square\}$}; \path [draw,-,black!90] (a1) -- (b1) node{}; \end{scope}
\begin{scope}[xshift=21cm]
\node(a1) at (-6.5,1)[s1]{}; \node(b1) at (-5.5,-1){ $ {\mathscr{H}}\setminus\{\square\}$}; \path [draw,-,black!90] (a1) -- (b1) node{}; \end{scope}
\begin{scope}[xshift=17cm]
\node at (-4.5,0) { $+$};
\node(a1) at (-6.5,1)[s1]{};
\node(b1) at (-6.5,-1){ $ {\mathscr{H}}\setminus\{\square\}$};
\path [draw,-,black!90] (a1) -- (b1) node{};
\end{scope}
\node at (-9.0,0) { $\square\ \ +$};
\node[s1] at (-7.0,0) { };
\node at (-5.0,0) {$+$ };
\end{tikzpicture} \end{center}
Hex-trees either have two non-empty subtrees, or one of three types of unary successors (called left, middle, right). The author first encountered this family in \cite{KimStanley}, but older literature can be found by following the references given there and the usual search engines.
The generating function satisfies \begin{align*} H&(z)=1+z(H(z)-1)^2+z+3z(H(z)-1)=\frac{1-z-\sqrt{(1-z)(1-5z)}}{2z}\\ &=1+z+3{z}^{2}+10{z}^{3}+36{z}^{4}+137{z}^{5}+543{z}^{6}+2219 {z}^{7}+9285{z}^{8}+39587{z}^{9}+\cdots. \end{align*} The same generating function also appears in \cite{HPW}, and it is again sequence A002212 in \cite{OEIS}. One can rewrite the symbolic equation as \begin{center}
\begin{tikzpicture}
[inner sep=1.3mm,
s1/.style={circle=10pt,draw=black!90,thick},
s2/.style={rectangle,draw=black!50,thick},scale=0.5]
\node at ( -12.4,0.1) { ${\mathscr{H}}$};
\node at (-11.0,0) { $=$};
\begin{scope}[xshift=-4cm]
\node(a) at (-1,1)[s1]{};
\node(b) at (-3,-1){ $\mathscr{H}$};
\node(c) at (1,-1){ $\mathscr{H}$};
\path [draw,-,black!90] (a) -- (b) node{};
\path [draw,-,black!90] (a) -- (c) node{};
\end{scope}
\begin{scope}[xshift=7.5cm]
\node at (-8.5,0) { $+$};
\node(a1) at (-6.5,1)[s1]{};
\node(b1) at (-6.5,-1){ $ {\mathscr{H}}\setminus\{\square\}$};
\path [draw,-,black!90] (a1) -- (b1) node{};
\end{scope}
\node at (-9.0,0) { $\square\ \ \ +$};
\end{tikzpicture} \end{center} and see in this way that Hex-trees are unary-binary trees (with parameter $a=1$).
\subsection*{Continuing with enumerations}
First, we will enumerate the number of (generalized) unary-binary trees with $n$ (internal) nodes. For that we need the notion of generalized trinomial coefficients, viz. \begin{equation*} \binom{n;1,a,1}{k}:=[z^k](1+az+z^2)^n. \end{equation*} Of course, for $a=2$, this simplifies to a binomial coefficient $\binom{2n}{k}$. We will use contour integration to pull out coefficients, and the contour of integration, in whatever variable, is a small circle (or equivalent) around the origin. The desired number is \begin{align*} [z^n](1+u)&=\frac1{2\pi i}\oint \frac{dz}{z^{n+1}}(1+u)\\ &=\frac1{2\pi i}\oint \frac{du(1-u^2)(1+(a+2)u+u^2)^{n+1}}{(1+(a+2)u+u^2)^2u^{n+1}}(1+u)\\ &=[u^{n}](1-u)(1+u)^2(1+(a+2)u+u^2)^{n-1}\\ &=\binom{n-1;1,a+2,1}{n}+\binom{n-1;1,a+2,1}{n-1}\\* &\hspace*{4cm}-\binom{n-1;1,a+2,1}{n-2}-\binom{n-1;1,a+2,1}{n-3}. \end{align*} Then we introduce $S_p(z)=R_{p}(z)+R_{p+1}(z)+R_{p+2}(z)+\cdots$, the generating function of trees with Horton-Strahler number $\ge p$. Using the summation formula proved earlier, we get \begin{equation*} S_p(z)=\frac{1-u^2}{u}\frac{u^{2^p}}{1-u^{2^{p}}}= \frac{1-u^2}{u}\sum_{k\ge1}u^{k2^p}. \end{equation*} Further, \begin{align*} [z^n]S_p(z)&=\sum_{k\ge1}\frac1{2\pi i}\oint \frac{dz}{z^{n+1}}\frac{1-u^2}{u}u^{k2^p}. \end{align*}
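The coefficient extraction can be verified mechanically by comparing the quantity $[u^n](1-u)(1+u)^2(1+(a+2)u+u^2)^{n-1}$ with the series of $N(z)$ computed from the functional equation $N=1+az(N-1)+zN^2$ (a sketch; the helper names are ours):

```python
def trinom_pow(a, n):
    """Coefficient list of (1 + (a+2)u + u^2)^n."""
    p = [1]
    for _ in range(n):
        q = [0] * (len(p) + 2)
        for i, c in enumerate(p):
            q[i] += c
            q[i + 1] += (a + 2) * c
            q[i + 2] += c
        p = q
    return p

def coeff_extraction(a, n):
    """[u^n] (1-u)(1+u)^2 (1+(a+2)u+u^2)^(n-1)."""
    w = trinom_pow(a, n - 1)
    pre = [1, 1, -1, -1]          # (1-u)(1+u)^2 = 1 + u - u^2 - u^3
    return sum(pre[j] * w[n - j] for j in range(4) if 0 <= n - j < len(w))

def coeff_recursion(a, N):
    """Series of N(z) = 1 + az(N-1) + zN^2, by fixed-point iteration."""
    Ns = [0] * (N + 1)
    for _ in range(N + 1):
        sq = [sum(Ns[i] * Ns[n - i] for i in range(n + 1)) for n in range(N + 1)]
        Ns = [1] + [a * (Ns[n - 1] - (n == 1)) + sq[n - 1] for n in range(1, N + 1)]
    return Ns

for a in (0, 1, 2, 3):
    Ns = coeff_recursion(a, 8)
    assert all(coeff_extraction(a, n) == Ns[n] for n in range(1, 9))
```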
\subsection*{Asymptotics}
We start by deriving asymptotics for the number of (generalized) unary-binary trees with $n$ (internal) nodes. This is a standard application of singularity analysis of generating functions, as described in \cite{FlOd90} and \cite{FS}.
We start from the generating function \begin{equation*}
N(z)=\frac{1-az-\sqrt{1-2(a+2)z+a(a+4)z^2}}{2z} \end{equation*} and determine the singularity closest to the origin, which is the value making the square root disappear:
$z=\frac1{a+4}$. After that, the local expansion of $N(z)$ around this singularity is determined: \begin{equation*} N(z) \sim 2-\sqrt{a+4}\sqrt{1-(a+4)z}. \end{equation*} The translation lemmas given in \cite{FlOd90} and \cite{FS} provide the asymptotics: \begin{align*}
[z^n]N(z)&\sim [z^n]\Big(2-\sqrt{a+4}\sqrt{1-(a+4)z}\Big)\\&
=-\sqrt{a+4}(a+4)^n\frac{n^{-3/2}}{\Gamma(-\frac12)}=(a+4)^{n+1/2}\frac{1}{2\sqrt\pi n^{3/2}}. \end{align*} Note that for $a=0$ this is the well-known formula for the number of binary trees with $n$ nodes.
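A quick numerical comparison of the exact coefficients with the asymptotic formula (a sketch; the coefficient recurrence follows from $N=1+az(N-1)+zN^2$, and the sample size $n=400$ is arbitrary):

```python
from math import pi, sqrt

def coeffs(a, n_max):
    """[z^n] N(z) via the coefficient recurrence of N = 1 + az(N-1) + zN^2."""
    c = [1] + [0] * n_max
    for n in range(1, n_max + 1):
        c[n] = a * c[n - 1] - a * (n == 1) + sum(c[i] * c[n - 1 - i] for i in range(n))
    return c

n = 400
for a in (0, 1):
    exact = coeffs(a, n)[n]
    approx = (a + 4) ** (n + 0.5) / (2 * sqrt(pi) * n ** 1.5)
    assert abs(exact / approx - 1) < 0.02   # relative error is of order 1/n
```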
Now we move to the generating function for the average number of registers. Apart from normalization it is \begin{align*} \sum_{p\ge1}pR_p(z)&=\sum_{p\ge1}S_p(z)=\frac{1-u^2}{u}\sum_{p\ge1}\sum_{k\ge1}u^{k2^p}\\ &=\frac{1-u^2}{u}\sum_{n\ge1}v_2(n)u^n, \end{align*} where $v_2(n)$ is the highest exponent $k$ such that $2^k$ divides $n$.
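The identity $\sum_{p\ge1}\sum_{k\ge1}u^{k2^p}=\sum_{n\ge1}v_2(n)u^n$ is easily checked by machine (a sketch):

```python
N = 64
lhs = [0] * (N + 1)          # coefficients of sum over p>=1, k>=1 of u^(k*2^p)
p = 1
while 2 ** p <= N:
    for k in range(1, N // 2 ** p + 1):
        lhs[k * 2 ** p] += 1
    p += 1

def v2(n):
    """Largest k with 2^k dividing n, via the lowest set bit."""
    return (n & -n).bit_length() - 1

assert lhs[1:] == [v2(n) for n in range(1, N + 1)]
```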
This has to be studied around $u=1$, which, upon setting $u=e^{-t}$, means around $t=0$. Eventually, and that is the only thing that is different here, this is to be retranslated into a singular expansion of $z$ around its singularity, which depends on the parameter $a$.
For the reader's convenience, we also repeat the steps that were known before. The first factor is elementary: \begin{equation*} \frac{1-u^2}{u}\sim2t+{\frac {1}{3}}{t}^{3}+\cdots \end{equation*} For \begin{equation*} \sum_{p\ge1}\sum_{k\ge1}e^{-k2^pt}, \end{equation*} one applies the Mellin transform, with the result \begin{equation*} \frac{\Gamma(s)\zeta(s)}{2^s-1}. \end{equation*} Applying the inversion formula, one finds \begin{equation*} \sum_{p\ge1}\sum_{k\ge1}e^{-k2^pt}=\frac1{2\pi i}\int_{2-i\infty}^{2+i\infty}t^{-s}\frac{\Gamma(s)\zeta(s)}{2^s-1}ds. \end{equation*} Shifting the line of integration to the left, the residues at the poles $s=1$, $s=0$, $s=\chi_k=\frac{2k\pi i}{\log2}$, $k\neq0$ provide enough terms for our asymptotic expansion. \begin{equation*} \frac1{t}+{\frac {\gamma}{2\log2 }}-\frac14- \frac {\log \pi }{2\log2 }+\frac {\log t }{2\log2} +\frac1{\log2}\sum_{k\neq0}\Gamma(\chi_k)\zeta(\chi_k)t^{-\chi_k}. \end{equation*} Combined with the elementary factor, this leads to \begin{equation*} 2+\Big(\frac {\gamma}{\log2 }-\frac12-\frac {\log \pi }{\log2 }+\frac {\log t }{\log2}\Big)t+\frac{2t}{\log2}\sum_{k\neq0}\Gamma(\chi_k)\zeta(\chi_k)t^{-\chi_k}+O(t^2\log t). \end{equation*} Now we want to translate into the original $z$-world. Since $z=\frac{u}{1+(a+2)u+u^2}$, $u=1$ translates into the singularity $z=\frac{1}{4+a}$. Further, \begin{equation*} t\sim \sqrt{4+a}\cdot \sqrt{1-z(4+a)}, \end{equation*} let us abbreviate $A=4+a$, then for singularity analysis we must consider \begin{align*} &\frac {\sqrt{A}\cdot \sqrt{1-zA}\log (1-zA) }{2\log2}\\ &+ \Big(\frac {\gamma}{\log2 }-\frac12-\frac {\log \pi }{\log2 }+\frac{\log A}{2\log 2}\Big)\sqrt{A}\cdot \sqrt{1-zA}\\ &+\frac{2 }{\log2}\sum_{k\neq0}\Gamma(\chi_k)\zeta(\chi_k) A^{\frac{1-\chi_k}2}(1-zA)^{\frac{1-\chi_k}2}. 
\end{align*} The formula that is perhaps less known and needed here is \cite{FlOd90} \begin{align*} [z^n]\log(1-z)\sqrt{1-z}\sim \frac{n^{-3/2}\log n}{2\sqrt \pi}+\frac{n^{-3/2}}{2\sqrt \pi}(-2+\gamma +2\log2); \end{align*} furthermore we need \begin{equation*} [z^n](1-z)^\alpha \sim \frac{n^{-\alpha-1}}{\Gamma(-\alpha)}. \end{equation*} We start with the most complicated term: \begin{align*} \frac{[z^n]\frac {\sqrt{A}\cdot \sqrt{1-zA}\log (1-zA) }{2\log2}}{[z^n]N(z)} &\sim \frac {\sqrt{A}}{2\log2}\frac{A^n\Big(\frac{n^{-3/2}\log n}{2\sqrt \pi}+\frac{n^{-3/2}}{2\sqrt \pi}(-2+\gamma +2\log2)\Big)} {A^{n+1/2}\frac{1}{2\sqrt\pi n^{3/2}}}\\ &= \log_4 n+1+ \frac{\gamma }{2\log2}- \frac{1}{\log2}. \end{align*} The next term we consider is \begin{align*} \Big(\frac {\gamma}{\log2 }-\frac12-\frac {\log \pi }{\log2 }+\frac{\log A}{2\log 2}\Big)&\sqrt{A}\frac{[z^n] \sqrt{1-zA}}{[z^n]N(z)}\\* &\sim \Big(\frac {\gamma}{\log2 }-\frac12-\frac {\log \pi }{\log2 }+\frac{\log A}{2\log 2}\Big)\sqrt{A}\frac{[z^n] \sqrt{1-zA}}{-\sqrt{A}[z^n]\sqrt{1-zA}}\\ &=-\frac {\gamma}{\log2 }+\frac12+\frac {\log \pi }{\log2 }-\frac{\log A}{2\log 2}. \end{align*} The last term we consider is \begin{align*} \frac{2 }{\log2}&\Gamma(\chi_k)\zeta(\chi_k) A^{\frac{1-\chi_k}2}\frac{[z^n](1-zA)^{\frac{1-\chi_k}2}}{-\sqrt{A}[z^n]\sqrt{1-zA}}\\ &\sim-\frac{4 \sqrt\pi }{\log2}\frac{\Gamma(\chi_k)\zeta(\chi_k)}{\Gamma\big(\frac{\chi_k-1}{2}\big)} A^{\frac{1-\chi_k}2}n^{\chi_k/2}. 
\end{align*} Eventually we have evaluated the average value of the Horton-Strahler numbers: \begin{theorem} The average Horton-Strahler number of a (generalized) unary-binary tree with $n$ internal nodes is asymptotically \begin{align*} \log_4 n&- \frac{\gamma }{2\log2}- \frac{1}{\log2}+\frac32+\frac {\log \pi }{\log2 }-\frac{\log A}{2\log 2} -\frac{4 \sqrt{\pi A} }{\log2}\sum_{k\neq0}\frac{\Gamma(\chi_k)\zeta(\chi_k)}{\Gamma\big(\frac{\chi_k-1}{2}\big)} A^{\frac{-\chi_k}2}n^{\chi_k/2}\\ &=\log_4 n- \frac{\gamma }{2\log2}- \frac{1}{\log2}+\frac32+\frac {\log \pi }{\log2 }-\frac{\log A}{2\log 2}+\psi(\log_4n), \end{align*} with a tiny periodic function $\psi(x)$ of period 1. \end{theorem}
\section{Marked ordered trees}
In \cite{Deutsch-italy} we find the following variation of ordered trees: each rightmost edge may be marked or not, provided it does not lead to an endnode (leaf). We depict a marked edge in red and draw all marked ordered trees of size 4 (4 nodes): \begin{figure}
\caption{All 10 marked ordered trees with 4 nodes.}
\end{figure}
Now we move to a symbolic equation for the marked ordered trees: \begin{figure}
\caption{Symbolic equation for marked ordered trees.\\ $\mathscr{A}\cdots\mathscr{A}$ refers to $\ge0$ copies of $\mathscr{A}$.}
\end{figure}
In terms of generating functions, \begin{equation*} A=z+\frac{z}{1-A}z+\frac{z}{1-A}2(A-z), \end{equation*} with the solution \begin{equation*} A(z)=\frac{1-z-\sqrt{1-6z+5z^2}}{2}=z+z^2+3z^3+10z^4+36z^5+\cdots. \end{equation*}
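The functional equation $A=z+\frac{z}{1-A}z+\frac{z}{1-A}2(A-z)$ pins down the series, which can be recomputed by fixed-point iteration on truncated power series (a sketch with hand-rolled helpers):

```python
N = 7  # truncation order

def mul(a, b):
    """Product of two series, truncated at z^N."""
    c = [0] * (N + 1)
    for i, ai in enumerate(a):
        if ai:
            for j in range(N + 1 - i):
                c[i + j] += ai * b[j]
    return c

z = [0, 1] + [0] * (N - 1)
A = [0] * (N + 1)
for _ in range(N + 1):                  # each round gains one coefficient
    geom = [1] + [0] * N                # 1/(1-A) = sum of A^m, valid as A(0)=0
    power = [1] + [0] * N
    for _ in range(N):
        power = mul(power, A)
        geom = [g + p for g, p in zip(geom, power)]
    inner = [2 * x for x in A]          # z + 2(A - z) = 2A - z
    inner[1] -= 1
    A = [x + y for x, y in zip(z, mul(mul(z, inner), geom))]

assert A == [0, 1, 1, 3, 10, 36, 137, 543]
```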
The importance of this family of trees lies in the bijection to skew Dyck paths, as given in \cite{Deutsch-italy}. One walks around the tree as one usually does and translates it into a Dyck path. The only difference concerns the red edges: on the way down, nothing special happens, but on the way up, a red edge is translated into a skew step $(-1,-1)$. The present author believes that, when it comes to enumeration issues, trees are more manageable than skew Dyck paths.
The 10 trees of Figure 1 translate as follows: \begin{equation*}
\begin{tikzpicture}[scale=0.3]
\draw (0,0)--(3,3)--(6,0);
\end{tikzpicture} \quad \begin{tikzpicture}[scale=0.3]
\draw (0,0)--(3,3)--(5,1)--(4,0); \end{tikzpicture} \quad \begin{tikzpicture}[scale=0.3]
\draw (0,0)--(3,3)--(4,2)--(3,1)--(4,0); \end{tikzpicture} \quad \begin{tikzpicture}[scale=0.3]
\draw (0,0)--(3,3)--(4,2)--(3,1)--(2,0); \end{tikzpicture} \quad \begin{tikzpicture}[scale=0.3]
\draw (0,0)--(2,2)--(4,0)--(5,1)--(6,0); \end{tikzpicture} \end{equation*} \begin{equation*} \begin{tikzpicture}[scale=0.3]
\draw (0,0)--(1,1)--(2,0)--(4,2)--(6,0); \end{tikzpicture} \quad \begin{tikzpicture}[scale=0.3]
\draw (0,0)--(1,1)--(2,0)--(4,2)--(5,1)--(4,0); \end{tikzpicture} \quad \begin{tikzpicture}[scale=0.3]
\draw (0,0)--(1,1)--(2,2)--(3,1)--(4,2)--(6,0); \end{tikzpicture} \quad \begin{tikzpicture}[scale=0.3]
\draw (0,0)--(1,1)--(2,2)--(3,1)--(4,2)--(5,1)--(4,0); \end{tikzpicture} \quad \begin{tikzpicture}[scale=0.3]
\draw (0,0)--(1,1)--(2,0)--(3,1)--(4,0)--(5,1)--(6,0); \end{tikzpicture} \end{equation*}
\section{Parameters of marked ordered trees}
Many parameters that are usually considered in the context of ordered trees can also be studied for marked ordered trees. Of course, we cannot be encyclopaedic, also in order to keep the balance with the other structures considered in this paper. We just treat a few parameters and leave further analysis to the future.
\subsection*{The number of leaves}
To get this, it is most natural to use an additional variable $u$ when translating the symbolic equation, so that $z^nu^k$ refers to trees with $n$ nodes and $k$ leaves. One obtains \begin{equation*} F=zu+\frac{z}{1-F}\bigl(zu+2(F-zu)\bigr), \end{equation*} with the solution \begin{align*} F(z,u)&=-z+\frac{zu}2+\frac12-\frac12\sqrt {4{z}^{2}-4z+{z}^{2}{u}^{2}-2zu+1}\\* &=zu+{z}^{2}u+ \left( 2u+{u}^{2}\right) {z}^{3}+ \left( 4u+5{u}^{2}+{u}^{3} \right) {z}^{4}+\cdots. \end{align*} The factor $4u+5{u}^{2}+{u}^{3}$ corresponds to the 10 trees in Figure 1.
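The bivariate series can be recomputed from the equation $F=zu+\frac{z}{1-F}\bigl(zu+2(F-zu)\bigr)$; the following sketch (dictionary-based bivariate polynomials keyed by $(i,j)$ for $z^iu^j$, our own encoding) confirms the leaf statistics of Figure 1:

```python
N = 4  # track z-degree up to 4

def bmul(a, b):
    """Product of bivariate polynomials, truncated in the z-degree."""
    c = {}
    for (i, j), x in a.items():
        for (k, l), y in b.items():
            if i + k <= N:
                key = (i + k, j + l)
                c[key] = c.get(key, 0) + x * y
    return {k: v for k, v in c.items() if v}

F = {}
for _ in range(N + 1):                       # fixed-point iteration
    geom = {(0, 0): 1}                       # 1/(1-F), valid since F(0) = 0
    power = {(0, 0): 1}
    for _ in range(N):
        power = bmul(power, F)
        for k, v in power.items():
            geom[k] = geom.get(k, 0) + v
    inner = {k: 2 * v for k, v in F.items()}     # 2F
    inner[(1, 1)] = inner.get((1, 1), 0) - 1     # 2F - zu = zu + 2(F - zu)
    rhs = bmul(bmul({(1, 0): 1}, inner), geom)   # z (2F - zu) / (1 - F)
    F = {(1, 1): 1}
    for k, v in rhs.items():
        F[k] = F.get(k, 0) + v
F = {k: v for k, v in F.items() if v}

# Trees with 4 nodes carry 4u + 5u^2 + u^3, matching Figure 1.
assert {j: c for (i, j), c in F.items() if i == 4} == {1: 4, 2: 5, 3: 1}
```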
Of interest is also the average number of leaves, when all marked ordered trees of size $n$ are considered to be equally likely. For that, we differentiate $F(z,u)$ with respect to $u$, then set $u:=1$, with the result \begin{equation*} \frac{z}{2}+\frac{z-z^2}{2\sqrt{1-6z+5z^2}}=\frac{z}{1-v}, \quad\text{with the usual}\quad z=\frac{v}{1+3v+v^2}. \end{equation*} Since $F(z,1)=z(1+v)$, it follows that the average is asymptotic to \begin{align*} \frac{[z^{n+1}]\frac{z}{1-v}}{[z^{n+1}]z(1+v)}&=\frac{[z^{n}]\frac{1}{1-v}}{[z^{n}](1+v)}\sim\frac{[z^n]\frac1{\sqrt5}\frac1{\sqrt{1-5z}}}{5^{n+\frac12}\frac{n^{-3/2}}{2\sqrt\pi}}\\ &=\frac{\frac{5^{n-\frac12}n^{-1/2}}{\Gamma(\frac12)}}{5^{n+\frac12}\frac{n^{-3/2}}{2\sqrt\pi}}=\frac{2n}{5}. \end{align*} Note that the corresponding number for ordered trees (unmarked) is $\frac n2$, so we have fewer leaves here.
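The exact average is easy to compute from the coefficient recurrences, which gives a numerical sanity check on the asymptotics (a sketch; the recurrence for $v$ follows from $v=z(1+3v+v^2)$, and the helper names are ours):

```python
N = 300
v = [0, 1] + [0] * (N - 1)              # series of v(z)
for n in range(2, N + 1):
    v[n] = 3 * v[n - 1] + sum(v[i] * v[n - 1 - i] for i in range(1, n - 1))

g = [1] + [0] * N                        # series of 1/(1 - v)
for n in range(1, N + 1):
    g[n] = sum(v[k] * g[n - k] for k in range(1, n + 1))

def avg_leaves(n):
    """Total leaves [z^n] z/(1-v) divided by the tree count [z^n] z(1+v)."""
    return g[n - 1] / v[n - 1]

assert avg_leaves(4) == 1.7             # 17 leaves in the 10 trees of Figure 1
assert abs(avg_leaves(300) / 300 - 2 / 5) < 0.01   # average grows like 2n/5
```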
\subsection*{The height}
As in the seminal paper \cite{BrKnRi72}, we define the height in terms of the longest chain of nodes from the root to a leaf. Further, let $p_h=p_h(z)$ be the generating function of marked ordered trees of height $\le h$. From the symbolic equation, \begin{align*} p_{h+1}=z+\frac{z^2}{1-p_h}+\frac{2z(p_h-z)}{1-p_h}=-z+\frac{2z-z^2}{1-p_h},\quad h\ge1,\ p_1=z. \end{align*} By some creative guessing, separating numerator and denominator, we find the solution \begin{equation*} p_h=z(1+v)\frac{(1+2v)^{h-1}-v^h(v+2)^{h-1}}{(1+2v)^{h-1}-v^{h+1}(v+2)^{h-1}}, \end{equation*} which is easy to prove by induction. The limit for $h\to\infty$ is $z(1+v)$, the generating function of \emph{all} marked ordered trees, as expected. Taking differences, we get the generating functions of trees of height $>h$: \begin{align*} z(1+v)&-z(1+v)\frac{(1+2v)^{h-1}-v^h(v+2)^{h-1}}{(1+2v)^{h-1}-v^{h+1}(v+2)^{h-1}}\\ &=z(1+v)\frac{(1+2v)^{h-1}-v^{h+1}(v+2)^{h-1}-(1+2v)^{h-1}+v^h(v+2)^{h-1}}{(1+2v)^{h-1}-v^{h+1}(v+2)^{h-1}}\\ &=z(1-v^2)\frac{(v+2)^{h-1}v^h}{(1+2v)^{h-1}-v^{h+1}(v+2)^{h-1}}. \end{align*} From this, the average height can be worked out, as in the model paper \cite{HPW}. We sketch the essential steps. For the average height, one needs \begin{equation*} \sum_{h\ge0}z(1-v^2)\frac{(v+2)^{h-1}v^h}{(1+2v)^{h-1}-v^{h+1}(v+2)^{h-1}} \end{equation*} and its behaviour around $v=1$, viz. \begin{equation*} 2z(1-v)\sum_{h\ge0}\frac{3^{h-1}v^h}{3^{h-1}-v^{h+1}3^{h-1}} \sim2z(1-v)\sum_{h\ge1}\frac{v^h}{1-v^{h}}. \end{equation*} The behaviour of the series can be taken straight from \cite{HPW}.
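Before continuing, the closed form for $p_h$ can be tested against the recursion $p_{h+1}=-z+(2z-z^2)/(1-p_h)$ at a numerical sample point (the value of $v$ is arbitrary):

```python
from math import isclose

v = 0.123
z = v / (1 + 3 * v + v * v)

def closed(h):
    """p_h via the closed form, in terms of v."""
    a = (1 + 2 * v) ** (h - 1)
    b = (v + 2) ** (h - 1)
    return z * (1 + v) * (a - v ** h * b) / (a - v ** (h + 1) * b)

p = z                                        # p_1 = z
for h in range(1, 8):
    assert isclose(p, closed(h), rel_tol=1e-12)
    p = -z + (2 * z - z * z) / (1 - p)       # recursion step to p_{h+1}
```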
We find there \begin{equation*} \sum_{h\ge1}\frac{v^h}{1-v^{h}}\sim-\frac{\log(1-v)}{1-v}\qquad(v\to1), \end{equation*} and further \begin{equation*}
\sum_{h\ge0}z(1-v^2)\frac{(v+2)^{h-1}v^h}{(1+2v)^{h-1}-v^{h+1}(v+2)^{h-1}} \sim- 2z\log(1-v), \end{equation*} so that the coefficient of $z^{n+1}$ is asymptotic to $-2[z^n]\log(1-v)$. Since $1-v\sim \sqrt5\sqrt{1-5z}$, \begin{equation*} - 2z\log(1-v)\sim -2z\log\sqrt{1-5z}= -z\log(1-5z), \end{equation*} and the coefficient of $z^{n+1}$ in it is asymptotic to $\frac{5^n}{n}$. This has to be divided (as derived earlier) by \begin{equation*} 5^{n+\frac12}\frac{1}{2\sqrt\pi n^{3/2}}, \end{equation*} with the result \begin{equation*} 2\frac{5^n}{n}\frac1{5^{n+\frac12}}\sqrt\pi n^{3/2}=\frac{2}{\sqrt5}\sqrt{\pi n}. \end{equation*} Note that the constant in front of $\sqrt{\pi n}$ for ordered trees is $\frac{2}{\sqrt4}=1$, so the average height for marked ordered trees is indeed a bit smaller thanks to the extra markings.
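The closed form for $p_h$ and the recursion it came from can also be confronted with each other on the computer. The following sketch (ad-hoc truncated power series arithmetic; the truncation order and the range of checked heights are arbitrary choices) iterates $p_{h+1}=-z+\frac{2z-z^2}{1-p_h}$ and compares the outcome with the guessed solution.

```python
N = 10  # truncation order: series are lists of coefficients of z^0..z^{N-1}

def mul(a, b):
    c = [0] * N
    for i, ai in enumerate(a):
        if ai:
            for j, bj in enumerate(b):
                if i + j < N:
                    c[i + j] += ai * bj
    return c

def add(a, b):
    return [x + y for x, y in zip(a, b)]

def neg(a):
    return [-x for x in a]

def inv(a):  # reciprocal of a series with constant term 1
    b = [1] + [0] * (N - 1)
    for n in range(1, N):
        b[n] = -sum(a[k] * b[n - k] for k in range(1, n + 1))
    return b

def pw(a, k):
    r = [1] + [0] * (N - 1)
    for _ in range(k):
        r = mul(r, a)
    return r

one = [1] + [0] * (N - 1)
z = [0, 1] + [0] * (N - 2)

v = [0] * N  # v = z(1+3v+v^2); each iteration fixes one more coefficient
for _ in range(N):
    v = mul(z, add(add(one, [3 * x for x in v]), mul(v, v)))

def closed(h):  # the guessed closed form for p_h
    A = pw(add(one, [2 * x for x in v]), h - 1)   # (1+2v)^(h-1)
    B = pw(add([2 * x for x in one], v), h - 1)   # (v+2)^(h-1)
    num = add(A, neg(mul(pw(v, h), B)))
    den = add(A, neg(mul(pw(v, h + 1), B)))
    return mul(mul(z, add(one, v)), mul(num, inv(den)))

ok, p = True, z  # p_1 = z, then iterate p_{h+1} = -z + (2z-z^2)/(1-p_h)
for h in range(1, 6):
    p = add(neg(z), mul([0, 2, -1] + [0] * (N - 3), inv(add(one, neg(p)))))
    ok = ok and p == closed(h + 1)
print(ok)  # -> True
```

For $h\ge N$ the powers $v^h$ vanish modulo $z^N$, so the closed form collapses to $z(1+v)$, in accordance with the limit statement above.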
\section{A bijection between multi-edge trees and 3-coloured Motzkin paths}
Multi-edge trees are like ordered (planar, plane, \dots) trees, but instead of edges there are multiple edges. When drawing such a tree, instead of drawing, say, 5 parallel edges, we just draw one edge and put the number 5 on it as a label. These trees were studied in \cite{polish, HPW}. For the enumeration, one must count edges. The generating function $F(z)$ satisfies \begin{equation*}
F(z)=\sum_{k\ge0}\Big(\frac{z}{1-z}F(z)\Big)^k=\frac1{1-\frac{z}{1-z}F(z)}, \end{equation*} whence \begin{equation*}
F(z)=\frac{1-z-\sqrt{1-6z+5z^2}}{2z}=1+z+3{z}^{2}+10{z}^{3}+36{z}^{4}+137{z}^{5}+543{z}^{6}+\cdots. \end{equation*} The coefficients form once again sequence A002212 in \cite{OEIS}.
A Motzkin path consists of up-steps, down-steps, and horizontal steps; see sequence A091965 in \cite{OEIS} and the references given there. Like Dyck paths, they start at the origin and end, after $n$ steps, again on the $x$-axis, but are not allowed to go below the $x$-axis. A 3-coloured Motzkin path is built as a Motzkin path, but there are 3 different types of horizontal steps, which we call \emph{red, green, blue}. The generating function $M(z)$ satisfies \begin{equation*}
M(z)=1+3zM(z)+z^2M(z)^2=\frac{1-3z-\sqrt{1-6z+5z^2}}{2z^2}, \quad\text{or}\quad F(z)=1+zM(z). \end{equation*} So multi-edge trees with $N$ edges (counting the multiplicities) correspond to 3-coloured Motzkin paths of length $N-1$.
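Both series expansions are easy to verify on the computer. The following sketch (plain coefficient lists; the truncation order is an arbitrary choice) computes $M$ from its quadratic equation, forms $F=1+zM$, and reproduces the coefficients $1,1,3,10,36,137,543$.

```python
N = 8  # truncation order: series are lists of coefficients of z^0..z^{N-1}

def mul(a, b):
    c = [0] * N
    for i, ai in enumerate(a):
        if ai:
            for j, bj in enumerate(b):
                if i + j < N:
                    c[i + j] += ai * bj
    return c

def shift(a, k):  # multiply by z^k
    return [0] * k + a[:N - k]

one = [1] + [0] * (N - 1)

M = [0] * N  # M = 1 + 3zM + z^2 M^2; one more coefficient per iteration
for _ in range(N):
    M = [o + t + q for o, t, q in
         zip(one, shift([3 * m for m in M], 1), shift(mul(M, M), 2))]

F = [o + m for o, m in zip(one, shift(M, 1))]  # F = 1 + zM
print(F[:7])  # -> [1, 1, 3, 10, 36, 137, 543]
```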
The purpose of this note is to describe a bijection. It transforms trees into paths, but all steps are reversible.
\subsection*{The details}
As a first step, the multiplicities are ignored, and the tree then has only $n$ edges. The standard translation of such a tree into the world of Dyck paths, which is in every book on combinatorics, leads to a Dyck path of length $2n$. Then the Dyck path is transformed bijectively into a 2-coloured Motzkin path of length $n-1$ (the colours used are red and green). This transformation plays a prominent role in \cite{Shapiro}, but is most likely much older. I believe that people like Viennot have known this for 40 years. I would be glad to get a proper historic account from the gentle readers.
The last step is then to use the third colour (blue) to deal with the multiplicities.
The first up-step and the last down-step of the Dyck path will be deleted. Then, the remaining $2n-2$ steps are coded pairwise into a 2-Motzkin path of length $n-1$: \begin{equation*}
\begin{tikzpicture}[scale=0.3]
\path (0,0) node(x1) {\tiny$\bullet$} ;
\path (1,1) node(x2) {\tiny$\bullet$};
\path (2,2) node(x3) {\tiny$\bullet$};
\draw (0,0) -- (2,2);
\end{tikzpicture}
\raisebox{0.5 em}{$\longrightarrow$}
\begin{tikzpicture}[scale=0.3]
\path (0,0) node(x1) {\tiny$\bullet$} ;
\path (1,1) node(x2) {\tiny$\bullet$};
\draw (0,0) -- (1,1);
\end{tikzpicture}
\qquad
\begin{tikzpicture}[scale=0.3]
\path (0,2) node(x1) {\tiny$\bullet$} ;
\path (1,1) node(x2) {\tiny$\bullet$};
\path (2,0) node(x3) {\tiny$\bullet$};
\draw (0,2) -- (2,0);
\end{tikzpicture}
\raisebox{0.5 em}{$\longrightarrow$}
\begin{tikzpicture}[scale=0.3]
\path (0,1) node(x1) {\tiny$\bullet$} ;
\path (1,0) node(x2) {\tiny$\bullet$};
\draw (0,1) -- (1,0);
\end{tikzpicture}
\qquad
\begin{tikzpicture}[scale=0.3]
\path (0,0) node(x1) {\tiny$\bullet$} ;
\path (1,1) node(x2) {\tiny$\bullet$};
\path (2,0) node(x3) {\tiny$\bullet$};
\draw (0,0) -- (1,1) -- (2,0);
\end{tikzpicture}
\raisebox{0.5 em}{$\longrightarrow$}
\begin{tikzpicture}[scale=0.3]
\path (0,0) node[red](x1) {\tiny$\bullet$} ;
\path (1,0) node[red](x2) {\tiny$\bullet$};
\draw[red, very thick] (0,0) -- (1,0);
\end{tikzpicture}
\qquad
\begin{tikzpicture}[scale=0.3]
\path (0,0) node(x1) {\tiny$\bullet$} ;
\path (1,-1) node(x2) {\tiny$\bullet$};
\path (2,0) node(x3) {\tiny$\bullet$};
\draw (0,0) -- (1,-1) -- (2,0);
\end{tikzpicture}
\raisebox{0.5 em}{$\longrightarrow$}
\begin{tikzpicture}[scale=0.3]
\path (0,0) node[green](x1) {\tiny$\bullet$} ;
\path (1,0) node[green](x2) {\tiny$\bullet$};
\draw[green, very thick] (0,0) -- (1,0);
\end{tikzpicture} \end{equation*}
The last step is to deal with the multiplicities. If an edge is labelled with the number $a$, we insert $a-1$ horizontal blue steps in the following way: Since there are currently $n-1$ symbols in the path, we have $n$ possible positions to enter something (at the beginning, at the end, or between symbols). We go through the tree in pre-order and enter the multiplicities one by one using the blue horizontal steps.
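The complete bijection is short enough to be spelled out in code. The following Python sketch uses our own ad-hoc conventions, not notation from the text: a multi-edge tree is encoded as the list of (multiplicity, subtree) pairs hanging off the root, and the letters u, d, r, g, b stand for up, down, red, green and blue steps.

```python
def dyck(forest):
    # preorder walk: one U...D pair per edge, multiplicities ignored
    w = []
    for _, sub in forest:
        w += ['U'] + dyck(sub) + ['D']
    return w

def mults(forest):
    # edge multiplicities in preorder
    m = []
    for a, sub in forest:
        m += [a] + mults(sub)
    return m

# pairs of Dyck steps -> steps of the 2-coloured Motzkin path
PAIR = {('U', 'U'): 'u', ('D', 'D'): 'd', ('U', 'D'): 'r', ('D', 'U'): 'g'}

def to_motzkin(tree):
    w = dyck(tree)[1:-1]                 # cut off first and last step
    path = [PAIR[w[2 * i], w[2 * i + 1]] for i in range(len(w) // 2)]
    out = []
    for i, a in enumerate(mults(tree)):  # the i-th preorder edge fills
        out += 'b' * (a - 1)             # the i-th gap with a-1 blue steps
        if i < len(path):
            out.append(path[i])
    return ''.join(out)

print(to_motzkin([(2, [(1, [])])]))  # second tree of the table -> br
```

All steps are reversible, so the inverse direction can be coded along the same lines.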
To make this procedure clearer, we have prepared a list of the 10 multi-edge trees with 3 edges and the corresponding 3-Motzkin paths of length 2, with the intermediate steps completely worked out:
\begin{center}
\begin{table}[h]
\begin{tabular}{c | c | c |c}
\text{Multi-edge tree }&\text{Dyck path}&\text{2-Motzkin path}&\text{blue edges added}\\
\hline\hline
\begin{tikzpicture}[scale=0.5]
\path (0,0) node(x1) {\tiny$\bullet$} ;
\path (0,-1) node(x2) {\tiny$\bullet$};
\path (0,-2) node(x3) {\tiny$\bullet$};
\path (0,-3) node(x4) {\tiny$\bullet$};
\draw (0,0) -- (0,-1)node[pos=0.5,left]{\tiny1} ;
\draw (0,-1) -- (0,-2)node[pos=0.5,left]{\tiny1} ;
\draw (0,-2) -- (0,-3)node[pos=0.5,left]{\tiny 1} ;
\end{tikzpicture}
& \begin{tikzpicture}[scale=0.45]
\draw (0,0) -- (3,3) --(6,0);
\end{tikzpicture}
&
\begin{tikzpicture}[scale=0.45]
\draw[thick] (0,0) -- (1,1) --(2,0);
\end{tikzpicture}
& \begin{tikzpicture}[scale=0.45]
\draw[thick] (0,0) -- (1,1) --(2,0);
\end{tikzpicture}\\
\hline
\begin{tikzpicture}[scale=0.5]
\path (0,0) node(x1) {\tiny$\bullet$} ;
\path (0,-1) node(x2) {\tiny$\bullet$};
\path (0,-2) node(x3) {\tiny$\bullet$};
\draw (0,0) -- (0,-1)node[pos=0.5,left]{\tiny2} ;
\draw (0,-1) -- (0,-2)node[pos=0.5,left]{\tiny1} ;
\end{tikzpicture}
& \begin{tikzpicture}[scale=0.45]
\draw (0,0) -- (2,2) --(4,0);
\end{tikzpicture}
& \begin{tikzpicture}[scale=0.45]
\draw [red,thick](0,0) -- (1,0);
\end{tikzpicture}
& \begin{tikzpicture}[scale=0.45]
\draw [blue,thick](0,0) -- (1,0);
\draw [red,thick](1,0) -- (2,0);
\end{tikzpicture}\\
\hline
\begin{tikzpicture}[scale=0.5]
\path (0,0) node(x1) {\tiny$\bullet$} ;
\path (0,-1) node(x2) {\tiny$\bullet$};
\path (0,-2) node(x3) {\tiny$\bullet$};
\draw (0,0) -- (0,-1)node[pos=0.5,left]{\tiny1} ;
\draw (0,-1) -- (0,-2)node[pos=0.5,left]{\tiny2} ;
\end{tikzpicture}
& \begin{tikzpicture}[scale=0.45]
\draw (0,0) -- (2,2) --(4,0);
\end{tikzpicture}& \begin{tikzpicture}[scale=0.45]
\draw [red,thick](0,0) -- (1,0);
\end{tikzpicture}& \begin{tikzpicture}[scale=0.45]
\draw [red,thick](0,0) -- (1,0);
\draw [blue,thick](1,0) -- (2,0);
\end{tikzpicture}\\
\hline
\begin{tikzpicture}[scale=0.5]
\path (0,0) node(x1) {\tiny$\bullet$} ;
\path (0,-1) node(x2) {\tiny$\bullet$};
\draw (0,0) -- (0,-1)node[pos=0.5,left]{\tiny3} ;
\end{tikzpicture}
& \begin{tikzpicture}[scale=0.45]
\draw (0,0) -- (1,1) --(2,0);
\end{tikzpicture} & & \begin{tikzpicture}[scale=0.45]
\draw [blue,thick](0,0) -- (2,0);
\end{tikzpicture}\\
\hline
\begin{tikzpicture}[scale=0.5]
\path (0,0) node(x1) {\tiny$\bullet$} ;
\path (-1,-1) node(x2) {\tiny$\bullet$};
\path (-1,-2) node(x3) {\tiny$\bullet$};
\path (1,-1) node(x4) {\tiny$\bullet$};
\draw (0,0) -- (-1,-1)node[pos=0.3,left]{\tiny1} ;
\draw (-1,-1) -- (-1,-2)node[pos=0.3,left]{\tiny1} ;
\draw (0,0) -- (1,-1)node[pos=0.3,right]{\tiny1} ;
\end{tikzpicture}
& \begin{tikzpicture}[scale=0.45]
\draw (0,0) -- (2,2) --(4,0)--(5,1)--(6,0);
\end{tikzpicture}& \begin{tikzpicture}[scale=0.45]
\draw [red,thick](0,0) -- (1,0);
\draw [green,thick](1,0) -- (2,0);
\end{tikzpicture}& \begin{tikzpicture}[scale=0.45]
\draw [red,thick](0,0) -- (1,0);
\draw [green,thick](1,0) -- (2,0);
\end{tikzpicture}\\
\hline
\begin{tikzpicture}[scale=0.5]
\path (0,0) node(x1) {\tiny$\bullet$} ;
\path (-1,-1) node(x2) {\tiny$\bullet$};
\path (1,-1) node(x4) {\tiny$\bullet$};
\draw (0,0) -- (-1,-1)node[pos=0.3,left]{\tiny2} ;
\draw (0,0) -- (1,-1)node[pos=0.3,right]{\tiny1} ;
\end{tikzpicture}
& \begin{tikzpicture}[scale=0.45]
\draw (0,0) -- (1,1) --(2,0)--(3,1)--(4,0);
\end{tikzpicture}& \begin{tikzpicture}[scale=0.45]
\draw [green,thick](0,0) -- (1,0);
\end{tikzpicture} & \begin{tikzpicture}[scale=0.45]
\draw [blue,thick](0,0) -- (1,0);
\draw [green,thick](1,0) -- (2,0);
\end{tikzpicture}\\
\hline
\begin{tikzpicture}[scale=0.5]
\path (0,0) node(x1) {\tiny$\bullet$} ;
\path (-1,-1) node(x2) {\tiny$\bullet$};
\path (1,-1) node(x4) {\tiny$\bullet$};
\draw (0,0) -- (-1,-1)node[pos=0.3,left]{\tiny1} ;
\draw (0,0) -- (1,-1)node[pos=0.3,right]{\tiny2} ;
\end{tikzpicture}
& \begin{tikzpicture}[scale=0.45]
\draw (0,0) -- (1,1) --(2,0)--(3,1)--(4,0);
\end{tikzpicture}& \begin{tikzpicture}[scale=0.45]
\draw [green,thick](0,0) -- (1,0);
\end{tikzpicture}& \begin{tikzpicture}[scale=0.45]
\draw [green,thick](0,0) -- (1,0);
\draw [blue,thick](1,0) -- (2,0);
\end{tikzpicture}\\
\hline
\begin{tikzpicture}[scale=0.5]
\path (-1,0) node(x1) {\tiny$\bullet $} ;
\path (-1,1) node(x2) {\tiny$\bullet$};
\path (-2,2) node(x3) {\tiny$\bullet$};
\path (-3,1) node(x4) {\tiny$\bullet$};
\draw (-1,0) -- (-1,1)node[pos=0.7,right]{\tiny1} ;
\draw (-1,1) -- (-2,2)node[pos=0.7,right]{\tiny1} ;
\draw (-2,2) -- (-3,1)node[pos=0.3,left]{\tiny1} ;
\end{tikzpicture}
& \begin{tikzpicture}[scale=0.45]
\draw (0,0) -- (1,1) --(2,0)--(4,2)--(6,0);
\end{tikzpicture}& \begin{tikzpicture}[scale=0.45]
\draw [green,thick](0,0) -- (1,0);
\draw[red,thick](1,0)--(2,0);
\end{tikzpicture}& \begin{tikzpicture}[scale=0.45]
\draw [green,thick](0,0) -- (1,0);
\draw[red,thick](1,0)--(2,0);
\end{tikzpicture}\\
\hline
\begin{tikzpicture}[scale=0.5]
\path (0,0) node(x1) {\tiny$\bullet$} ;
\path (-1,-1) node(x2) {\tiny$\bullet$};
\path (0,-1) node(x3) {\tiny$\bullet$};
\path (1,-1) node(x4) {\tiny$\bullet$};
\draw (0,0) -- (-1,-1)node[pos=0.3,left]{\tiny1} ;
\draw (0,0) -- (0,-1)node[pos=0.6]{\tiny\;\;1} ;
\draw (0,0) -- (1,-1)node[pos=0.3,right]{\tiny1} ;
\end{tikzpicture}
& \begin{tikzpicture}[scale=0.45]
\draw (0,0) -- (1,1) --(2,0)--(3,1)--(4,0)--(5,1)--(6,0);
\end{tikzpicture}& \begin{tikzpicture}[scale=0.45]
\draw [green,thick](0,0) -- (1,0);
\draw [green,thick](1,0) -- (2,0);
\end{tikzpicture}& \begin{tikzpicture}[scale=0.45]
\draw [green,thick](0,0) -- (1,0);
\draw [green,thick](1,0) -- (2,0);
\end{tikzpicture}\\
\hline
\begin{tikzpicture}[scale=0.5]
\path (0,0) node(x1) {\tiny$\bullet$} ;
\path (0,-1) node(x2) {\tiny$\bullet$};
\path (-1,-2) node(x3) {\tiny$\bullet$};
\path (1,-2) node(x4) {\tiny$\bullet$};
\draw (0,0) -- (0,-1)node[pos=0.5,left]{\tiny1} ;
\draw (0,-1) -- (1,-2)node[pos=0.5,right]{\tiny1} ;
\draw (0,-1) -- (-1,-2)node[pos=0.5,left]{\tiny 1} ;
\end{tikzpicture}
& \begin{tikzpicture}[scale=0.45]
\draw (0,0) -- (2,2) --(3,1)--(4,2)--(6,0);
\end{tikzpicture}& \begin{tikzpicture}[scale=0.45]
\draw [red,thick](0,0) -- (1,0);
\draw [red,thick](1,0) -- (2,0);
\end{tikzpicture}& \begin{tikzpicture}[scale=0.45]
\draw [red,thick](0,0) -- (1,0);
\draw [red,thick](1,0) -- (2,0);
\end{tikzpicture}\\
\end{tabular}
\caption{The first column shows a multi-edge tree with 3 edges, the second column the standard Dyck path (multiplicities ignored), the third column the 2-Motzkin path obtained by cutting off the first and last step and translating the remaining pairs of steps, and the fourth column the final path after inserting blue horizontal steps according to the multiplicities.}
\end{table}
\end{center}
\subsection*{Connecting unary-binary trees with multi-edge trees}
This is not too difficult: We start from multi-edge trees, and ignore the multiplicities at the moment. Then we apply the classical rotation correspondence (also called: natural correspondence).
Then we add vertical edges where the multiplicity is higher than 1. To be precise, if there is a node, and an edge with multiplicity $a$ leads to it from the top, we insert $a-1$ extra nodes in a chain on the top and connect them with unary branches. The following example with 10 objects will help to understand this procedure.
After that, all the structures studied in this paper are connected with bijections.
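In code, the whole procedure is equally short. The following sketch uses our own ad-hoc encoding (a multi-edge tree is the list of (multiplicity, subtree) pairs at the root; a node of the resulting unary-binary tree records in 'unary' how many extra nodes are chained on top of it):

```python
def rotate(forest):
    # classical rotation: first child -> left, next sibling -> right;
    # an edge of multiplicity a contributes a-1 unary nodes on top
    if not forest:
        return None
    (a, sub), rest = forest[0], forest[1:]
    return {'unary': a - 1, 'left': rotate(sub), 'right': rotate(rest)}

def size(t):
    # total number of nodes of the unary-binary tree
    if t is None:
        return 0
    return 1 + t['unary'] + size(t['left']) + size(t['right'])

# a multi-edge tree of total edge weight N corresponds to N nodes
print(size(rotate([(2, [(1, [])])])))  # -> 3
```

Since an edge of multiplicity $a$ contributes one binary node and $a-1$ unary nodes, a multi-edge tree of total weight $N$ is sent to a unary-binary tree with $N$ nodes.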
\begin{center}
\begin{table}[h]
\begin{tabular}{c | c | c}
\text{Multi-edge tree }&\text{Binary tree (rotation)}&\text{vertical edges added}\\
\hline\hline
\begin{tikzpicture}[scale=0.5]
\path (0,0) node(x1) {\tiny$\bullet$} ;
\path (0,-1) node(x2) {\tiny$\bullet$};
\path (0,-2) node(x3) {\tiny$\bullet$};
\path (0,-3) node(x4) {\tiny$\bullet$};
\draw (0,0) -- (0,-1)node[pos=0.5,left]{\tiny1} ;
\draw (0,-1) -- (0,-2)node[pos=0.5,left]{\tiny1} ;
\draw (0,-2) -- (0,-3)node[pos=0.5,left]{\tiny 1} ;
\end{tikzpicture}
& \begin{tikzpicture}[scale=0.5]
\path (0,0) node(x1) {\tiny$\bullet$} ;
\path (-1,-1) node(x2) {\tiny$\bullet$};
\path (-2,-2) node(x3) {\tiny$\bullet$};
\draw (0,0) -- (-1,-1);
\draw (-1,-1) -- (-2,-2) ;
\end{tikzpicture}
& \begin{tikzpicture}[scale=0.5]
\path (0,0) node(x1) {\tiny$\bullet$} ;
\path (-1,-1) node(x2) {\tiny$\bullet$};
\path (-2,-2) node(x3) {\tiny$\bullet$};
\draw (0,0) -- (-1,-1);
\draw (-1,-1) -- (-2,-2) ;
\end{tikzpicture}
\\
\hline
\begin{tikzpicture}[scale=0.5]
\path (0,0) node(x1) {\tiny$\bullet$} ;
\path (0,-1) node(x2) {\tiny$\bullet$};
\path (0,-2) node(x3) {\tiny$\bullet$};
\draw (0,0) -- (0,-1)node[pos=0.5,left]{\tiny2} ;
\draw (0,-1) -- (0,-2)node[pos=0.5,left]{\tiny1} ;
\end{tikzpicture}
& \begin{tikzpicture}[scale=0.5]
\path (0,0) node(x1) {\tiny$\bullet$} ;
\path (-1,-1) node(x2) {\tiny$\bullet$};
\draw (0,0) -- (-1,-1);
\end{tikzpicture}
&\begin{tikzpicture}[scale=0.5]
\path (0,0) node(x1) {\tiny$\bullet$} ;
\path (-1,-1) node(x2) {\tiny$\bullet$};
\path (0,1) node(x3) {\tiny$\bullet$};
\draw (0,0) -- (-1,-1);
\draw (0,0) -- (0,1) ;
\end{tikzpicture}
\\
\hline
\begin{tikzpicture}[scale=0.5]
\path (0,0) node(x1) {\tiny$\bullet$} ;
\path (0,-1) node(x2) {\tiny$\bullet$};
\path (0,-2) node(x3) {\tiny$\bullet$};
\draw (0,0) -- (0,-1)node[pos=0.5,left]{\tiny1} ;
\draw (0,-1) -- (0,-2)node[pos=0.5,left]{\tiny2} ;
\end{tikzpicture}
& \begin{tikzpicture}[scale=0.5]
\path (0,0) node(x1) {\tiny$\bullet$} ;
\path (-1,-1) node(x2) {\tiny$\bullet$};
\draw (0,0) -- (-1,-1);
\end{tikzpicture}
&\begin{tikzpicture}[scale=0.5]
\path (-1,0) node(x1) {\tiny$\bullet$} ;
\path (-1,-1) node(x2) {\tiny$\bullet$};
\path (0,1) node(x3) {\tiny$\bullet$};
\draw (-1,0) -- (-1,-1);
\draw (-1,0) -- (0,1) ;
\end{tikzpicture}
\\
\hline
\begin{tikzpicture}[scale=0.5]
\path (0,0) node(x1) {\tiny$\bullet$} ;
\path (0,-1) node(x2) {\tiny$\bullet$};
\draw (0,0) -- (0,-1)node[pos=0.5,left]{\tiny3} ;
\end{tikzpicture}
& \begin{tikzpicture}[scale=0.5]
\path (0,0) node(x1) {\tiny$\bullet$} ;
\end{tikzpicture}
&\begin{tikzpicture}[scale=0.5]
\path (0,0) node(x1) {\tiny$\bullet$} ;
\path (0,-1) node(x2) {\tiny$\bullet$};
\path (0,1) node(x3) {\tiny$\bullet$};
\draw (0,0) -- (0,-1);
\draw (0,0) -- (0,1) ;
\end{tikzpicture}
\\
\hline
\begin{tikzpicture}[scale=0.5]
\path (0,0) node(x1) {\tiny$\bullet$} ;
\path (-1,-1) node(x2) {\tiny$\bullet$};
\path (-1,-2) node(x3) {\tiny$\bullet$};
\path (1,-1) node(x4) {\tiny$\bullet$};
\draw (0,0) -- (-1,-1)node[pos=0.3,left]{\tiny1} ;
\draw (-1,-1) -- (-1,-2)node[pos=0.3,left]{\tiny1} ;
\draw (0,0) -- (1,-1)node[pos=0.3,right]{\tiny1} ;
\end{tikzpicture}
& \begin{tikzpicture}[scale=0.5]
\path (0,-2) node(x1) {\tiny$\bullet$} ;
\path (-1,-1) node(x2) {\tiny$\bullet$};
\path (-2,-2) node(x3) {\tiny$\bullet$};
\draw (0,-2) -- (-1,-1);
\draw (-1,-1) -- (-2,-2) ;
\end{tikzpicture}
& \begin{tikzpicture}[scale=0.5]
\path (0,-2) node(x1) {\tiny$\bullet$} ;
\path (-1,-1) node(x2) {\tiny$\bullet$};
\path (-2,-2) node(x3) {\tiny$\bullet$};
\draw (0,-2) -- (-1,-1);
\draw (-1,-1) -- (-2,-2) ;
\end{tikzpicture}
\\
\hline
\begin{tikzpicture}[scale=0.5]
\path (0,0) node(x1) {\tiny$\bullet$} ;
\path (-1,-1) node(x2) {\tiny$\bullet$};
\path (1,-1) node(x4) {\tiny$\bullet$};
\draw (0,0) -- (-1,-1)node[pos=0.3,left]{\tiny2} ;
\draw (0,0) -- (1,-1)node[pos=0.3,right]{\tiny1} ;
\end{tikzpicture}
& \begin{tikzpicture}[scale=0.5]
\path (0,-2) node(x1) {\tiny$\bullet$} ;
\path (-1,-1) node(x2) {\tiny$\bullet$};
\draw (0,-2) -- (-1,-1);
\end{tikzpicture}
& \begin{tikzpicture}[scale=0.5]
\path (0,0) node(x1) {\tiny$\bullet$} ;
\path (1,-1) node(x2) {\tiny$\bullet$};
\path (0,1) node(x3) {\tiny$\bullet$};
\draw (0,0) -- (1,-1);
\draw (0,0) -- (0,1) ;
\end{tikzpicture}
\\
\hline
\begin{tikzpicture}[scale=0.5]
\path (0,0) node(x1) {\tiny$\bullet$} ;
\path (-1,-1) node(x2) {\tiny$\bullet$};
\path (1,-1) node(x4) {\tiny$\bullet$};
\draw (0,0) -- (-1,-1)node[pos=0.3,left]{\tiny1} ;
\draw (0,0) -- (1,-1)node[pos=0.3,right]{\tiny2} ;
\end{tikzpicture}
& \begin{tikzpicture}[scale=0.5]
\path (0,-2) node(x1) {\tiny$\bullet$} ;
\path (-1,-1) node(x2) {\tiny$\bullet$};
\draw (0,-2) -- (-1,-1);
\end{tikzpicture}
& \begin{tikzpicture}[scale=0.5]
\path (0,0) node(x1) {\tiny$\bullet$} ;
\path (0,-1) node(x2) {\tiny$\bullet$};
\path (-1,1) node(x3) {\tiny$\bullet$};
\draw (0,0) -- (-1,1);
\draw (0,0) -- (0,-1) ;
\end{tikzpicture}
\\
\hline
\begin{tikzpicture}[scale=0.5]
\path (-1,0) node(x1) {\tiny$\bullet $} ;
\path (-1,1) node(x2) {\tiny$\bullet$};
\path (-2,2) node(x3) {\tiny$\bullet$};
\path (-3,1) node(x4) {\tiny$\bullet$};
\draw (-1,0) -- (-1,1)node[pos=0.7,right]{\tiny1} ;
\draw (-1,1) -- (-2,2)node[pos=0.7,right]{\tiny1} ;
\draw (-2,2) -- (-3,1)node[pos=0.3,left]{\tiny1} ;
\end{tikzpicture}
& \begin{tikzpicture}[scale=0.5]
\path (-2,0) node(x1) {\tiny$\bullet$} ;
\path (-1,-1) node(x2) {\tiny$\bullet$};
\path (-2,-2) node(x3) {\tiny$\bullet$};
\draw (-2,0) -- (-1,-1);
\draw (-1,-1) -- (-2,-2) ;
\end{tikzpicture}
& \begin{tikzpicture}[scale=0.5]
\path (-2,0) node(x1) {\tiny$\bullet$} ;
\path (-1,-1) node(x2) {\tiny$\bullet$};
\path (-2,-2) node(x3) {\tiny$\bullet$};
\draw (-2,0) -- (-1,-1);
\draw (-1,-1) -- (-2,-2) ;
\end{tikzpicture}
\\
\hline
\begin{tikzpicture}[scale=0.5]
\path (0,0) node(x1) {\tiny$\bullet$} ;
\path (-1,-1) node(x2) {\tiny$\bullet$};
\path (0,-1) node(x3) {\tiny$\bullet$};
\path (1,-1) node(x4) {\tiny$\bullet$};
\draw (0,0) -- (-1,-1)node[pos=0.3,left]{\tiny1} ;
\draw (0,0) -- (0,-1)node[pos=0.6]{\tiny\;\;1} ;
\draw (0,0) -- (1,-1)node[pos=0.3,right]{\tiny1} ;
\end{tikzpicture}&
\begin{tikzpicture}[scale=0.5]
\path (-3,3) node(x1) {\tiny$\bullet$} ;
\path (-1,1) node(x2) {\tiny$\bullet$};
\path (-2,2) node(x3) {\tiny$\bullet$};
\draw (-2,2) -- (-1,1);
\draw (-3,3) -- (-2,2) ;
\end{tikzpicture}
&\begin{tikzpicture}[scale=0.5]
\path (-3,3) node(x1) {\tiny$\bullet$} ;
\path (-1,1) node(x2) {\tiny$\bullet$};
\path (-2,2) node(x3) {\tiny$\bullet$};
\draw (-2,2) -- (-1,1);
\draw (-3,3) -- (-2,2) ;
\end{tikzpicture}
\\
\hline
\begin{tikzpicture}[scale=0.5]
\path (0,0) node(x1) {\tiny$\bullet$} ;
\path (0,-1) node(x2) {\tiny$\bullet$};
\path (-1,-2) node(x3) {\tiny$\bullet$};
\path (1,-2) node(x4) {\tiny$\bullet$};
\draw (0,0) -- (0,-1)node[pos=0.5,left]{\tiny1} ;
\draw (0,-1) -- (1,-2)node[pos=0.5,right]{\tiny1} ;
\draw (0,-1) -- (-1,-2)node[pos=0.5,left]{\tiny 1} ;
\end{tikzpicture}
& \begin{tikzpicture}[scale=0.5]
\path (-2,0) node(x1) {\tiny$\bullet$} ;
\path (-1,1) node(x2) {\tiny$\bullet$};
\path (-1,-1) node(x3) {\tiny$\bullet$};
\draw (-2,0) -- (-1,-1);
\draw (-2,0) -- (-1,1) ;
\end{tikzpicture}
& \begin{tikzpicture}[scale=0.5]
\path (-2,0) node(x1) {\tiny$\bullet$} ;
\path (-1,1) node(x2) {\tiny$\bullet$};
\path (-1,-1) node(x3) {\tiny$\bullet$};
\draw (-2,0) -- (-1,-1);
\draw (-2,0) -- (-1,1) ;
\end{tikzpicture}\\
\end{tabular}
\caption{The first column shows a multi-edge tree with 3 edges, the second column the corresponding binary tree according to the classical rotation correspondence, with multiplicities ignored.
The third column shows the result of inserting extra unary edges when the multiplicities are higher than 1.}
\end{table} \end{center}
\end{document}
\begin{document}
\begin{center} {\LARGE \bf Isogeometric Parametrization Inspired by Large Elastic Deformation} \\[4mm] Alexander Shamanskiy$^1$, Michael Helmut Gfrerer$^1$, Jochen Hinz$^2$ and Bernd Simeon$^1$ \\[1mm] $^1$ TU Kaiserslautern, Dept. of Mathematics\\ $^2$ TU Delft, Dept. of Mathematics \end{center}
\begin{quote} \small {\bf Abstract: } The construction of volumetric parametrizations for computational domains is a key step in the pipeline of isogeometric analysis. Here, we investigate a solution to this problem based on the mesh deformation approach. The desired domain is modeled as a deformed configuration of an initial simple geometry. Assuming that the parametrization of the initial domain is bijective and that it is possible to find a locally invertible displacement field, the method yields a bijective parametrization of the target domain. We compute the displacement field by solving the equations of nonlinear elasticity with the neo-Hookean material law, and we present an efficient variation of the incremental loading algorithm tuned specifically to this application. In order to construct the initial domain, we simplify the target domain's boundary by means of an $L^2$-projection onto a coarse basis and then apply the Coons patch approach. The proposed methodology is not restricted to a single-patch scenario but can be utilized to construct multi-patch parametrizations with natural-looking boundaries between neighboring patches. We illustrate its performance and compare the results to other established parametrization approaches on a range of two-dimensional and three-dimensional examples.
{\bf Keywords:} isogeometric analysis, domain parametrization, mesh deformation, nonlinear elasticity. \end{quote}
\section{Introduction} A common problem in isogeometric analysis (IGA) \cite{Hughes2005,Cottrell.2009} is generating a volumetric parametrization for the computational domain when only a description of its boundary is available. In this work, we investigate an approach to solving this problem which is based on mesh deformation. The parametrization for the target domain is acquired as a deformed configuration of a simple initial domain. The approach is related to a class of arbitrary Lagrangian-Eulerian methods in problems of fluid-structure interaction \cite{bazilevs2013,stein2003mesh,crosetto2011fluid} and to the interface tracking methods in free-surface flow problems \cite{zwicke2017boundary}. In the context of IGA, the approach has been applied in order to construct volumetric meshes consisting of a T-spline surface layer and a core of Lagrangian elements \cite{harmel2017volumetric}. Although only small deformations are considered, similar ideas are used to generate curvilinear meshes from piecewise linear triangulations in \cite{persson2009curved}.
We apply the mesh deformation approach to generate high-quality tensor product B-spline and NURBS parametrizations for complicated geometries. It is done by first simplifying the target domain's boundary so that the Coons patch approach \cite{Piegl1997,farin1999discrete} can be applied to produce a bijective and uniform parametrization of the resulting simple geometry. The simplification can be conducted by means of projection in an $L^2$-sense onto a coarse basis; however, a number of ad hoc methods can be applied in every particular situation which makes the approach very flexible. Next, we deform the simplified geometry so that its boundary coincides with the target domain's boundary. We search for the unknown displacement field as a solution to the system of nonlinear elasticity equations with a prescribed boundary displacement. By using the logarithmic neo-Hookean material law, we exclude self-penetrations of the material and preserve the bijectivity of the initial parametrization. Moreover, we can partially preserve the uniformity of the initial parametrization by considering a nearly incompressible material. In order to efficiently solve the nonlinear elasticity equations, we employ a variation of the incremental loading algorithm. It numerically preserves bijectivity of the solution and can operate with an adaptive stepsize.
The problem of generating tensor product B-spline and NURBS parametrizations has received a lot of attention since the introduction of IGA. Let us give a short overview of the state-of-the-art in the field. One of the simplest methods to construct a volumetric parametrization from a boundary description is the Coons patch. Although nothing guarantees that the resulting parametrization is bijective, the method is explicit, and its output can be used as a starting point for more sophisticated parametrization techniques. Nonlinear optimization \cite{Gravesen2014,xu2011parameterization,falini2015planar,xu2013constructing,pan2018low,ugalde2018injectivity} is a popular approach which allows one to construct parametrizations that are optimal with respect to a chosen quality measure; the bijectivity is often enforced as an external constraint. Many other approaches seek to construct the parametrization as an inverse of a bijective mapping from the target domain to the parametric space. Among examples are the inverse of harmonic mappings \cite{nguyen2010parameterization,nguyen2012construction} and elliptic grid generation \cite{hinz2018elliptic}. For domains belonging to a certain class of geometries, specialized techniques have been developed. Examples are swept volumes \cite{aigner2009swept} and star-shaped domains \cite{Arioli2017}.
The rest of this paper is structured as follows. Section 2 gives a brief introduction into continuum mechanics and fixes necessary notation. Section 3 states the parametrization problem and then outlines the mesh deformation approach to its solution in broad brush-strokes. Section 4 deals with the construction of the initial domain, and numerical algorithms for computing the deformation are described in Section 5. A range of 2D and 3D examples is presented in Section 6 as well as a comparison of the results of the mesh deformation approach to other established parametrization techniques. Finally, Section 7 draws a conclusion and outlines further research directions.
\section{Nonlinear elasticity in a nutshell} The following is a brief introduction to continuum mechanics $-$ based on \cite{wriggers2008nonlinear} $-$ where the focus lies on fixing the notation for the ingredients necessary for describing the mesh deformation approach in Section 3.
Let $\Omega_0\subset\mathbb{R}^d$ be a reference configuration of a solid body undergoing a deformation $\pmb{\Phi}$. For each material point $\f{x}\in\Omega_0$, its position in the deformed configuration $\pmb{\Phi}(\f{x}) = \f{y}\in\Omega\subset\mathbb{R}^d$ can be expressed in terms of a displacement vector field $\f{u}:\Omega_0\to\mathbb{R}^d$ such that \begin{equation}\label{eq:disp} \f{y} = \f{x} + \f{u}(\f{x}). \end{equation} Next, the deformation gradient $\f{F}:\Omega_0\to\mathbb{R}^{d\times d}$ is defined as \begin{equation} \f{F} = \nabla_\f{x}\pmb{\Phi} = \f{I} + \nabla_\f{x}\f{u}. \end{equation} Its determinant $J = \operatorname{det}\f{F}$ measures a relative volume change. Since self-penetration during deformation of the body is excluded, the mapping $\pmb{\Phi}$ must be bijective and the condition \begin{equation}\label{eq:J} J(\f{u}) > 0 \end{equation} has to hold. In what follows we often use $J(\f{u})$ and $J(\pmb{\Phi})$ interchangeably.
Let the solid body be subject to volume forces $\f{g}$. From the conservation of linear momentum, it follows that the displacement $\f{u}$ fulfills the equations \begin{equation}\label{eq:momentum} -\operatorname{div}(\f{F}\f{S})(\f{u}) = \f{g} \text{ in }\Omega_0. \end{equation} Here $\f{S}$ is the second Piola-Kirchhoff stress tensor which measures internal forces arising in the deformed solid body in response to the applied external load. Equations (\ref{eq:momentum}) are incomplete unless a relation between $\f{S}$ and $\f{u}$ $-$ called a material law $-$ is defined. In the present paper, we use \begin{equation}\label{eq:neoHook} \f{S}(\f{u}) = \lambda\ln J(\f{u})\f{C}(\f{u})^{-1} + \mu(\f{I}-\f{C}(\f{u})^{-1}), \end{equation} where the right Cauchy-Green tensor is defined as $\f{C} = \f{F}^T\f{F}$. The relation (\ref{eq:neoHook}) constitutes a particular choice of a nonlinear neo-Hookean material law. Note that due to the presence of $\ln J$, any displacement field $\f{u}$ satisfying the equations (\ref{eq:momentum}) with the material law (\ref{eq:neoHook}) grants a bijective deformation $\pmb{\Phi}$, i.e., (\ref{eq:J}) holds. The material law (\ref{eq:neoHook}) includes two constitutive parameters $-$ the so-called Lam\'e constants $\lambda$ and $\mu$ $-$ which can be computed from Young's modulus $E$ and Poisson's ratio $\nu$: \begin{equation} \lambda = \frac{\nu E}{(1+\nu)(1-2\nu)},\hspace{0.3cm} \mu = \frac{E}{2(1+\nu)}. \end{equation} Finally, for the equations (\ref{eq:momentum}) to have a unique solution they have to be equipped with boundary conditions: \begin{align} \f{u} &= \f{u}_\mathcal{D} \text{ on } \partial\Omega_0^\mathcal{D},\\ \f{F}\f{S}\f{n} &= \f{f} \text{ on } \partial\Omega_0^\mathcal{N}, \end{align} where $\partial\Omega_0^\mathcal{D}$ and $\partial\Omega_0^\mathcal{N}$ are the parts of the domain boundary $\partial\Omega_0$ with the prescribed displacement $\f{u}_\mathcal{D}$ and traction $\f{f}$; $\f{n}$ is the outer surface normal.
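For concreteness, the material law (\ref{eq:neoHook}) together with the Lam\'e constants can be written out in a few lines. The following two-dimensional Python sketch is purely illustrative (matrices as nested lists; the default parameter values are our own assumptions, not values from the text):

```python
import math

def lame(E, nu):
    # Lame constants from Young's modulus E and Poisson's ratio nu
    return nu * E / ((1 + nu) * (1 - 2 * nu)), E / (2 * (1 + nu))

def second_piola_kirchhoff(grad_u, E=1.0, nu=0.45):
    # logarithmic neo-Hookean law S = lam*ln(J)*C^{-1} + mu*(I - C^{-1}) in 2D
    lam, mu = lame(E, nu)
    F = [[1 + grad_u[0][0], grad_u[0][1]],
         [grad_u[1][0], 1 + grad_u[1][1]]]      # F = I + grad u
    J = F[0][0] * F[1][1] - F[0][1] * F[1][0]   # J = det F
    assert J > 0, "deformation is not locally invertible"
    # C = F^T F and its inverse
    C = [[F[0][0] ** 2 + F[1][0] ** 2, F[0][0] * F[0][1] + F[1][0] * F[1][1]],
         [F[0][0] * F[0][1] + F[1][0] * F[1][1], F[0][1] ** 2 + F[1][1] ** 2]]
    d = C[0][0] * C[1][1] - C[0][1] * C[1][0]
    Ci = [[C[1][1] / d, -C[0][1] / d], [-C[1][0] / d, C[0][0] / d]]
    I = [[1.0, 0.0], [0.0, 1.0]]
    return [[lam * math.log(J) * Ci[i][j] + mu * (I[i][j] - Ci[i][j])
             for j in range(2)] for i in range(2)]

# zero displacement gives zero stress
print(second_piola_kirchhoff([[0.0, 0.0], [0.0, 0.0]]))
```

A displacement gradient with $J\le 0$ trips the assertion, mirroring the role of the $\ln J$ term in excluding self-penetration.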
\section{Mesh deformation approach}
Assume that for the target domain $\Omega\subset\mathbb{R}^d$ only a parametrization $\pmb{\partial}\f{G}(\pmb{\xi}):\partial[0,1]^d\to\partial\Omega$ of its boundary is available. The problem of domain parametrization is to construct a parametrization $\f{G}(\pmb{\xi}):[0,1]^d\to\Omega$ such that $\f{G}|_{\partial[0,1]^d} = \pmb{\partial}\f{G}$. Moreover, in order to be suitable for numerical simulations, the parametrization $\f{G}$ has to be bijective, i.e., the condition \begin{equation} J(\f{G}) = \operatorname{det}\nabla_{\pmb{\xi}}\f{G} > 0 \end{equation} must hold.
Assume further that the boundary parametrization $\pmb{\partial}\f{G}$ is given in terms of four compatible B-spline curves (for $d=2$) or six compatible surfaces (for $d=3$). By compatible we mean that the oppositely lying parts of the boundary have the same B-spline basis. In this case, the tensor product basis $\{B_i(\pmb{\xi})\}$ of the unknown parametrization $\f{G}$ is defined, and $\f{G}$ has the structure \begin{equation}\label{eq:basis} \f{G}(\pmb{\xi}) = \sum_{i=1}^{n}\f{c}_iB_i(\pmb{\xi}), \end{equation} where $\{\f{c}_i\}_{i=1}^n$ are the control points. Since the boundary control points $\{\f{c}_i\}_\mathcal{B}$ follow from $\pmb{\partial}\f{G}$, the problem boils down to allocation of the unknown interior control points $\{\f{c}_i\}_\mathcal{I}$.
We apply the mesh deformation approach to solve the stated parametrization problem. The idea is to start by choosing a simple initial domain $\Omega_0$ with a known parametrization $\f{G}_0:[0,1]^d\to\Omega_0$. We assume that the parametrization $\f{G}_0$ uses the same tensor product basis as $\f{G}$ and thus has the following form: \begin{equation} \f{G}_0(\pmb{\xi}) = \sum_{i=1}^n\f{c}_i^0 B_i(\pmb{\xi}). \end{equation} Next, we search for a deformation $\pmb{\Phi}:\Omega_0\to\Omega$ such that \begin{equation}\label{eq:bdryfit} \pmb{\Phi}(\pmb{\partial}\f{G}_0(\pmb{\xi})) = \pmb{\partial\f{G}}(\pmb{\xi}) \text{ for all } \pmb{\xi}\in\partial[0,1]^d. \end{equation} The deformation $\pmb{\Phi}$ is characterized by an unknown displacement field $\f{u}:\Omega_0\to\mathbb{R}^d$. Following the isogeometric approach, we can introduce the discretization \begin{equation}\label{eq:uh} \f{u}_h(\f{x}) = \sum_{i=1}^n\f{d}_i B_i(\f{G}_0^{-1}(\f{x})), \end{equation} where the boundary degrees of freedom $\{\f{d}_i\}_\mathcal{B}$ are given by (\ref{eq:bdryfit}) as \begin{equation}\label{eq:DBC2} \{\f{d}_i\}_\mathcal{B} =\{\f{c}_i-\f{c}_i^0\}_\mathcal{B}. \end{equation} Once the interior degrees of freedom $\{\f{d}_i\}_\mathcal{I}$ are found, the parametrization $\f{G}$ can be constructed as a composition of the parametrization $\f{G}_0$ and the deformation $\pmb{\Phi}$: \begin{equation}\label{eq:super} \f{G} = \pmb{\Phi}\circ\f{G}_0 \end{equation} or \begin{equation}\label{eq:G} \f{G}(\pmb{\xi}) = \f{G}_0(\pmb{\xi}) + \f{u}_h(\f{G}_0(\pmb{\xi})) = \sum_{i=1}^{n}(\f{c}_i^0+\f{d}_i)B_i(\pmb{\xi}).
\end{equation} Observe that if the initial parametrization $\f{G}_0$ and the deformation $\pmb{\Phi}$ are bijective, i.e., \begin{equation} J(\f{G}_0)>0\; \text{ and }\; J(\pmb{\Phi}) > 0 \end{equation} hold, then the resulting parametrization $\f{G}$ is bijective as well: \begin{equation}\label{eq:superJ} J(\f{G}) = \operatorname{det}\nabla_{\pmb{\xi}}\f{G} = \operatorname{det}\nabla_\f{x}\pmb{\Phi} \operatorname{det}\nabla_{\pmb{\xi}}\f{G}_0 = J(\pmb{\Phi})J(\f{G}_0) >0. \end{equation}
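In control-point form, the composition (\ref{eq:G}) reduces to a coefficient-wise addition: the displacement coefficients $\f{d}_i$ are simply added to the initial control points $\f{c}_i^0$. A minimal sketch (the function name and the plain-tuple representation of control points are our own, purely for illustration):

```python
def compose_parametrization(c0, d):
    """Control-point form of G = Phi o G0: since u_h is expressed in the
    basis pulled back by G0, composing the deformation with the initial
    parametrization amounts to adding the displacement coefficients d_i
    to the initial control points c_i^0 (cf. eq:G)."""
    return [tuple(ci + di for ci, di in zip(c, dd)) for c, dd in zip(c0, d)]
```

Boundary control points receive the displacements (\ref{eq:DBC2}), while interior ones receive the computed interior degrees of freedom.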
The choice of the initial domain $\Omega_0$ is discussed in Section 4. To compute the deformation $\pmb{\Phi}$, we use the equations of nonlinear elasticity introduced in Section 2. The initial domain $\Omega_0$ serves as a reference configuration, and the unknown displacement field $\f{u}$ is found as a solution to the following system of equations: \begin{align}\label{eq:nonlin} -\operatorname{div}(\f{F}\f{S})(\f{u}) &= \f{0} \text{ in } \Omega_0,\\ \f{u} &= \f{u}_\mathcal{D} \text{ on } \partial\Omega_0,\label{eq:DBC} \end{align} where $\f{u}_\mathcal{D}$ is the prescribed boundary displacement defined by the boundary degrees of freedom (\ref{eq:DBC2}): \begin{equation} \f{u}_\mathcal{D}(\f{x}) = \sum_{i\in\mathcal{B}}\f{d}_i B_i(\f{G}_0^{-1}(\f{x})). \end{equation}
As for the material parameters, the choice of Young's modulus $E$ does not affect the solution of the system (\ref{eq:nonlin}-\ref{eq:DBC}) since equation (\ref{eq:nonlin}) has a zero right-hand side and the Dirichlet boundary condition (\ref{eq:DBC}) is prescribed over the entire boundary of the domain $\Omega_0$. On the other hand, Poisson's ratio $\nu$ is of great importance since it determines the resistance of the material to volumetric changes. A material with high Poisson's ratio will resist self-penetration and will thus contribute to the preservation of bijectivity. When $\nu$ approaches 0.5, the material becomes nearly incompressible. In practice, we use values between 0.45 and 0.49 since values higher than 0.49 would lead to a numerically unstable system unless a special formulation for incompressible behavior is used. A truly incompressible material, though, does not allow for any volumetric changes and would require the domains $\Omega_0$ and $\Omega$ to be of the same volume $-$ a condition which is hard to satisfy in practice.
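The role of $\nu$ is visible directly in the Lam\'e parameters $\lambda = E\nu/\big((1+\nu)(1-2\nu)\big)$ and $\mu = E/\big(2(1+\nu)\big)$ that enter the material law: $\lambda$ blows up as $\nu\to0.5$, which is the source of the numerical instability mentioned above. A small sketch of this standard conversion (the helper name is ours):

```python
def lame_parameters(E, nu):
    """Convert Young's modulus E and Poisson's ratio nu into the Lame
    parameters (lam, mu) used in the material law. For nu -> 0.5 the
    first parameter lam diverges, reflecting near-incompressibility."""
    lam = E * nu / ((1.0 + nu) * (1.0 - 2.0 * nu))
    mu = E / (2.0 * (1.0 + nu))
    return lam, mu
```

For instance, going from $\nu=0.45$ to $\nu=0.49$ at fixed $E$ roughly quintuples $\lambda$ while $\mu$ barely changes.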
\section{Initial domain} The choice of the initial domain $\Omega_0$ is a rather empirical step which directly affects the quality of the resulting parametrization $\f{G}$ for the target domain $\Omega$. Ideally, $\Omega_0$ should be simple enough $-$ so that it is possible to parametrize it using the Coons patch approach or another explicit method $-$ and yet geometrically close enough to $\Omega$ $-$ so that the complexity of computing a bijective deformation $\pmb{\Phi}$ does not eclipse the complexity of the original parametrization problem for $\Omega$. A combination of these two requirements suggests that the boundary of $\Omega_0$ has to be a simplification of the target domain's boundary $\partial\Omega$. In what follows, we describe a basic simplification procedure which allows one to generate a range of different initial domains.
\subsection{Boundary simplification} We propose a simplification technique which is based on a projection onto a coarse B-spline basis in the $L^2$-sense. The idea is that only geometrically simple shapes can lie in the span of such a basis. The boundary of the target domain can be simplified as a whole or in parts. Due to the tensor product structure of the parametrization $\f{G}$, it is convenient to simplify each side of the domain separately; one should only make sure that the simplified sides fit together at the interfaces. After projection, the simplified boundary is re-expressed in terms of the original basis.
Let $\Gamma=\pmb{\partial}\f{G}|_\Pi$ be a parametrization of a part of the target domain boundary $\partial\Omega$, where $\Pi\subset\partial[0,1]^d$. Additionally, let $\mathcal{P}$ denote the set of indices corresponding to basis functions $B_i(\pmb{\xi})$ (\ref{eq:basis}) which are not zero on $\Pi$. Then \begin{equation} \Gamma(\pmb{\xi}) = \sum_{i\in\mathcal{P}}\f{c}_iB_i(\pmb{\xi}). \end{equation}
In order to construct a simplification of $\Gamma$, we introduce a coarse basis $\{b_i(\pmb{\xi})\}_{i=1}^m$ where $m\ll|\mathcal{P}|$. By projecting $\Gamma$ in the $L^2$-sense onto the basis $\{b_i(\pmb{\xi})\}_{i=1}^m$, we acquire a primary simplification $\gamma$, \begin{equation}\label{eq:prsimple} \gamma(\pmb{\xi}) = \sum_{i=1}^{m}\f{x}_ib_i(\pmb{\xi}). \end{equation} The control points $\{\f{x}_i\}_{i=1}^m$ are found by solving the linear system \begin{align}\label{eq:L2EQ} \Big((b_i,b_j)_\Pi\Big)\Big(\f{x}_i^T\Big)&=\Big((b_i,\Gamma)_\Pi^T\Big),\\
\gamma|_{\partial\Pi} &= \Gamma,\label{eq:L2DBC} \end{align} where the inner product $(A,B)_\Pi$ is defined as $\int_{\Pi}A(\pmb{\xi})B(\pmb{\xi})d\pmb{\xi}.$ The boundary condition (\ref{eq:L2DBC}) ensures that the simplifications of different parts of $\partial\Omega$ fit together at $\partial\Pi$.
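As a one-dimensional toy illustration of the projection (\ref{eq:L2EQ}), consider projecting a scalar curve onto the coarsest degree-1 basis $\{1-\xi,\,\xi\}$. The sketch below assembles the Gram matrix and load vector by midpoint quadrature and solves the resulting $2\times2$ system by hand; the interface condition (\ref{eq:L2DBC}) and the vector-valued setting are omitted for brevity, and all names are ours:

```python
def l2_project_linear(gamma, n_quad=2000):
    """L2-project a scalar curve gamma(xi), xi in [0,1], onto the coarsest
    degree-1 B-spline basis {1 - xi, xi} using midpoint quadrature.
    Returns the two control coefficients (x1, x2).
    NOTE: a 1D toy sketch of (eq:L2EQ); the paper projects vector-valued
    boundary curves and additionally enforces (eq:L2DBC)."""
    b = (lambda t: 1.0 - t, lambda t: t)
    h = 1.0 / n_quad
    pts = [(k + 0.5) * h for k in range(n_quad)]
    # Gram matrix (b_i, b_j) and load vector (b_i, gamma)
    G = [[h * sum(b[i](t) * b[j](t) for t in pts) for j in range(2)]
         for i in range(2)]
    f = [h * sum(b[i](t) * gamma(t) for t in pts) for i in range(2)]
    # Solve the 2x2 system by Cramer's rule
    det = G[0][0] * G[1][1] - G[0][1] * G[1][0]
    x1 = (f[0] * G[1][1] - G[0][1] * f[1]) / det
    x2 = (G[0][0] * f[1] - f[0] * G[1][0]) / det
    return x1, x2
```

For $\gamma(\xi)=\xi^2$ this reproduces the classical best linear $L^2$-fit $\xi - 1/6$, i.e., coefficients $-1/6$ and $5/6$.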
The primary simplification $\gamma$ can be re-expressed in terms of the original basis $\{B_i(\pmb{\xi})\}_\mathcal{P}$ in two ways: either by applying h- and p-refinement $-$ also known as knot insertion and degree elevation $-$ or by projecting $\gamma$ onto $\{B_i(\pmb{\xi})\}_{i\in\mathcal{P}}$ in a manner analogous to (\ref{eq:L2EQ}-\ref{eq:L2DBC}). The latter slightly changes the shape of $\gamma$, which is insignificant since we have freedom in choosing the initial domain $\Omega_0$. The result is a simplification $\Gamma_0$: \begin{equation} \Gamma_0(\pmb{\xi}) = \sum_{i\in\mathcal{P}}\f{c}_i^0B_i(\pmb{\xi}). \end{equation}
The actual shape of $\Gamma_0$ depends on the choice of the coarse basis $\{b_i(\pmb{\xi})\}_{i=1}^m$. Figure~\ref{fig:simplify1} shows an example where the coarsest B-spline bases of degree 1, 2, 3, and 4 are used for projection. Observe the rapid growth of complexity of the resulting simplified geometry as the polynomial degree increases. The same effect is observed as the number of basis functions $m$ grows for a fixed polynomial degree, see Fig.~\ref{fig:simplify2}. \begin{figure}
\caption{Dependence of the simplified boundary on the polynomial degree of the coarse basis. The coarsest B-spline bases of degree 1, 2, 3, and 4 are used.}
\label{fig:simplify1}
\end{figure} \begin{figure}
\caption{Dependence of the simplified boundary on the number of basis functions. Quadratic B-spline bases with 3, 4, 5, and 6 elements are used.}
\label{fig:simplify2}
\end{figure}
One of the advantages of the proposed simplification technique is a partial preservation of the parametrization speed, meaning that the images $\Gamma(\pmb{\xi})$ and $\Gamma_0(\pmb{\xi})$ of the same parametric point $\pmb{\xi}$ are close to each other. This reduces the prescribed boundary displacement in (\ref{eq:DBC}), makes it easier to compute the deformation $\pmb{\Phi}$, and thus increases the quality of the resulting parametrization $\f{G}$.
\subsection{Coons patch} The procedure described above is applied to the entire boundary $\partial\Omega$. The result is the parametrization of the initial domain's boundary $\pmb{\partial}\f{G}_0:\partial[0,1]^d\to\partial\Omega_0$: \begin{equation} \pmb{\partial}\f{G}_0(\pmb{\xi}) = \sum_{i\in\mathcal{B}}\f{c}_i^0B_i(\pmb{\xi}). \end{equation} Our intention is to parametrize it using the Coons patch approach. In the two-dimensional case, the Coons patch defines $\f{G}_0$ as a bilinear blending of four parametric curves $\pmb{\partial}\f{G}_0(0,\xi_2)$, $\pmb{\partial}\f{G}_0(1,\xi_2)$, $\pmb{\partial}\f{G}_0(\xi_1,0)$ and $\pmb{\partial}\f{G}_0(\xi_1,1)$, \begin{align} \f{G}_0(\xi_1,\xi_2) = &(1-\xi_1)\pmb{\partial}\f{G}_0(0,\xi_2) + \xi_1\pmb{\partial}\f{G}_0(1,\xi_2) \nonumber\\ \label{eq:coons}+&(1-\xi_2)\pmb{\partial}\f{G}_0(\xi_1,0) + \xi_2\pmb{\partial}\f{G}_0(\xi_1,1) \\ -&\begin{bmatrix} 1-\xi_1 & \xi_1 \end{bmatrix} \begin{bmatrix} \pmb{\partial}\f{G}_0(0,0) & \pmb{\partial}\f{G}_0(0,1) \\ \pmb{\partial}\f{G}_0(1,0) & \pmb{\partial}\f{G}_0(1,1) \end{bmatrix} \begin{bmatrix} 1-\xi_2 \\ \xi_2 \end{bmatrix},\nonumber \end{align} where $\xi_1$ and $\xi_2$ are parametric coordinates. The provided definition can be straightforwardly generalized to the three-dimensional case.
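The bilinear blending (\ref{eq:coons}) is straightforward to implement. A minimal two-dimensional sketch (the curve conventions and function names are our assumptions for illustration):

```python
def coons_patch(c_bottom, c_top, c_left, c_right):
    """Return the 2D Coons patch G0(xi1, xi2) blending four boundary curves,
    following (eq:coons). Assumed conventions:
      c_left(t)   = G0(0, t),  c_right(t) = G0(1, t),
      c_bottom(s) = G0(s, 0),  c_top(s)   = G0(s, 1),
    each curve mapping [0,1] -> R^2 as an (x, y) tuple."""
    def G0(s, t):
        def blend(k):
            # linear blending in each direction ...
            lin_s = (1 - s) * c_left(t)[k] + s * c_right(t)[k]
            lin_t = (1 - t) * c_bottom(s)[k] + t * c_top(s)[k]
            # ... minus the bilinear interpolation of the four corners
            bilin = ((1 - s) * ((1 - t) * c_left(0)[k] + t * c_left(1)[k])
                     + s * ((1 - t) * c_right(0)[k] + t * c_right(1)[k]))
            return lin_s + lin_t - bilin
        return (blend(0), blend(1))
    return G0
```

By construction the patch reproduces all four boundary curves; for the unit square it recovers the identity map.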
\begin{figure}
\caption{Coons patch applied to different simplifications of the puzzle piece geometry. The coarsest B-spline bases of degree 1, 3, and 4 are used for simplification.}
\label{fig:coons}
\end{figure} Note that nothing guarantees that the resulting parametrization $\f{G}_0$ is bijective. We assume, however, that the boundary of the initial domain $\Omega_0$ acquired at the simplification step is simple enough for the Coons patch to succeed. This assumption puts a restriction on how fine the coarse basis (\ref{eq:prsimple}) can be. If the coarsest linear basis is used $-$ which is equivalent to substituting the original domain by a quad spanned by its corners $-$, then the Coons patch approach always produces a uniform bijective parametrization provided the quad is convex. On the other hand, the quad-simplification is often too simple. There may exist a different initial domain which can also be parametrized with the Coons patch but which is geometrically closer to the target domain, see Fig.~\ref{fig:coons}. In the case of the depicted puzzle piece example, an optimal simplification is acquired by using the coarsest cubic basis.
\section{Deformation} \subsection{Incremental Newton's method} The nonlinear system (\ref{eq:nonlin}-\ref{eq:DBC}) is usually solved using Newton's method. However, if the prescribed boundary displacement $\f{u}_\mathcal{D}$ (\ref{eq:DBC}) is large, it can be difficult to find a bijective initial guess satisfying (\ref{eq:DBC}) from which Newton's method can converge to the system's solution $\f{u}$. In this case, incremental loading can be applied, i.e., the problem (\ref{eq:nonlin}-\ref{eq:DBC}) is replaced by a sequence of problems for each loading step $i=1,\dots,N$: \begin{align}\label{eq:nonlininc} -\operatorname{div}(\f{F}\f{S})(\f{u}^i) &= \f{0} \text{ in } \Omega_0,\\ \f{u}^i &= \frac{i}{N}\f{u}_\mathcal{D} \text{ on } \partial\Omega_0.\label{eq:DBCinc} \end{align} Each incremental displacement $\f{u}^i$ provides an initial guess for Newton's method at the next loading step.
To formulate the algorithm, we need a weak form of equations (\ref{eq:nonlininc}-\ref{eq:DBCinc}). Let $\mathcal{V}_i = \{\f{v}\in H^1(\Omega_0)^d \;|\; \f{v} = \frac{i}{N}\f{u}_\mathcal{D} \text{ on }\partial\Omega_0\}$ be the trial solution spaces for the loading steps $i=1,\dots,N$, and let the weighting function space $\mathcal{V}_0$ be defined as $\{\f{v}\in H^1(\Omega_0)^d \;|\; \f{v} = \f{0}\text{ on }\partial\Omega_0\}$. Then the weak form of equations (\ref{eq:nonlininc}-\ref{eq:DBCinc}) is \begin{align}\nonumber &Find\text{ } \f{u}^i\in\mathcal{V}_i \text{ }such\text{ } that\\ &P(\f{u}^i,\f{v}) = \int\displaylimits_{\Omega_0}\f{S}(\f{u}^i):\delta\f{E}(\f{u}^i)[\f{v}]d\f{x} = 0, \hspace{0.2cm} \forall \f{v}\in\mathcal{V}_0,\label{eq:weak} \end{align} where $\delta\f{E}(\f{u}^*)[\f{v}] = \frac{1}{2}\big(\f{F}(\f{u}^*)^T\nabla_\f{x}\f{v} + \nabla_\f{x}\f{v}^T\f{F}(\f{u}^*)\big)$ is the variation of the Green-Lagrange strain tensor. Equations (\ref{eq:weak}) are nonlinear; in order to apply Newton's method they have to be linearized. The Taylor expansion of $P(\f{u}^*,\f{v})$ with respect to the displacement increment $\Delta\f{u}$ yields \begin{equation}
P(\f{u}^*+\Delta\f{u},\f{v}) = P(\f{u}^*,\f{v}) + DP(\f{u}^*,\f{v})\cdot\Delta\f{u} + o(||\Delta\f{u}||), \end{equation} where the directional derivative $DP(\f{u}^*,\f{v})\cdot\Delta\f{u}$ is given by \begin{equation}\label{eq:dir} DP(\f{u}^*,\f{v})\cdot\Delta\f{u} = \int\displaylimits_{\Omega_0}\Big(\nabla_\f{x}\Delta\f{u}\,\f{S}(\f{u}^*):\nabla_\f{x}\f{v} +\mathbb{C}(\f{u}^*)\delta\f{E}(\f{u}^*)[\Delta\f{u}]:\delta\f{E}(\f{u}^*)[\f{v}]\Big)d\f{x}. \end{equation} Here $\mathbb{C}=2\frac{d\f{S}}{d\f{C}}$ is the fourth-order elasticity tensor whose components, in the case of the neo-Hookean material law, are given by \begin{equation} \mathbb{C}_{abcd} = \lambda\f{C}_{ab}^{-1}\f{C}_{cd}^{-1}+(\mu-\lambda\ln J)\big(\f{C}_{ac}^{-1}\f{C}_{bd}^{-1}+\f{C}_{ad}^{-1}\f{C}_{bc}^{-1}\big). \end{equation}
Having defined all the necessary tools, we can now formulate incremental Newton's method. Let $\f{u}_s^i$ be a displacement field at the $s$-th iteration of Newton's method at the $i$-th loading step. The method involves two operation types. Type-A is an update $\f{u}_s^i\in\mathcal{V}_i\to\f{u}_{s+1}^i\in\mathcal{V}_i$ within the $i$-th loading step. An increment $\Delta\f{u}_{s}^i\in\mathcal{V}_0$ such that $\f{u}_{s+1}^i=\f{u}_{s}^i+\Delta\f{u}_{s}^i$ is found as the solution to the following weak problem: \begin{align}\nonumber &Find\text{ }\Delta\f{u}_{s}^i\in\mathcal{V}_0\text{ }such\text{ }that\\ \label{eq:incweak2}&DP(\f{u}_{s}^i,\f{v})\cdot\Delta\f{u}_{s}^{i} = - P(\f{u}_{s}^i,\f{v}), \hspace{0.2cm} \forall \f{v}\in\mathcal{V}_0. \end{align} We repeat this step until the convergence criterion \begin{equation}
\frac{||\Delta\f{u}_s^i||_{L^2}}{||\f{u}_s^i||_{L^2}} < \varepsilon \end{equation} is met. The last approximate solution defines the incremental displacement $\f{u}^i$ at the $i$-th loading step.
Type-B is an update $\f{u}^{i-1}\in\mathcal{V}_{i-1}\to\f{u}_{1}^{i}\in\mathcal{V}_{i}$ between loading steps. We search for an increment $\Delta\f{u}^{i}\in\mathcal{V}_1$ such that $\f{u}_{1}^{i}=\f{u}^{i-1}+\Delta\f{u}^{i}$ as a solution to the weak problem \begin{align}\nonumber &Find\text{ }\Delta\f{u}^{i}\in\mathcal{V}_1\text{ }such\text{ }that\\ \label{eq:incweak4}&DP(\f{u}^{i-1},\f{v})\cdot\Delta\f{u}^{i} = - P(\f{u}^{i-1},\f{v}), \hspace{0.2cm} \forall \f{v}\in\mathcal{V}_0. \end{align} We say that the increment $\Delta\f{u}^{i}$ has stepsize $h_i = 1/N$, meaning that $\Delta\f{u}^{i}$ advances the displacement at the boundary $\partial\Omega_0$ by $1/N$-th of $\f{u}_\mathcal{D}$ (\ref{eq:DBC}).
The method is initialized with an initial displacement $\f{u}^0 = \f{0}$. The incremental displacement $\f{u}^N$ at the $N$-th loading step is accepted as the approximate solution $\f{u}$ to the original system (\ref{eq:nonlin}-\ref{eq:DBC}).
\subsection{Bijectivity and adaptivity} Although the material law (\ref{eq:neoHook}) guarantees that the solution $\f{u}$ to the system (\ref{eq:nonlin}-\ref{eq:DBC}) is bijective, special care is required to achieve this property when solving the system using Newton's method. Note that the directional derivative $DP(\f{u}^*,\f{v})\cdot\Delta\f{u}$ (\ref{eq:dir}) can only be evaluated at a bijective displacement $\f{u}^*$. However, both type-A (\ref{eq:incweak2}) and type-B (\ref{eq:incweak4}) updates can produce an increment $\Delta\f{u}$ such that $\f{u}^*+\Delta\f{u}$ is not bijective. The problem can be overcome by adaptively scaling the increment $\Delta\f{u}$. If $J(\f{u}^*) > 0$, there always exists a scaling coefficient $t\in[0,1]$ such that \begin{equation}\label{eq:scaling} J(\f{u}^*+t\Delta\f{u}) > 0. \end{equation} In practice, we determine the scaling coefficient $t$ by consecutively testing values $t^k = 1/2^k$ until (\ref{eq:scaling}) is satisfied.
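The halving loop for the scaling coefficient $t$ can be sketched as follows (the callback J_min returning a sampled minimal Jacobian determinant and the stall guard k_max are our assumptions):

```python
def scale_increment(J_min, u, du, k_max=30):
    """Find t = 1/2**k such that the scaled update u + t*du stays bijective,
    i.e. J_min(u + t*du) > 0 (cf. eq:scaling). J_min is a user-supplied
    callback returning the minimal (sampled) Jacobian determinant of the
    deformation induced by a coefficient vector. Raises if k_max is hit."""
    t = 1.0
    for _ in range(k_max):
        candidate = [ui + t * dui for ui, dui in zip(u, du)]
        if J_min(candidate) > 0.0:
            return t
        t *= 0.5  # consecutive testing of t = 1, 1/2, 1/4, ...
    raise RuntimeError("increment scaling stalled: no bijective update found")
```

As a toy check, take a 1D "mesh" of node positions whose bijectivity means strictly increasing nodes; an increment that would let the middle node overtake its neighbor gets scaled back.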
The implementation of adaptivity differs slightly for the type-A and type-B updates. For a type-A update $\f{u}_{s+1}^i = \f{u}_s^i+\Delta\f{u}_{s}^i$ $-$ where the increment $\Delta\f{u}_s^i$ is determined solely by (\ref{eq:incweak2}) $-$ the scaled increment $t\Delta\f{u}_s^i$ simply redefines $\f{u}_{s+1}^i$ as $\f{u}_s^i + t\Delta\f{u}_s^i$, and the method proceeds to the next iteration of Newton's method.
For a type-B update $\f{u}_1^i = \f{u}^{i-1}+\Delta\f{u}^i$, the stepsize $h_i$ of the increment $\Delta\f{u}^i$ is predefined by the number of loading steps $N$. Scaling the increment $\Delta\f{u}^i$ changes the stepsize to $t\cdot h_i$ and $-$ since all updates of the boundary displacement have to add up to $\f{u}_\mathcal{D}$ $-$ requires changing the stepsizes of the subsequent type-B updates. One way to do this is to proceed with the $1/N$ stepsize, scaling it if necessary to fulfill (\ref{eq:scaling}). The final stepsize \begin{equation} h_{N^*} = 1- \sum_{i=1}^{N^*-1}h_i, \end{equation} where $N^*$ is the resulting total number of loading steps, makes sure that all stepsizes add up to 1.
Another possibility is to apply the so-called greedy stepsize strategy where incremental Newton's method begins with stepsize $h_1=1$ for the first type-B update. If the resulting displacement $\f{u}_1$ is not bijective, $h_1$ is iteratively halved until (\ref{eq:scaling}) is satisfied. The method proceeds with the stepsize \begin{equation} h_i = 1 - \sum_{j=1}^{i-1}h_j \end{equation} for loading steps $i\geqslant2$ which is also iteratively halved if necessary. We note that the greedy strategy often results in stalling of the adaptive algorithm, i.e., the iterative halving produces too small stepsizes. This effect occurs much less often with the first adaptive strategy. This behavior deserves further investigation.
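The bookkeeping of the greedy strategy can be sketched as follows, with a callback accept_step standing in for the bijectivity test (\ref{eq:scaling}) (both names are ours; in the real method acceptance depends on the current displacement, not on the stepsize alone). By construction the accepted stepsizes sum to 1:

```python
def greedy_stepsizes(accept_step, h_min=1e-6):
    """Greedy loading-step strategy sketch: always propose the remaining
    load fraction as the next stepsize and halve it until accept_step(h)
    holds (a toy stand-in for the bijectivity test).
    Returns the list of accepted stepsizes, which sum to 1."""
    steps, total = [], 0.0
    while total < 1.0 - 1e-12:
        h = 1.0 - total          # try to cover the remaining load at once
        while not accept_step(h):
            h *= 0.5             # iterative halving
            if h < h_min:
                raise RuntimeError("greedy strategy stalled")
        steps.append(h)
        total += h
    return steps
```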
A nonadaptive solution to preserve bijectivity during type-B updates is to increase the number of loading steps $N$ and to restart the method.
Lastly, we remark on ways to test the bijectivity condition (\ref{eq:J}). A solution which takes into account the B-spline nature of the discretization $\f{u}_h$ (\ref{eq:uh}) is to express the Jacobian determinant $J(\f{u}_h)$ as a B-spline function \cite{Gravesen2014}. If all control coefficients in a B-spline expansion of $J(\f{u}_h)$ are positive, then the displacement $\f{u}_h$ is bijective. Unfortunately, this condition is only a sufficient but not a necessary one; this may lead to many false detections of bijectivity violations. In practice, we resort to a much less elegant solution of sampling the Jacobian determinant at the Gaussian quadrature points associated with the discretization $\f{u}_h$. \subsection{Diagonal incremental loading} Notice that incremental Newton's method is computationally expensive. If $S$ is the average number of iterations which Newton's method takes to converge at each loading step, then the method requires $O(NS)$ iterations to compute $\f{u}$. This is justified for applications where the deformation history is important; in our case, however, only the final displacement field $\f{u}^N$ is of interest.
In what follows, we propose a variation of incremental Newton's method which requires only $O(N+S)$ iterations. It begins with $N$ type-B updates $\f{u}_{inc}^{i-1}\in\mathcal{V}_{i-1}\to \f{u}_{inc}^{i}\in\mathcal{V}_{i}$ between loading steps. Similar to (\ref{eq:incweak4}), an increment $\Delta\f{u}_{inc}^{i}\in\mathcal{V}_1$ such that $\f{u}_{inc}^{i}=\f{u}_{inc}^{i-1}+\Delta\f{u}_{inc}^{i}$ is found as a solution to the weak problem \begin{align}\nonumber &Find\text{ }\Delta\f{u}_{inc}^{i}\in\mathcal{V}_1\text{ }such\text{ }that\\ \label{eq:incweak6}&DP(\f{u}_{inc}^{i-1},\f{v})\cdot\Delta\f{u}_{inc}^{i} = - P(\f{u}_{inc}^{i-1},\f{v}), \hspace{0.2cm} \forall \f{v}\in\mathcal{V}_0. \end{align} Once again, the method is initialized with $\f{u}_{inc}^0=\f{0}$. After $N$ steps, the displacement $\f{u}_{inc}^N\in\mathcal{V}_N$ is acquired; the adaptive algorithms described above can be applied. From here, the method proceeds with type-A iterations (\ref{eq:incweak2}) until it converges to $\f{u}^N=\f{u}$. \begin{figure}
\caption{Incremental Newton's method (blue) and the proposed variation with diagonal incremental loading (red).}
\label{fig:newton}
\end{figure}
The difference between incremental Newton's method and the proposed variation is illustrated schematically in Figure~\ref{fig:newton}. We refer to the first phase of the algorithm as the diagonal incremental loading since the updates $\f{u}_{inc}^{i-1}\to \f{u}_{inc}^{i}$ advance the solution through both the iterations of Newton's method and the loading steps. In fact, our numerical experiments suggest that $\f{u}_{inc}^N$ converges quadratically to $\f{u}$ as $N\to\infty$. Because of that, $\f{u}_{inc}^N$ can be used as a stand-alone approximate solution to the system (\ref{eq:nonlin}-\ref{eq:DBC}) if the number of loading steps $N$ is large enough. In this case, the use of an adaptive loading stepsize is unnecessary.
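The iteration counts $O(NS)$ versus $O(N+S)$ can be illustrated on a scalar toy problem: we replace the weak problems (\ref{eq:incweak2}) and (\ref{eq:incweak6}) by the scalar equation $F(u)=u+u^3=load$, keeping the structure of type-A and type-B updates. Everything here is a toy analogy of ours, not the paper's solver:

```python
def newton_step(u, load):
    """One Newton update for the toy scalar 'equilibrium' F(u) = u + u**3 = load."""
    return u - (u + u**3 - load) / (1.0 + 3.0 * u**2)

def incremental_newton(load, N, tol=1e-10):
    """Classical incremental loading: a full Newton solve at every
    loading step (O(N*S) iterations in total)."""
    u, its = 0.0, 0
    for i in range(1, N + 1):
        target = i * load / N        # type-B load increase ...
        while abs(u + u**3 - target) > tol:
            u, its = newton_step(u, target), its + 1   # ... then type-A solve
    return u, its

def diagonal_incremental_newton(load, N, tol=1e-10):
    """Diagonal loading: a single (type-B) Newton step per loading step,
    then plain (type-A) Newton iterations at the full load (O(N+S))."""
    u, its = 0.0, 0
    for i in range(1, N + 1):
        u, its = newton_step(u, i * load / N), its + 1
    while abs(u + u**3 - load) > tol:
        u, its = newton_step(u, load), its + 1
    return u, its
```

Both variants reach the same solution ($u=2$ for $load=10$), but the diagonal variant needs markedly fewer Newton iterations.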
\subsection{Diagonal incremental loading with linear elasticity}
The incremental displacements $\f{u}^i$ define a sequence of intermediate domains $\Omega_i = \{\f{x}+\f{u}^i(\f{x})\; | \;\f{x}\in\Omega_0\}$, $i=1,\dots,N$. If the number of loading steps $N$ is big enough, one could try to construct a similar sequence $\Omega_i^{lin} = \{\f{x}+\Delta\f{u}^i_{lin}(\f{x})\; | \;\f{x}\in\Omega_{i-1}^{lin}\}$, $\Omega_0^{lin}=\Omega_0$ recursively, where at the $i$-th step a displacement increment $\Delta\f{u}_{lin}^i$ is found as a solution to the system of linear elasticity equations \begin{align}\label{eq:linel} -\operatorname{div}\pmb{\sigma}(\Delta\f{u}^i_{lin}) &= \f{0}\text{ in } \Omega_{i-1}^{lin},\\ \label{eq:linelDBC}\Delta\f{u}^i_{lin} &= \frac{\f{u}_\mathcal{D}}{N}\text{ on }\partial\Omega_{i-1}^{lin}. \end{align} Here $\pmb{\sigma}$ is the Cauchy stress tensor which is related to the linear strain tensor $\pmb{\varepsilon}(\f{u}^*)=\frac{1}{2}\big(\nabla_\f{x}\f{u}^*+(\nabla_\f{x}\f{u}^*)^T\big)$ via Hooke's law: \begin{equation} \pmb{\sigma}(\f{u}^*) = \lambda\operatorname{tr}(\pmb{\varepsilon}(\f{u}^*))\f{I} + 2\mu\pmb{\varepsilon}(\f{u}^*). \end{equation}
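For reference, evaluating Hooke's law for a given displacement gradient takes only a few lines. A two-dimensional sketch (function and variable names are ours):

```python
def cauchy_stress(grad_u, lam, mu):
    """Linear-elastic Cauchy stress via Hooke's law:
    sigma = lam * tr(eps) * I + 2 * mu * eps,
    where eps is the symmetric part of the 2x2 displacement gradient grad_u
    and (lam, mu) are the Lame parameters."""
    eps = [[0.5 * (grad_u[i][j] + grad_u[j][i]) for j in range(2)]
           for i in range(2)]
    tr = eps[0][0] + eps[1][1]
    return [[lam * tr * (1.0 if i == j else 0.0) + 2.0 * mu * eps[i][j]
             for j in range(2)] for i in range(2)]
```

Note that only the symmetric part of the gradient enters, so the resulting stress tensor is symmetric even for an unsymmetric grad_u.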
Note that, unlike (\ref{eq:nonlininc}-\ref{eq:DBCinc}), the equations (\ref{eq:linel}-\ref{eq:linelDBC}) are formulated in the intermediate configurations $\Omega_i^{lin}$, not in $\Omega_0$. For each $\Omega_i^{lin}$ we define the trial solution space $\mathcal{V}_i^{lin} = \{\f{v}\in H^1(\Omega_{i-1}^{lin})^d \;|\; \f{v} = \frac{\f{u}_\mathcal{D}}{N}\text{ on }\partial\Omega_{i-1}^{lin}\}$ and the weighting function space $\mathcal{V}_{i,0}^{lin} = \{\f{v}\in H^1(\Omega_{i-1}^{lin})^d \;|\; \f{v} = \f{0} \text{ on }\partial\Omega_{i-1}^{lin}\}$. Then the weak form of equations (\ref{eq:linel}-\ref{eq:linelDBC}) is \begin{align}\nonumber &Find\text{ }\Delta\f{u}^i_{lin}\in\mathcal{V}_i^{lin}\text{ }such\text{ }that\\ \label{eq:incweak8}&L(\Delta\f{u}^i_{lin},\f{v})=\int\displaylimits_{\Omega_{i-1}^{lin}}\pmb{\sigma}(\Delta\f{u}^i_{lin}):\pmb{\varepsilon}(\f{v})d\f{x} = 0, \hspace{0.2cm} \forall \f{v}\in\mathcal{V}_{i,0}^{lin}. \end{align}
It is important to notice that the weak problem (\ref{eq:incweak8}) is not equivalent to (\ref{eq:incweak6}). In fact, the bilinear form $L(\Delta\f{u},\f{v})$ in (\ref{eq:incweak8}) is the result of evaluating the directional derivative $DP(\f{u}^*,\f{v})\cdot\Delta\f{u}$ (\ref{eq:dir}) at $\f{u}^*=\f{0}$. Thus, the described procedure $-$ which we refer to as the linear diagonal incremental loading, as opposed to the (nonlinear) diagonal incremental loading described above $-$ is similar to modified Newton's method in \cite{wriggers2008nonlinear} where the derivative evaluated at the first iteration is used to compute updates at all consecutive iterations.
We define the linear incremental displacements $\f{u}_{lin}^i$ as a sum of the preceding increments $\Delta\f{u}_{lin}^i$: \begin{equation} \f{u}_{lin}^i=\sum_{j=1}^i\Delta\f{u}_{lin}^j. \end{equation} In our experience, as the number of loading steps $N$ grows, $\f{u}_{lin}^N$ converges linearly to a displacement $\f{u}_{lin}$ which, although not equal, is close to the solution $\f{u}$ of the system (\ref{eq:nonlin}-\ref{eq:DBC}). Even more importantly, as we demonstrate in Section 6, the limit displacement $\f{u}_{lin}$ seems to be bijective; the adaptive algorithms described above can be applied to ensure bijectivity for small $N$.
Much like diagonal incremental loading with nonlinear elasticity, the described procedure can be used to provide an initial guess for Newton's method at the final loading step. Alternatively, $\f{u}_{lin}^N$ can also serve as a stand-alone displacement field defining the deformation $\pmb{\Phi}:\Omega_0\to\Omega$. This may be an interesting option since only a linear elasticity solver is required to implement it. The linear diagonal incremental loading is also extensively used in ALE algorithms to deform the computational mesh for fluid domains in FSI problems \cite{stein2003mesh}.
\section{Examples} \subsection{2D single-patch domains} First, we consider two two-dimensional, single-patch examples. We demonstrate the performance of the mesh deformation approach and show its dependence on the initial domain and on the value of Poisson's ratio used in the material law. As a rule, we use Newton's method with the Nonlinear Diagonal Incremental Loading (N-DIL). However, we also apply the Linear Diagonal Incremental Loading (L-DIL) as a stand-alone deformation method and compare the results. Finally, we compare the output of the mesh deformation approach with the results of the elliptic grid generation technique \cite{hinz2018elliptic} and the constrained optimization approach based on the area-orthogonality quality measure \cite{Gravesen2014}. When comparing different parametrizations, we mainly use the minimum of the Jacobian determinant \begin{equation}\label{eq:minJ} m(\f{G}) = \displaystyle\min_{\pmb{\xi}\in[0,1]^d}J(\f{G}(\pmb{\xi})) \end{equation} as the most neutral quality measure which favors no parametrization quality other than bijectivity. The higher the value of $m(\f{G})$, the better. Second, we use the global ratio of the Jacobian determinant \begin{equation}\label{eq:ratioJ} R(\f{G}) = \frac{\displaystyle\max_{\pmb{\xi}\in[0,1]^d}J(\f{G}(\pmb{\xi}))}{\displaystyle\min_{\pmb{\xi}\in[0,1]^d}J(\f{G}(\pmb{\xi}))} \end{equation} as a measure of uniformity. The closer it is to 1, the better.
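Both measures are easy to approximate by sampling the Jacobian determinant. The sketch below uses central finite differences on a uniform grid instead of exact B-spline derivatives at Gauss points (an implementation shortcut of ours, for illustration only):

```python
def jacobian_quality(G, n=20, eps=1e-6):
    """Approximate the quality measures m(G) (eq:minJ) and R(G) (eq:ratioJ)
    of a 2D parametrization G(s, t) -> (x, y) by sampling the Jacobian
    determinant on an n x n grid via central finite differences.
    R is only meaningful for bijective parametrizations (m > 0)."""
    J = []
    for i in range(n):
        for j in range(n):
            s, t = (i + 0.5) / n, (j + 0.5) / n
            gs = [(G(s + eps, t)[k] - G(s - eps, t)[k]) / (2 * eps) for k in range(2)]
            gt = [(G(s, t + eps)[k] - G(s, t - eps)[k]) / (2 * eps) for k in range(2)]
            J.append(gs[0] * gt[1] - gs[1] * gt[0])
    m = min(J)
    R = max(J) / m if m > 0 else float("inf")
    return m, R
```

For the identity map this yields $m\approx1$ and $R\approx1$; a nonuniform map such as $(s,t)\mapsto(s^2,t)$ stays bijective on the sampled grid but shows a large ratio $R$.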
The mesh deformation approach is implemented using G+Smo \cite{jlmmz2014} $-$ an open-source C++ library providing the necessary IGA routines. The area-orthogonality optimization is based on the nonlinear optimization library IPOPT. An in-house Newton-Krylov solver written in Python is used to implement the elliptic grid generation technique.
\subsubsection{2D male rotor} As the first example, we study the profile of a screw compressor's male rotor \cite{shamanskiy2018screw}. Its boundary is given as four cubic B-spline curves, and the domain is fairly simple, so all considered parametrization techniques can be expected to perform well. We note, however, that the Coons patch does not produce a bijective parametrization when applied to this geometry.
Figure~\ref{fig:male} depicts the results of computing the deformation using Newton's method with N-DIL. Two initial domains were generated by applying $L^2$-simplification to each part of the boundary with the coarsest B-spline bases of degree $p=1$ and $p=3$. A value of 0.49 was used for Poisson's ratio. Such a high value required us to use at least $N=5$ loading steps for the initial domain with $p=1$ and $N=3$ for the initial domain with $p=3$. Both initial domains led to high-quality bijective parametrizations. With respect to the quality measures $m(\f{G})$ and $R(\f{G})$, the $p=1$ parametrization is better. It inherited its uniform structure from the initial domain due to the high value of Poisson's ratio. We use it as a baseline for the following comparison.
\begin{figure}
\caption{Male rotor example. Initial domains ($p = 1$ and $p=3$) and results of deformation by Newton's method with N-DIL and Poisson's ratio of 0.49.}
\label{fig:male}
\end{figure}
The result of the mesh deformation approach depends heavily on the choice of Poisson's ratio. To illustrate this, we applied Newton's method with N-DIL to deform the $p=1$ initial domain with Poisson's ratio of 0, see Fig.~\ref{fig:malePoiss}. The resulting parametrization is still bijective but is worse than the baseline with respect to $m(\f{G})$ and $R(\f{G})$. However, a lower value of Poisson's ratio made it easier to compute a bijective displacement field; only $N=1$ loading step was necessary.
Additionally, we show the performance of the L-DIL method as a stand-alone deformation technique. Figure~\ref{fig:maleLin} presents the results of applying it to deform the $p=1$ initial domain with Poisson's ratio of 0.49. After $N=10$ loading steps, the resulting parametrization is virtually indiscernible from the baseline. In order to achieve bijectivity, at least $N=3$ loading steps had to be used.
\begin{figure}
\caption{Male rotor example. Initial domain $p=1$ deformed by Newton's method with N-DIL and Poisson's ratio of 0 (left). Comparison of the blue corresponding mesh with the red baseline mesh (right).}
\label{fig:malePoiss}
\end{figure}
\begin{figure}
\caption{Male rotor example. Initial domain $p=1$ deformed by the L-DIL method with $N=10$ loading steps and Poisson's ratio of 0.49 (left). Comparison of the blue corresponding mesh with the red baseline mesh (right).}
\label{fig:maleLin}
\end{figure}
Finally, we applied elliptic grid generation and area-orthogonality optimization to enrich the comparison, see Fig.~\ref{fig:maleOther}. Both techniques produce high-quality bijective parametrizations; however, the baseline is better with respect to $m(\f{G})$ and $R(\f{G})$.
\begin{figure}
\caption{Male rotor example. Parametrizations by elliptic grid generation (left) and area-orthogonality optimization (right).}
\label{fig:maleOther}
\end{figure}
\subsubsection{2D puzzle piece} Next, we consider a puzzle piece example. Its boundary possesses distinct protruding and concave regions which make it difficult to construct a bijective tensor-product parametrization.
\begin{figure}
\caption{Puzzle piece example. Initial domains ($p = 1$ and $p=2$) and results of deformation by Newton's method with N-DIL and Poisson's ratio of 0.49.}
\label{fig:puzzle}
\end{figure}
We generated two different initial domains by applying the $L^2$-simplification to each part of the boundary with the coarsest B-spline bases of degree $p=1$ and $p=2$. Newton's method with N-DIL was applied with Poisson's ratio equal to 0.49. Such a high value, together with the complexity of the domain, made it necessary to use $N=13$ loading steps for the $p=1$ initial domain and $N=8$ for the $p=2$ initial domain. Figure~\ref{fig:puzzle} depicts the results of the deformation. Judging by $m(\f{G})$ and $R(\f{G})$, the $p=1$ initial domain results in a better parametrization. However, the middle neck-like region of the domain underwent a large deformation which resulted in the isoparametric lines being pushed away to the sides. On the other hand, the $p=2$ initial domain is geometrically much closer to the target domain so it had to be deformed less. This results in a visually more natural parametrization, which we use as a baseline for the following comparison.
\begin{figure}
\caption{Puzzle piece example. Initial domain $p=2$ deformed by Newton's method with N-DIL and Poisson's ratio of 0 (left). Comparison of the blue corresponding mesh with the red baseline mesh (right).}
\label{fig:puzzlePoiss}
\end{figure}
Due to the complexity of the domain, it is crucial to use a high value of Poisson's ratio to preserve bijectivity. Figure~\ref{fig:puzzlePoiss} demonstrates the results of deforming the $p=2$ initial domain by Newton's method with N-DIL and Poisson's ratio equal to 0. The isoparametric lines come together densely next to the concave parts of the boundary, and $m(\f{G})$ drops by almost an order of magnitude in comparison to the baseline, rendering the parametrization nearly non-bijective.
\begin{figure}
\caption{Puzzle piece example. Initial domain $p=2$ deformed by the L-DIL method with $N=10$ loading steps and Poisson's ratio of 0.49 (left). Comparison of the blue corresponding mesh with the red baseline mesh (right).}
\label{fig:puzzleLin}
\end{figure}
Additionally, we demonstrate the performance of the L-DIL method as a stand-alone parametrization technique in Figure~\ref{fig:puzzleLin}. The $p=2$ initial domain was deformed with Poisson's ratio of 0.49 and $N=15$ loading steps. Unlike in the male rotor example, the resulting parametrization is quite different from the baseline. Still, it is bijective and has the same value of $m(\f{G})$; however, the baseline is more uniform. At least $N=8$ loading steps are required to achieve bijectivity.
Finally, we applied elliptic grid generation and area-orthogonality optimization to the puzzle piece example. The former provides a barely bijective, highly non-uniform parametrization. The latter provides a high-quality parametrization, only slightly worse with respect to $m(\f{G})$ than the baseline.
\begin{figure}
\caption{Puzzle piece example. Parametrizations by elliptic grid generation (left) and area-orthogonality optimization (right).}
\label{fig:puzzleOther}
\end{figure}
\subsubsection*{Remark on numerical effort} Here we briefly describe our experience with respect to the numerical cost of the applied parametrization approaches. Unfortunately, since they are implemented in different programming languages, a fair comparison of the CPU time necessary for every method to produce a bijective parametrization is not possible. We can, however, get an impression of their numerical cost by looking at the number of iterations taken by each method. In our experience, elliptic grid generation is the fastest method, taking only 3-6 iterations to converge. When applying the mesh deformation approach, 3-10 loading steps are required to produce a bijective initial guess by the diagonal incremental loading. The result can be used as a final parametrization, or an additional 4-7 iterations of Newton's method are necessary to obtain a solution to the system (\ref{eq:nonlin}-\ref{eq:DBC}). Together, this results in 7-17 iterations. Finally, the optimization technique takes 40-70 iterations, which makes it the most computationally expensive.
\subsubsection*{Convergence of diagonal incremental loading} We conclude the analysis of different aspects of the mesh deformation approach by studying the convergence of the nonlinear and linear diagonal incremental loading approaches. As mentioned in Section 5, the result of N-DIL $\f{u}^N_{inc}$ converges quadratically to the solution $\f{u}$ of the system (\ref{eq:nonlin}-\ref{eq:DBC}) as the number of loading steps $N$ grows. At the same time, the result of L-DIL $\f{u}_{lin}^N$ converges linearly to a different displacement $\f{u}_{lin}$ which is quite close to $\f{u}$ and, surprisingly, bijective. Figure~\ref{fig:conv} presents a convergence plot where the relative errors \begin{equation}
\mathrm{err}_{\text{N-DIL}} = \frac{\|\f{u}-\f{u}^N_{inc}\|_{L^2}}{\|\f{u}\|_{L^2}} \quad \text{and} \quad \mathrm{err}_{\text{L-DIL}} = \frac{\|\f{u}_{lin}-\f{u}^N_{lin}\|_{L^2}}{\|\f{u}_{lin}\|_{L^2}} \end{equation} are plotted against $N$ for both examples.
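As a quick numerical sanity check of the claimed rates, the observed convergence order can be estimated from successive errors obtained when doubling $N$. The sketch below uses made-up model error sequences ($\mathrm{err}\sim C/N^2$ for a quadratic method, $\mathrm{err}\sim C/N$ for a linear one), not the measured data:

```python
import numpy as np

def observed_orders(errors):
    """Estimate the observed convergence order from errors err_N measured
    at N, 2N, 4N, ...: order ~ log2(err_N / err_2N)."""
    e = np.asarray(errors, dtype=float)
    return np.log(e[:-1] / e[1:]) / np.log(2.0)

# Made-up model sequences sampled at N = 1, 2, 4, 8, 16.
Ns = np.array([1, 2, 4, 8, 16], dtype=float)
quad_errs = 1.0 / Ns**2   # quadratic decay, as claimed for N-DIL
lin_errs = 1.0 / Ns       # linear decay, as claimed for L-DIL

print(observed_orders(quad_errs))  # ~ [2, 2, 2, 2]
print(observed_orders(lin_errs))   # ~ [1, 1, 1, 1]
```

Applied to the measured errors of Figure~\ref{fig:conv}, the same computation would recover the empirical orders of the two schemes.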
\begin{figure}
\caption{Convergence of the nonlinear and linear diagonal incremental loading algorithms for the male rotor and the puzzle piece examples. }
\label{fig:conv}
\end{figure}
\subsection{2D multi-patch female rotor} Here we show that the mesh deformation approach is applicable to multi-patch problems as well. Consider the female rotor example depicted in Figure~\ref{fig:female}. The initial domain consists of 8 patches connected in a $C^0$-fashion. Each patch is formed by linear interpolation between its corner points, which corresponds to applying the $L^2$-simplification with the coarsest basis of degree 1. The mesh deformation is conducted by Newton's method with N-DIL with $N=5$ loading steps and Poisson's ratio of $0.48$. It is interesting to observe the way $C^0$-interfaces between the patches deform in an attempt to assume a more natural shape, see Fig.~\ref{fig:femalecomp}. \begin{figure}
\caption{Mesh deformation approach for the female rotor. Initial domain (left) and resulting parametrization (right).}
\label{fig:female}
\end{figure}
\begin{figure}
\caption{Multi-patch structures of the initial (left) and the deformed (right) domains.}
\label{fig:femalecomp}
\end{figure}
\subsection{3D puzzle piece} Finally, we demonstrate that the mesh deformation approach is fully capable of dealing with 3D domains. Figures~\ref{fig:puzzle3d} and \ref{fig:puzzle3dcross} depict the result of applying it to a 3D puzzle piece example. The puzzle surface is simplified by the $L^2$-projection using the coarsest quadratic basis. The resulting initial domain is deformed using Newton's method with N-DIL with $N=10$ loading steps and Poisson's ratio equal to $0.46$.
\begin{figure}
\caption{Mesh deformation approach for the 3D puzzle piece. Initial domain (left) and resulting parametrization (right).}
\label{fig:puzzle3d}
\end{figure}
\begin{figure}
\caption{Cross-section of the 3D puzzle piece.}
\label{fig:puzzle3dcross}
\end{figure}
\section{Conclusion} In this paper, we investigated the mesh deformation approach to the problem of domain parametrization and used it to construct tensor product B-spline parametrizations of high quality. We proposed a general technique to generate initial domains which can be applied to a wide range of examples. Furthermore, we described several efficient algorithms for computing an approximate solution to arising equations of nonlinear elasticity tuned specifically for this application. We demonstrated the performance of the mesh deformation approach on two 2D examples and compared it to the elliptic grid generation and area-orthogonality based optimization techniques. While being relatively computationally inexpensive, the proposed approach successfully produced bijective parametrizations which are superior with respect to uniformity of the corresponding mesh. Additionally, we showed that the mesh deformation approach is not restricted to a 2D single-patch case but can be applied to 3D and multi-patch problems.
Further research directions include the development of an automatic procedure for choosing an optimal initial domain. Moreover, the proposed approach may benefit from a nonhomogeneous distribution of material parameters in the elasticity model; potentially, a specialized material law could be developed. Lastly, the use of a nonzero right-hand side in the equations of nonlinear elasticity may offer further room for improvement.
\end{document} | arXiv |
A rare CACNA1H variant associated with amyotrophic lateral sclerosis causes complete loss of Cav3.2 T-type channel activity
Robin N. Stringer, Bohumila Jurkovicova-Tarabova, Sun Huang, Omid Haji-Ghassemi, Romane Idoux, Anna Liashenko, Ivana A. Souza, Yuriy Rzhepetskyy, Lubica Lacinova, Filip Van Petegem, Gerald W. Zamponi, Roger Pamphlett & Norbert Weiss
Molecular Brain volume 13, Article number: 33 (2020)
Amyotrophic lateral sclerosis (ALS) is a neurodegenerative disorder characterized by the progressive loss of cortical, brain stem and spinal motor neurons that leads to muscle weakness and death. A previous study implicated CACNA1H encoding for Cav3.2 calcium channels as a susceptibility gene in ALS. In the present study, two heterozygous CACNA1H variants were identified by whole genome sequencing in a small cohort of ALS patients. These variants were functionally characterized using patch clamp electrophysiology, biochemistry assays, and molecular modeling. A previously unreported c.454GTAC > G variant produced an inframe deletion of a highly conserved isoleucine residue in Cav3.2 (p.ΔI153) and caused a complete loss-of-function of the channel, with an additional dominant-negative effect on the wild-type channel when expressed in trans. In contrast, the c.3629C > T variant caused a missense substitution of a proline with a leucine (p.P1210L) and produced a comparatively mild alteration of Cav3.2 channel activity. The newly identified ΔI153 variant is the first to be reported to cause a complete loss of Cav3.2 channel function. These findings add to the notion that loss-of-function of Cav3.2 channels associated with rare CACNA1H variants may be risk factors in the complex etiology of ALS.
Amyotrophic lateral sclerosis (ALS), also known as motor neuron disease or Lou Gehrig's disease, is a heterogeneous neuromuscular disease characterized by the degeneration of cortical, brain stem and spinal motor neurons that leads to muscle weakness and paralysis. Disease onset averages between 40 and 70 years of age [1], and the annual incidence worldwide is estimated to be between one and three per 100,000 people [2]. ALS is best regarded as a complex genetic disorder with a Mendelian pattern of inheritance in approximately 5–10% of patients (familial ALS, fALS), but most patients have no discernible family history of the disease, in which case the disease is referred to as "sporadic" or "isolated" (sALS) [3]. However, the observation that established fALS genes are also implicated in sALS makes the distinction between fALS and sALS more abstruse [4]. For instance, mutations in the most common ALS genes (SOD1, FUS, TARDBP, C9orf72, VCP, and PFN1) account for up to 70% of fALS patients and about 10% of sALS [5]. In addition, several genes and loci in apparent sALS patients have been proposed to be associated with an increased risk of ALS, or to modify the onset or progression of the disease, which highlights the importance of genetic risk factors [6]. Among these genes, the most prominent are ATXN2 [7], UNC13A [8], ANG [9], and SMN1 [10]. Recently, whole exome sequence analysis of case-unaffected-parents trios identified two compound heterozygous recessive missense mutations in the gene CACNA1H [11, 12].
In the present study, we report two additional CACNA1H variants (c.3629C > T, p.P1210L and c.454GTAC > G, p.ΔI153) identified using whole genome sequencing of a cohort of 34 sALS patients, with sequencing undertaken at the Genome Institute, Washington University, St Louis, USA. The method of whole genome analysis was the same as that reported in a separate study [11]. A related whole genome analysis revealed no pathogenic single-nucleotide or structural differences between monozygotic twins discordant for amyotrophic lateral sclerosis [13]. No unaffected parent DNA was subjected to whole genome sequencing, so it was not possible to determine if the variants were recessive or de novo in nature [11]. Functional analysis of these two variants revealed a complete loss of Cav3.2 channel function associated with the ΔI153 variant and a dominant-negative effect of this variant on the wild-type channel when expressed in trans.
Whole genome sequencing identifies heterozygous CACNA1H mutations in ALS patients
In a previous study, using case-unaffected parents trio exome analyses, we reported an ALS patient with two heterozygous CACNA1H missense mutations causing a partial loss-of-function of Cav3.2 channel, suggesting that rare CACNA1H variants may represent a risk factor for ALS [11, 12]. In the present study, using whole genome sequencing of a small cohort of ALS patients, we identified two additional heterozygous variants in CACNA1H. The first variant (c.3629C > T, p.P1210L) was identified in a man with ALS onset aged 55 years who died aged 62 years. He had no family history of ALS, though his father had Alzheimer's disease and his mother bipolar disorder. The P1210L variant is located in a non-conserved region of the intracellular linker connecting transmembrane domains II and III (II-III loop) of Cav3.2 (Fig. 1a and b). This variant has previously been reported in 188 out of 240,876 individuals in the gnomAD database (https://gnomad.broadinstitute.org/), including 144 of 193,008 alleles only from individuals who were not ascertained for having a neurological condition in a neurological case/control study. Furthermore, in silico analysis predicted the amino acid change to be neutral (Fig. 1c), suggesting that this variant is likely to not have a major pathological role. The second variant (c.454GTAC > G, p.ΔI153) was identified in a man with ALS onset aged 53 years who died aged 54 years. Although he had no family history of ALS, his mother developed insulin-dependent diabetes mellitus and narcolepsy, and his father presented with early onset dementia, a condition known to precede motor impairment in some people with ALS [14]. This mutation produces an inframe deletion of the isoleucine 153 located in the second transmembrane helix of Cav3.2, a region highly conserved across Cav3.2 channel orthologs (Fig. 1a and b). 
The ΔI153 variant has only been reported in 1 out of 198,036 individuals in the gnomAD database and this deletion was predicted to be deleterious on the channel (Fig. 1c). Hexanucleotide repeat number in C9orf72, the most common genetic cause of ALS, was normal in both patients.
Location of ALS-associated Cav3.2 variants. a Schematic representation of the membrane topology of Cav3.2 depicting the position of the ΔI153 (red circle) and P1210L variants (blue circle). b Amino acid sequence alignment of Cav3.2 regions containing the two mutations across several species showing the conservation of the I153 residue. Alignments were performed using UniProt (Homo sapiens O95180; Rattus norvegicus Q9EQ60; Mus musculus O88427; Pan troglodytes H2QA94; Macaca mulatta A0A1D5R8A8; Felis catus M3WP54; Canis lupus F1PQE5; Ficedula albicollis U3KGY9; Xenopus Tropicalis F6U0H3; Alligator sinensis A0A3Q0GL31). c In silico prediction of the potential impact of the P1210L and ΔI153 mutations on the functioning of Cav3.2 channel
The ΔI153 mutation causes a complete loss of Cav3.2 function
In the first series of experiments we assessed the functional expression of Cav3.2 P1210L and ΔI153 channel variants expressed in tsA-201 cells by whole-cell patch clamp electrophysiology. Cells expressing the P1210L channel variant displayed a characteristic low-threshold voltage-activated T-type current (Fig. 2a and b) that only differed from cells expressing the wild-type (WT) channel by a 32% reduction (p = 0.0125, Mann-Whitney test) of the maximal conductance (Gmax) (from 571.3 ± 58.4 pS/pF, n = 42 to 387.7 ± 33.9 pS/pF, n = 41) (Fig. 2c). The main electrophysiological properties, including voltage-dependence of activation and inactivation (Fig. 2d), and recovery from inactivation (Fig. 2e), remained unaffected. In cells expressing the ΔI153 channel variant, we did not record any T-type conductance (Fig. 2a-c). It is noteworthy that experimental conditions known to favor the expression of misfolded proteins, such as treatment of cells with the proteasome inhibitor MG132 or decrease of cell incubation temperature to 30 °C, were used but failed to restore a T-type conductance. Additionally, co-expression of the ΔI153 channel variant with Stac1 or with a calnexin-derived peptide that has previously been reported to potentiate the expression of Cav3.2 in the plasma membrane [15, 16] also failed to restore T-type currents (data not shown). The lack of functional expression of the ΔI153 channel variant could have been inherent in our experimental conditions using recombinant channels, so we aimed to further assess the phenotypic effect of the ΔI153 mutation on native Cav3.2 channels in a neuronal environment. Therefore, we used a CRISPR/Cas9 approach to introduce the ΔI153 mutation in native Cav3.2 channels in cultured dorsal root ganglion (DRG) neurons. DRG neurons were used as a model system since these neurons are known to display a T-type conductance that is almost exclusively carried by Cav3.2 channel subtype [17]. 
Consistent with our observation with recombinant Cav3.2 channels, T-type currents recorded from Cav3.2 ΔI153 DRG neurons 3 days after gene editing were reduced by 73% (Mann-Whitney p < 0.0001) compared to wild type neurons (from 15.4 ± 2.5 pA/pF, n = 12 to 4.1 ± 0.8 pA/pF, n = 12) (Fig. 2f and g).
Electrophysiological characterization of Cav3.2 P1210L and ΔI153 variants. a Representative T-type current traces recorded in response to 150 ms depolarizing steps to values ranging between − 90 mV and + 30 mV from a holding potential of − 100 mV for wild-type (WT, black traces), P1210L (blue traces), and ΔI153 (red traces) channel variants expressed in tsA-201 cells. b Corresponding mean current-voltage relationship (I/V) for WT (black circles), P1210L (blue circles), and ΔI153 (red circles) channels. c Corresponding mean maximal macroscopic conductance (Gmax) obtained from the fit of the I/V curves with the modified Boltzmann eq. (1). d Mean normalized voltage-dependence of activation and inactivation for WT (black circles) and P1210L channels (blue circles). e Mean normalized recovery from inactivation kinetics. f Representative T-type current traces recorded from WT (black trace) and ΔI153 DRG neurons (red trace) 3 days after editing of Cacna1h by CRISPR/Cas9 in response to 80 ms depolarizing steps to − 25 mV from a holding potential of − 90 mV. g Corresponding mean peak T-type current density at − 25 mV in WT and ΔI153 mutant DRG neurons
Collectively, these data revealed a mild loss of channel function associated with the P1210L variant, and the deleterious effect of the ΔI153 mutation leading to a complete loss of Cav3.2 activity.
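The group comparisons above rely on nonparametric Mann-Whitney tests on per-cell measurements. A minimal sketch of such a comparison with `scipy.stats.mannwhitneyu` is shown below; the samples are synthetic draws whose sizes and means merely mimic the reported WT and P1210L conductances, not the recorded data:

```python
import numpy as np
from scipy.stats import mannwhitneyu

rng = np.random.default_rng(seed=1)

# Synthetic maximal conductances Gmax (pS/pF); sample sizes and means
# loosely follow the reported values (WT: n=42, ~571; P1210L: n=41, ~388),
# but the individual values are invented.
g_wt = rng.normal(loc=571.3, scale=120.0, size=42)
g_mut = rng.normal(loc=387.7, scale=110.0, size=41)

u_stat, p_value = mannwhitneyu(g_wt, g_mut, alternative="two-sided")
print(f"U = {u_stat:.0f}, p = {p_value:.3g}")
```

With an effect of this magnitude the test rejects the null hypothesis of equal distributions, as in the reported comparison.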
The ΔI153 mutation disrupts Cav3.2 biogenesis
The alteration of T-type currents in ALS-associated Cav3.2 variants could originate from an overall decreased expression of channel proteins, reduced channel density in the plasma membrane, altered gating of the channel, or from a combination of several of these. Therefore, we first assessed the expression levels of P1210L and ΔI153 channel variants in tsA-201 cells by western blot (Fig. 3a). Immunoblot analysis from total cell lysates showed that the P1210L channel variant was present at a similar level as the WT channel (Fig. 3b). In contrast, the expression level of the ΔI153 channel variant was reduced by 78% (Mann-Whitney p = 0.0286), suggesting that this variant may undergo extensive degradation (Fig. 3b). Next, we aimed to assess the expression of Cav3.2 channel variants at the cell surface. Therefore, we analyzed charge movements (Q) that refer to the movement of the channel voltage-sensor in the plasma membrane in response to electrical membrane depolarizations. Total charges (Qmax) were assessed at the reversal potential of the ionic current, where we can consider Qrev to be equal to Qmax, providing an accurate assessment of the total number of channels in the plasma membrane (Fig. 3c). In cells expressing the P1210L variant, we observed a 27% reduction of Qmax (t-test p = 0.0467) compared to cells expressing the WT channel (from 12.0 ± 1.3 fC/pF, n = 19 to 8.7 ± 0.8 fC/pF, n = 18) (Fig. 3d). This reduction of Qmax is similar to the reduction of the maximal T-type conductance we previously observed (32%, Fig. 2c), suggesting that the decrease of the T-type conductance in cells expressing the P1210L channel variant is likely caused by a reduced expression of the channel in the plasma membrane. This notion is further supported by the observation that neither the Gmax/Qmax dependency (Fig. 3e), nor the kinetics of charge movements (Fig. 3f), were modified, indicating that the gating properties of the P1210L channel variant remained unaltered. 
In contrast, we did not detect any charge movement in cells expressing the ΔI153 channel variant (Fig. 3c and d), suggesting that despite being biochemically expressed, this variant is not present in the plasma membrane.
Expression of Cav3.2 P1210L and ΔI153 variants. a Representative immunoblot of Cav3.2 from tsA-201 cells expressing wild-type (WT), P1210L, and ΔI153 channel variants. b Corresponding mean expression levels of P1210L and ΔI153 variants relative to WT channels. c Representative charge movement traces recorded at the ionic reversal potential from cells expressing wild-type (WT, black trace), P1210L (blue trace), and ΔI153 (red trace) channel variants. The dotted line depicts the time course of the integral for each trace. d Corresponding mean Qmax values calculated for each investigated cell. e Corresponding mean Gmax/Qmax ratios. f Corresponding mean 10–90% rise times calculated from the integral time course shown in panel c
Altogether, these data are consistent with a mildly decreased surface expression of the P1210L variant without additional alterations. Importantly, these data demonstrate the profound deleterious effect of the ΔI153 mutation on the biogenesis and surface trafficking of Cav3.2 channels.
Dominant-negative effect of the ΔI153 channel variant
Given the heterozygosity of the ΔI153 mutation and the defective trafficking of the ΔI153 channel variant, we aimed to test whether this variant could have a dominant-negative effect on WT channels. Therefore, we co-expressed the WT and ΔI153 channels in tsA-201 cells in a 1:1 ratio (equal amount of cDNAs) and compared T-type currents with cells expressing the WT channel in combination with a cation-impermeant but trafficking-competent channel (PM). Recording of T-type currents in cells expressing a combination of WT:ΔI153 channels (Fig. 4a) revealed a 35% reduction (Mann-Whitney p = 0.0080) of the maximal T-type conductance compared to cells expressing a combination of WT:PM channels (from 569 ± 73 pS/pF, n = 38 to 372 ± 27 pS/pF, n = 58) (Fig. 4b and c), indicating that the ΔI153 variant produced a dominant-negative effect on the WT channel when expressed in trans. In contrast, the voltage-dependence of activation and inactivation remained unaltered. Given the comparatively mild phenotype produced by the P1210L mutation, the P1210L variant was not tested in combination with the WT channel. Finally, to test whether this dominant-negative effect could be mediated by an interaction between Cav3.2 subunits, we performed co-immunoprecipitations from tsA-201 cells co-expressing Myc-tagged and GFP-tagged Cav3.2 to discriminate between the two channels. We observed that the GFP-tagged Cav3.2 was immunoprecipitated with the Myc-tagged Cav3.2 using a specific anti-Myc antibody, revealing the ability of Cav3.2 channels to dimerize (Fig. 4d).
Electrophysiological characterization of Cav3.2 WT and ΔI153 expressed in trans. a Representative T-type current traces recorded from tsA-201 cells expressing WT channels in combination with either the ΔI153 variant (WT:ΔI153, red traces) or the cation-impermeant but trafficking-competent Cav3.2 pore mutant (WT:PM; grey traces) in a ratio 1:1. b Corresponding mean current-voltage relationship (I/V) for WT:ΔI153 (black|red circles), and WT:PM (black|grey circles) conditions. c Corresponding mean maximal macroscopic conductance (Gmax). d Co-immunoprecipitation of Cav3.2 from tsA-201 cells co-transfected with a Myc-tagged and GFP-tagged Cav3.2. The left panel shows the result of the co-immunoprecipitation of Myc-Cav3.2 with GFP-Cav3.2 using an anti-Myc antibody. The middle and right panels show the immunoblot of GFP-Cav3.2 and Myc-Cav3.2 using an anti-GFP and an anti-Myc antibody, respectively
Collectively, these data revealed the dominant-negative effect of the ΔI153 variant on the WT channel, a phenomenon likely to be mediated by the interaction between Cav3.2 subunits.
While several common genes are implicated in familial ALS, the occurrence of rare genetic variants in patients with no family history of the disease has emerged as a potential contributing factor in sporadic ALS [11]. In this study, we report two heterozygous CACNA1H variants identified by whole genome sequencing of a small cohort of ALS patients. Functional analysis revealed mild to severe alterations of Cav3.2 variants that were consistent with a loss-of-function of the channels.
The P1210L missense mutation was located in a variable region of Cav3.2 and was not predicted to be deleterious. Our electrophysiological analysis showed a moderate reduction of the expression of the P1210L channel variant at the cell surface and an associated reduction in the T-type conductance. We cannot entirely rule out the possibility that the phenotypic expression of the P1210L variant could have differed when introduced into a different Cav3.2 splice variant [18], or when functionally assessed under different experimental conditions [19], but our experimental data together with the relatively high occurrence of this variant in the general population strongly suggest that it is indeed unlikely to be pathogenic. In contrast, the ΔI153 variant had never been reported and was predicted to be deleterious. Electrophysiological analysis revealed a complete loss of functional expression of the ΔI153 variant, and recording of charge movements suggested that this variant was absent from the cell surface. Furthermore, our biochemical analysis revealed a dramatic decrease of the expression level of the channel protein, suggesting that this variant may have undergone extensive degradation. Of particular importance was the dominant-negative effect produced by the ΔI153 variant on the WT channel when the two channels were expressed in trans. This effect was likely to be mediated by the ability of Cav3.2 subunits to dimerize, which could have prevented the proper trafficking of the WT channel to the cell surface in the presence of the impaired ΔI153 variant. In this regard, it is worth considering that this dominant-negative effect may also have an effect on other ion channels. Indeed, Cav3.2 channels are known to biochemically interact with several calcium- and voltage-activated potassium and sodium channel subunits [20,21,22,23] whose surface trafficking and activity could be affected by the Cav3.2 ΔI153 variant.
The molecular mechanisms underlying the deleterious effect of the ΔI153 variant can be appreciated by examining the 3-dimensional environment of I153, and the possible impact of its deletion in the homology model of Cav3.2 we have developed, using the 3.3 Å CryoEM structure of Cav3.1 [24]. In this model, I153 is located within the transmembrane S2 alpha helix of domain I (Fig. 5a), where it is surrounded by hydrophobic residues near the membrane-cytosol interface (Fig. 5b). The nearby hydrophobic residues are highly conserved between L- and T-type channels and I153 shows a clear involvement in the helical packing (Fig. 5b). Therefore, deletion of I153 that results in a net loss of hydrophobicity within the transmembrane segment is likely to alter helix packing in domain I which would result in the misfolding of the channel. Additionally, deletion of I153 would also affect downstream residues in the helix due to a change in the helical register, thus further affecting the helical packing in the voltage-sensing domain.
Homology model of human Cav3.2. a Cartoon representation of secondary structural elements of human Cav3.2 (Uniprot O95180) homology model (residues 97–1974) based on Cav3.1 (PDB: 6KZO), showing side (left panel) and bottom (right panel) views of the channel. The four domains of Cav3.2 are colored in red, yellow, blue and green. The S1-S6 helices are indicated in red for domain I. Some of the flexible loops connecting the transmembrane helices are not shown, or could not be modeled, due to poor model accuracy or lack of structural information, respectively. The isoleucine 153 (Ile153) is shown in black. b Stereo diagram of Ile153 and nearby hydrophobic residues showing its involvement in the helical packing
From a clinical point of view, the loss-of-channel function associated with the ΔI153 variant could have several pathological implications. First, Cav3.2 is present in several central neurons, including reticular thalamic neurons [25], where they contribute to NMDA receptor-mediated synaptic transmission [26]. Given that gain-of-function mutations associated with childhood absence epilepsy were shown to enhance synaptic activities [26], the reciprocal theory would suggest that loss-of-channel function could, in contrast, decrease synaptic transmission. Along these lines, neuroimaging studies have revealed decreased thalamic activity in ALS [27,28,29,30,31,32], and a recent MRI study reported alterations of thalamic connectivities that mirrored the progressive motor functional decline in ALS [33]. Second, although the functional expression of Cav3.2 in mammalian motor neurons remains elusive, several studies suggest that T-type channels may have a functional role. For instance, Cav3.1 channels are present in turtle spinal motor neurons where they contribute to cellular excitability [34]. In addition, a low-threshold voltage-activated calcium conductance was reported at nodes of Ranvier in mouse spinal motor neurons, suggesting the presence of T-type channels [35]. Third, a T-type channel ortholog is present in motor neurons of the nematode C. elegans [36] where it contributes to motor-related functions [37, 38]. Finally, a recent study documented the role of T-type channels in the maintenance of neuronal progenitor cells [39]. A loss-of-function of Cav3.2 could compromise the architecture of nerve cells and precipitate neuronal degeneration.
In conclusion, this newly identified ΔI153 variant is the first to be reported to cause a complete loss of Cav3.2 channel function [40]. Although its pathogenic role in the context of ALS remains to be established, these findings add to the notion that rare CACNA1H variants represent a risk factor for ALS. Furthermore, several T-type channel blockers are currently being used for the treatment of epilepsy [41]. The question then arises as to whether long-term use of these molecules may present a risk to the development of ALS. This notion should be given particular attention, especially considering that several other T-type channel blockers are currently being evaluated in clinical trials for the management of epilepsy and chronic pain symptoms.
Plasmids cDNA constructs and site-directed mutagenesis
The Cav3.2 P1210L and ΔI153 channel variants were created by introducing the respective mutations into the human wild-type HA-tagged Cav3.2 in pcDNA3.1 [42] by PCR-based site-directed mutagenesis using Q5® Site-Directed Mutagenesis Kit (New England Biolabs) and the following mutagenic primers: delI153: 5′-TCAAGATGGTGGCCTTGG-3′ (forward) and 5′-CCATCTCCACCGCAAAAAAG-3′ (reverse); P1210L: 5′-GCCGCCCTCCtGCCTACCAAGTGC-3′ (forward) and 5′-CGGCCGCAGGGGCCGTGG-3′ (reverse). The cation-impermeant Cav3.2 channel was generated by replacing the glutamic acid 378 in domain I with a lysine (E378K) by site-directed mutagenesis. Final constructs were verified by sequencing of the coding sequence of the plasmid cDNAs.
Cell culture and heterologous expression
Human embryonic kidney tsA-201 cells were grown in DMEM medium supplemented with 10% fetal bovine serum and 1% penicillin/streptomycin (all media purchased from Invitrogen) and maintained under standard conditions at 37 °C in a humidified atmosphere containing 5% CO2. Heterologous expression of Cav3.2 channels was performed by transfecting cells with 5 μg plasmid cDNAs encoding for Cav3.2 channel variants using the calcium/phosphate method. For experiments aiming at investigating the dominant negative effect of the ΔI153 variant, cells were co-transfected with 2.5 μg plasmid cDNA encoding for WT channels with either 2.5 μg plasmid cDNA encoding for the ΔI153 channel variant or 2.5 μg plasmid cDNA encoding for a non-conducting but trafficking-competent Cav3.2 (PM).
Patch clamp electrophysiology
Patch clamp recordings of T-type currents in tsA-201 cells expressing Cav3.2 channel variants were performed 72 h after transfection in the whole-cell configuration at room temperature (22-24 °C) as previously described [43]. The bath solution contained (in millimolar): 5 BaCl2, 5 KCl, 1 MgCl2, 128 NaCl, 10 TEA-Cl, 10 D-glucose, 10 4-(2-hydroxyethyl)-1-piperazineethanesulfonic acid (HEPES) (pH 7.2 with NaOH). Patch pipettes were filled with a solution containing (in millimolar): 110 CsCl, 3 Mg-ATP, 0.5 Na-GTP, 2.5 MgCl2, 5 D-glucose, 10 EGTA, and 10 HEPES (pH 7.4 with CsOH), and had a resistance of 2–4 MΩ. Recordings were performed using an Axopatch 200B amplifier (Axon Instruments) and acquisition and analysis were performed using pClamp 10 and Clampfit 10 software, respectively (Axon Instruments). The linear leak component of the current was corrected online and current traces were digitized at 10 kHz and filtered at 2 kHz. The voltage dependence of activation of Cav3.2 channels was determined by measuring the peak T-type current amplitude in response to 150 ms depolarizing steps to various potentials applied every 10 s from a holding membrane potential of − 100 mV. The current-voltage relationship (I/V) curve was fitted with the following modified Boltzmann eq. (1):
$$ I(V)=G_{\max}\,\frac{\left(V-V_{\mathrm{rev}}\right)}{1+\exp\left(\frac{V_{0.5}-V}{k}\right)} $$
with I(V) being the peak current amplitude at the command potential V, Gmax the maximum conductance, Vrev the reversal potential, V0.5 the half-activation potential, and k the slope factor. The voltage dependence of the whole-cell Ba2+ conductance was calculated using the following modified Boltzmann eq. (2):
$$ G(V)=\frac{G_{\max}}{1+\exp\left(\frac{V_{0.5}-V}{k}\right)} $$
with G(V) being the Ba2+ conductance at the command potential V.
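As a minimal sketch of how eqs. (1) and (2) relate, the following Python snippet evaluates both Boltzmann expressions. The parameter values (Gmax, Vrev, V0.5, k) are illustrative assumptions, not fitted values from the study.

```python
import math

def i_v(v, g_max, v_rev, v_half, k):
    """Peak current from the modified Boltzmann I/V relation, eq. (1)."""
    return g_max * (v - v_rev) / (1.0 + math.exp((v_half - v) / k))

def g_v(v, g_max, v_half, k):
    """Whole-cell conductance from the Boltzmann relation, eq. (2)."""
    return g_max / (1.0 + math.exp((v_half - v) / k))

# Illustrative (not measured) parameters: Gmax = 1 nS, Vrev = +30 mV,
# V0.5 = -45 mV, slope factor k = 5 mV.
params = dict(g_max=1.0, v_rev=30.0, v_half=-45.0, k=5.0)

# At V = V0.5 the Boltzmann factor equals 1/2, so G = Gmax/2
# and I = G * (V - Vrev).
i = i_v(-45.0, **params)                                         # -37.5
g = g_v(-45.0, params["g_max"], params["v_half"], params["k"])   # 0.5
```

Note that G(V) = I(V)/(V - Vrev), which is how the conductance relation (2) follows from the I/V relation (1).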
The voltage dependence of the steady-state inactivation of Cav3.2 channels was determined by measuring the peak T-type current amplitude in response to a 150 ms depolarizing step to − 20 mV applied after a 5 s-long conditioning prepulse ranging from − 120 mV to − 30 mV. The current amplitude obtained during each test pulse was normalized to the maximal current amplitude and plotted as a function of the prepulse potential. The voltage dependence of the steady-state inactivation was fitted with the following two-state Boltzmann function (3):
$$ I(V)=\frac{I_{\max}}{1+\exp\left(\frac{V-V_{0.5}}{k}\right)} $$
with Imax corresponding to the maximal peak current amplitude and V0.5 to the half-inactivation voltage.
The recovery from inactivation was assessed using a double-pulse protocol from a holding potential of − 100 mV. The cell membrane was depolarized for 2 s at 0 mV (inactivating prepulse) to ensure complete inactivation of the channel, and then to − 20 mV for 150 ms (test pulse) after an increasing time period (interpulse) ranging between 0.1 ms and 2 s at − 100 mV. The peak current from the test pulse was plotted as a ratio of the maximum prepulse current versus interpulse interval. The data were fitted with the following single-exponential function (4):
$$ \frac{I}{I_{\max}}=A\times\left(1-\exp\left(\frac{-t}{\tau}\right)\right) $$
where τ is the time constant for channel recovery from inactivation.
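Assuming the amplitude A is known, eq. (4) linearizes to ln(1 - y/A) = -t/τ, so τ can be recovered from a least-squares slope through the origin. A small pure-Python sketch on synthetic, noise-free data generated with an assumed τ of 300 ms:

```python
import math

def recovery_fraction(t, a, tau):
    """Recovery from inactivation, eq. (4): I/Imax = A * (1 - exp(-t/tau))."""
    return a * (1.0 - math.exp(-t / tau))

def fit_tau(times, fractions, a=1.0):
    """Estimate tau by linearizing eq. (4): ln(1 - y/A) = -t/tau.
    Least-squares slope through the origin; assumes A is known."""
    num = sum(t * math.log(1.0 - y / a) for t, y in zip(times, fractions))
    den = sum(t * t for t in times)
    return -den / num

# Synthetic interpulse intervals (ms) and noise-free recovery fractions
# generated with an assumed tau of 300 ms.
times = [10.0, 50.0, 100.0, 300.0, 600.0, 1000.0, 2000.0]
data = [recovery_fraction(t, 1.0, 300.0) for t in times]
tau_hat = fit_tau(times, data)   # recovers 300 ms on noise-free data
```

With noisy measurements one would instead fit eq. (4) directly by nonlinear least squares, but the linearization shows where the time constant comes from.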
Measurement of charge movements
Recording of charge movements was performed 72 h after transfection as previously described [44, 45]. The bath solution contained (in millimolar): CsCl 95; TEACl 40; BaCl2 5; MgCl2 1; HEPES 10; glucose 10; pH 7.4 (adjusted with CsOH). Patch pipettes had a resistance ranging from 1.8 MΩ to 2.2 MΩ when filled with a solution containing (in millimolar): CH3SO3Cs 130; Na-ATP 5; TEACl 10; HEPES 10; EGTA 10; MgCl2 5; pH 7.4 (adjusted with CsOH). Osmolarity of the intracellular solution was approximately 300 mOsmol/L. Osmolarity of the extracellular solution was adjusted by adding sucrose so that the final value was about 2–3 mOsmol/L lower than the osmolarity of the corresponding intracellular solution. Recordings were performed using a HEKA EPC10 amplifier (HEKA Electronics). Acquisition was performed using Patchmaster v90.2, and analysis using Fitmaster v2x73.1 and Origin Pro 2015 software. Only cells with an input resistance less than 5 MΩ were considered. The input resistance and capacity transients were compensated by up to 70% with the built-in circuits of the EPC10 amplifier. Remaining artifacts were subtracted using a -P/8 procedure. ON-gating currents were recorded in response to a series of 5 depolarizing pulses at the reversal potential of the ionic current assessed for each cell, and the total gating charge QON was calculated as the integral of the area below the averaged current traces.
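Integrating the area below a sampled current trace, as done for QON, can be approximated with the trapezoidal rule. The sketch below uses a synthetic exponentially decaying gating current; the amplitude, time constant, and sampling rate are illustrative assumptions, not recording parameters from the study.

```python
import math

def trapezoid(samples, dt):
    """Trapezoidal approximation of the area under a uniformly sampled trace."""
    return dt * (sum(samples) - 0.5 * (samples[0] + samples[-1]))

# Synthetic ON-gating current (nA) decaying with a 1 ms time constant,
# sampled every 0.1 ms over 10 ms -- purely illustrative numbers.
dt = 0.1                                          # ms per sample
trace = [math.exp(-i * dt) for i in range(101)]   # i * dt spans 0..10 ms
q_on = trapezoid(trace, dt)                       # charge in nA*ms (= pC)

# Analytic reference for this synthetic trace: integral of exp(-t)
# over [0, 10] is 1 - exp(-10), so q_on should land close to that.
```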
CRISPR/Cas9 genome editing in DRG neurons
Male rats (6-week-old) were purchased from Charles River and DRG neurons were harvested as described previously [46]. The next day, neurons were transfected with CRISPR/Cas9 plasmids (Cas9-sgRNA plasmid and donor plasmid purchased from GeneCopoeia) using Lipofectamine 2000 from Invitrogen (Cat. 11,668–019). The sequence of the CRISPR guide RNA was CGTGGAGATGGTGATCAAGA. The donor plasmid contained the homologous arms of the genomic DNA without I153. Whole-cell voltage-clamp recordings of T-type currents were performed 3 days post transfection. The external solution contained (in mM): 40 TEACl, 65 CsCl, 20 BaCl2, 1 MgCl2, 10 HEPES, 10 D-glucose, pH 7.4. The internal solution contained (in mM): 140 CsCl, 2.5 CaCl2, 1 MgCl2, 5 EGTA, 10 HEPES, 2 Na-ATP, 0.3 Na-GTP, pH 7.3. We used GFP fluorescence to specifically identify neurons that were transfected with the CRISPR plasmids. The overall percentage of GFP-positive neurons in a dish was relatively low, and hence bulk genomic sequencing could not be used for verification. However, given the large functional effect on current densities, we are confident that the use of GFP fluorescence is an appropriate means of identifying neurons that were targeted with these plasmids. We specifically targeted medium-diameter neurons for our analysis. The mean capacitance of the neurons that we recorded from was 24.79 ± 4.40 pF for control neurons versus 21.89 ± 1.31 pF for CRISPR-edited neurons.
SDS-PAGE and immunoblot analysis
Immunoblot of HA-tagged Cav3.2 channel was performed as previously described [16]. Briefly, total cell lysate from tsA-201 cells expressing HA-Cav3.2 channels was separated on a 5–20% gradient SDS-PAGE gel and transferred onto PVDF membrane (Millipore). Detection of HA-Cav3.2 was performed using a primary rat monoclonal anti-HA antibody (1:1000, Roche) and secondary HRP-conjugated antibody (1:10,000, Jackson ImmunoResearch). Immunoreactive products were detected by enhanced chemiluminescence and analyzed using ImageJ software.
For co-immunoprecipitation, cell lysates containing GFP-tagged and Myc-tagged Cav3.2 were incubated for 3 h with a biotinylated mouse monoclonal anti-Myc antibody (Santa Cruz Biotechnology), and then for 45 min with streptavidin beads (Invitrogen) at 4 °C, and washed with PBS/Tween-20 buffer. Beads were resuspended in Laemmli buffer and immunoprecipitation samples were separated on SDS-PAGE gel.
Generation of human Cav3.2 homology model
The homology model of the human Cav3.2 channel was prepared using the Cav3.1 structure as a template (PDB: 6KZO) in conjunction with Swiss-Model server (https://swissmodel.expasy.org/) [47]. Figures were prepared using Pymol (v2.2 Schrödinger, LLC.).
Data values are presented as mean ± SEM for n measurements. Statistical analysis was performed using GraphPad Prism 7. Statistical significance was determined using Student's t-test for datasets passing the D'Agostino-Pearson omnibus normality test, and using a Mann-Whitney test otherwise. Datasets were considered significantly different for p ≤ 0.05.
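The parametric branch of the decision rule above can be illustrated with the pooled-variance two-sample Student's t statistic. This is a minimal pure-Python sketch on invented toy samples; in practice a statistics package (GraphPad Prism in the study) computes the statistic, the normality test, and the p-value.

```python
import math

def mean(xs):
    return sum(xs) / len(xs)

def students_t(a, b):
    """Pooled-variance two-sample Student's t statistic (equal variances),
    the parametric test applied when both samples pass a normality check."""
    na, nb = len(a), len(b)
    ma, mb = mean(a), mean(b)
    ssa = sum((x - ma) ** 2 for x in a)   # sum of squared deviations, sample a
    ssb = sum((x - mb) ** 2 for x in b)   # sum of squared deviations, sample b
    sp2 = (ssa + ssb) / (na + nb - 2)     # pooled variance
    se = math.sqrt(sp2 * (1.0 / na + 1.0 / nb))
    return (ma - mb) / se

# Toy samples (illustrative, not study data): means differ by exactly one
# pooled standard error here.
t = students_t([1, 2, 3, 4, 5], [2, 3, 4, 5, 6])   # -1.0
```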
The data used and/or analyzed during the current study are available from the corresponding author on reasonable request.
ALS: Amyotrophic lateral sclerosis
DRG: Dorsal root ganglia
GFP: Green fluorescent protein
Gmax: Maximal macroscopic conductance
MRI: Magnetic resonance imaging
PM: Pore mutant
Qmax: Maximal charge movements
Qrev: Charge movements at the reversal calcium potential
WT: Wild-type
Marin B, Fontana A, Arcuti S, Copetti M, Boumédiene F, Couratier P, et al. Age-specific ALS incidence: a dose-response meta-analysis. Eur J Epidemiol. 2018;33(7):621–34.
Marin B, Boumédiene F, Logroscino G, Couratier P, Babron MC, Leutenegger AL, et al. Variation in worldwide incidence of amyotrophic lateral sclerosis: a meta-analysis. Int J Epidemiol. 2017;46(1):57–74.
Hardiman O, Al-Chalabi A, Chio A, Corr EM, Logroscino G, Robberecht W, et al. Amyotrophic lateral sclerosis. Nat Rev Dis Primers. 2017;3:17071.
Talbot K. Familial versus sporadic amyotrophic lateral sclerosis--a false dichotomy. Brain. 2011;134(Pt 12):3429–31.
Nguyen HP, Van Broeckhoven C, van der Zee J. ALS genes in the genomic era and their implications for FTD. Trends Genet. 2018;34(6):404–23.
Gibson SB, Downie JM, Tsetsou S, Feusier JE, Figueroa KP, Bromberg MB, et al. The evolving genetic risk for sporadic ALS. Neurology. 2017;89(3):226–33.
Sproviero W, Shatunov A, Stahl D, Shoai M, van Rheenen W, Jones AR, et al. ATXN2 trinucleotide repeat length correlates with risk of ALS. Neurobiol Aging. 2017;51:178.e1–9.
van Es MA, Veldink JH, Saris CG, Blauw HM, van Vught PW, Birve A, et al. Genome-wide association study identifies 19p13.3 (UNC13A) and 9p21.2 as susceptibility loci for sporadic amyotrophic lateral sclerosis. Nat Genet. 2009;41(10):1083–7.
Greenway MJ, Andersen PM, Russ C, Ennis S, Cashman S, Donaghy C, et al. ANG mutations segregate with familial and 'sporadic' amyotrophic lateral sclerosis. Nat Genet. 2006;38(4):411–3.
Corcia P, Camu W, Halimi JM, Vourc'h P, Antar C, Vedrine S, et al. SMN1 gene, but not SMN2, is a risk factor for sporadic ALS. Neurology. 2006;67(7):1147–50.
Steinberg KM, Yu B, Koboldt DC, Mardis ER, Pamphlett R. Exome sequencing of case-unaffected-parents trios reveals recessive and de novo genetic variants in sporadic ALS. Sci Rep. 2015;5:9124.
Rzhepetskyy Y, Lazniewska J, Blesneac I, Pamphlett R, Weiss N. CACNA1H missense mutations associated with amyotrophic lateral sclerosis alter Cav3.2 T-type calcium channel activity and reticular thalamic neuron firing. Channels (Austin). 2016;10(6):466–77.
Meltz Steinberg K, Nicholas TJ, Koboldt DC, Yu B, Mardis E, Pamphlett R. Whole genome analyses reveal no pathogenetic single nucleotide or structural differences between monozygotic twins discordant for amyotrophic lateral sclerosis. Amyotroph Lateral Scler Frontotemporal Degener. 2015;16(5–6):385–92.
Nitrini R. Frontotemporal dementia and amyotrophic lateral sclerosis: revisiting one of the first case reports with neuropathology examination. Dement Neuropsychol. 2014;8(1):83–6.
Rzhepetskyy Y, Lazniewska J, Proft J, Campiglio M, Flucher BE, Weiss N. A Cav3.2/Stac1 molecular complex controls T-type channel expression at the plasma membrane. Channels (Austin). 2016;10(5):346–54.
Proft J, Rzhepetskyy Y, Lazniewska J, Zhang FX, Cain SM, Snutch TP, et al. The Cacna1h mutation in the GAERS model of absence epilepsy enhances T-type Ca2+ currents by altering calnexin-dependent trafficking of Cav3.2 channels. Sci Rep. 2017;7(1):11513.
Bourinet E, Alloui A, Monteil A, Barrère C, Couette B, Poirot O, et al. Silencing of the Cav3.2 T-type calcium channel gene in sensory neurons demonstrates its major role in nociception. EMBO J. 2005;24(2):315–24.
Powell KL, Cain SM, Ng C, Sirdesai S, David LS, Kyi M, et al. A Cav3.2 T-type calcium channel point mutation has splice-variant-specific effects on function and segregates with seizure expression in a polygenic rat model of absence epilepsy. J Neurosci. 2009;29(2):371–80.
Souza IA, Gandini MA, Wan MM, Zamponi GW. Two heterozygous Cav3.2 channel mutations in a pediatric chronic pain patient: recording condition-dependent biophysical effects. Pflugers Arch. 2016;468(4):635–42.
Anderson D, Mehaffey WH, Iftinca M, Rehak R, Engbers JD, Hameed S, et al. Regulation of neuronal activity by Cav3-Kv4 channel signaling complexes. Nat Neurosci. 2010;13(3):333–7.
Engbers JD, Anderson D, Asmara H, Rehak R, Mehaffey WH, Hameed S, et al. Intermediate conductance calcium-activated potassium channels modulate summation of parallel fiber input in cerebellar Purkinje cells. Proc Natl Acad Sci U S A. 2012;109(7):2601–6.
Rehak R, Bartoletti TM, Engbers JD, Berecki G, Turner RW, Zamponi GW. Low voltage activation of KCa1.1 current by Cav3-KCa1.1 complexes. PLoS One. 2013;8(4):e61844.
Garcia-Caballero A, Gandini MA, Huang S, Chen L, Souza IA, Dang YL, et al. Cav3.2 calcium channel interactions with the epithelial sodium channel ENaC. Mol Brain. 2019;12(1):12.
Zhao Y, Huang G, Wu Q, Wu K, Li R, Lei J, et al. Cryo-EM structures of apo and antagonist-bound human Cav3.1. Nature. 2019;576(7787):492–7.
Talley EM, Cribbs LL, Lee JH, Daud A, Perez-Reyes E, Bayliss DA. Differential distribution of three members of a gene family encoding low voltage-activated (T-type) calcium channels. J Neurosci. 1999;19(6):1895–911.
Wang G, Bochorishvili G, Chen Y, Salvati KA, Zhang P, Dubel SJ, et al. CaV3.2 calcium channels control NMDA receptor-mediated transmission: a new mechanism for absence epilepsy. Genes Dev. 2015;29(14):1535–51.
Turner MR, Cagnin A, Turkheimer FE, Miller CC, Shaw CE, Brooks DJ, et al. Evidence of widespread cerebral microglial activation in amyotrophic lateral sclerosis: an [11C](R)-PK11195 positron emission tomography study. Neurobiol Dis. 2004;15(3):601–9.
Chang JL, Lomen-Hoerth C, Murphy J, Henry RG, Kramer JH, Miller BL, et al. A voxel-based morphometry study of patterns of brain atrophy in ALS and ALS/FTLD. Neurology. 2005;65(1):75–80.
Sharma KR, Saigal G, Maudsley AA, Govind V. 1H MRS of basal ganglia and thalamus in amyotrophic lateral sclerosis. NMR Biomed. 2011;24(10):1270–6.
Sharma KR, Sheriff S, Maudsley A, Govind V. Diffusion tensor imaging of basal ganglia and thalamus in amyotrophic lateral sclerosis. J Neuroimaging. 2013;23(3):368–74.
Bede P, Elamin M, Byrne S, McLaughlin RL, Kenna K, Vajda A, et al. Basal ganglia involvement in amyotrophic lateral sclerosis. Neurology. 2013;81(24):2107–15.
Menke RA, Körner S, Filippini N, Douaud G, Knight S, Talbot K, et al. Widespread grey matter pathology dominates the longitudinal cerebral MRI and clinical landscape of amyotrophic lateral sclerosis. Brain. 2014;137(Pt 9):2546–55.
Tu S, Menke RAL, Talbot K, Kiernan MC, Turner MR. Regional thalamic MRI as a marker of widespread cortical pathology and progressive frontotemporal involvement in amyotrophic lateral sclerosis. J Neurol Neurosurg Psychiatry. 2018;89(12):1250–8.
Canto-Bustos M, Loeza-Alcocer E, González-Ramírez R, Gandini MA, Delgado-Lezama R, Felix R. Functional expression of T-type Ca2+ channels in spinal motoneurons of the adult turtle. PLoS One. 2014;9:e108187.
Zhang Z, David G. Stimulation-induced Ca (2+) influx at nodes of Ranvier in mouse peripheral motor axons. J Physiol. 2016;594(1):39–57.
Shtonda B, Avery L. CCA-1, EGL-19 and EXP-2 currents shape action potentials in the Caenorhabditis elegans pharynx. J Exp Biol. 2005;208(Pt 11):2177–90.
Steger KA, Shtonda BB, Thacker C, Snutch TP, Avery L. The C. elegans T-type calcium channel CCA-1 boosts neuromuscular transmission. J Exp Biol. 2005;208(Pt 11):2191–203.
Nicoletti M, Loppini A, Chiodo L, Folli V, Ruocco G, Filippi S. Biophysical modeling of C. elegans neurons: Single ion currents and whole-cell dynamics of AWCon and RMD. PLoS One. 2019;14(7):e0218738.
Kim JW, Oh HA, Lee SH, Kim KC, Eun PH, Ko MJ, et al. T-type calcium channels are required to maintain viability of neural progenitor cells. Biomol Ther (Seoul). 2018;26(5):439–45.
Weiss N, Zamponi GW. Genetic T-type calcium channelopathies. J Med Genet. 2020;57(1):1–10.
Weiss N, Zamponi GW. T-type calcium channels: from molecule to therapeutic opportunities. Int J Biochem Cell Biol. 2019;108:34–9.
Dubel SJ, Altier C, Chaumont S, Lory P, Bourinet E, Nargeot J. Plasma membrane expression of T-type calcium channel alpha (1) subunits is modulated by high voltage-activated auxiliary subunits. J Biol Chem. 2004;279(28):29263–9.
Carter MT, McMillan HJ, Tomin A, Weiss N. Compound heterozygous CACNA1H mutations associated with severe congenital amyotrophy. Channels (Austin). 2019;13(1):153–61.
Ondacova K, Karmazinova M, Lazniewska J, Weiss N, Lacinova L. Modulation of Cav3.2 T-type calcium channel permeability by asparagine-linked glycosylation. Channels (Austin). 2016;10(3):175–84.
Jurkovicova-Tarabova B, Cmarko L, Rehak R, Zamponi GW, Lacinova L, Weiss N. Identification of a molecular gating determinant within the carboxy terminal region of Cav3.3 T-type channels. Mol Brain. 2019;12(1):34.
Altier C, Khosravani H, Evans RM, Hameed S, Peloquin JB, Vartian BA, et al. ORL1 receptor-mediated internalization of N-type calcium channels. Nat Neurosci. 2006;9(1):31–40.
Waterhouse A, Bertoni M, Bienert S, Studer G, Tauriello G, Gumienny R, et al. SWISS-MODEL: homology modelling of protein structures and complexes. Nucleic Acids Res. 2018;46(W1):W296–303.
We thank the patients and family members for their contribution to this study.
N.W. is supported by the Institute of Organic Chemistry and Biochemistry. L.L. is supported by a grant VEGA 2/0143/19. N.W. and L.L. are supported by a bilateral SAS-CAS project (SAV-18-22). G.W.Z. is supported by a grant from the Natural Sciences and Engineering Research Council and holds a Canada Research Chair.
Institute of Organic Chemistry and Biochemistry, Czech Academy of Sciences, Flemingovo nam 2, 16610, Prague, Czech Republic
Robin N. Stringer, Romane Idoux, Anna Liashenko, Yuriy Rzhepetskyy & Norbert Weiss
Third Faculty of Medicine, Charles University, Prague, Czech Republic
Robin N. Stringer
Center of Biosciences, Institute of Molecular Physiology and Genetics, Academy of Sciences, Bratislava, Slovakia
Bohumila Jurkovicova-Tarabova & Lubica Lacinova
Department of Physiology and Pharmacology, Cumming School of Medicine, University of Calgary, Calgary, Canada
Sun Huang, Ivana A. Souza & Gerald W. Zamponi
Department of Biochemistry and Molecular Biology, University of British Columbia, Vancouver, Canada
Omid Haji-Ghassemi & Filip Van Petegem
Discipline of Pathology, Brain and Mind Centre, The University of Sydney, Sydney, NSW, Australia
Roger Pamphlett
N.W., L.L., F.V.P., G.W.Z., and R.P. designed and conceptualized the study. R.N.S., R.I., B.J.T., O.H., S.H., I.A.S., A.L., Y.R., and N.W. collected data, performed analysis and interpreted the results. N.W. and R.P. wrote the manuscript. All authors critically revised the manuscript and contributed significantly to this work. The authors read and approved the final manuscript.
Correspondence to Norbert Weiss.
The whole genome sequencing of white blood cell DNA that gave rise to the finding of the genetic variants further characterised in the present study was undertaken by RP in a joint University of Sydney (Australia) and the Genome Institute Washington University (St Louis, USA) project using DNA samples from the Australian Motor Neuron Disease DNA Bank, with approval from the Sydney South West Area Health Service Human Research Ethics Committee. Informed written consent was obtained from each individual for their DNA to be used for research purposes.
Stringer, R.N., Jurkovicova-Tarabova, B., Huang, S. et al. A rare CACNA1H variant associated with amyotrophic lateral sclerosis causes complete loss of Cav3.2 T-type channel activity. Mol Brain 13, 33 (2020). https://doi.org/10.1186/s13041-020-00577-6
Motor neuron disease
CACNA1H
Cav3.2 channel
T-type channel | CommonCrawl |
A Study on Residual Compression Behavior of Structural Fiber Reinforced Concrete Exposed to Moderate Temperature Using Digital Image Correlation
G. Srikar1,
G. Anand1 &
S. Suriya Prakash1
International Journal of Concrete Structures and Materials, volume 10, pages 75–85 (2016)
Fire ranks high among the potential risks faced by most buildings and structures, yet a full understanding of temperature effects on fiber reinforced concrete is still lacking. This investigation focuses on the residual compressive strength, stress–strain behavior and surface cracking of structural polypropylene fiber-reinforced concrete subjected to temperatures up to 300 °C. A total of 48 cubes were cast with different fiber dosages and tested under compression after exposure to different temperatures. Digital image correlation, an advanced non-contacting method, was used for measuring strain. Trends in the relative residual strengths with respect to different fiber dosages indicate an improvement of up to 15 % in the ultimate compressive strength at all exposure temperatures. The stress–strain curves show an improvement in post-peak behavior with increasing fiber dosage at all exposure temperatures considered in this study.
Fiber reinforced concrete (FRC) is a concrete mix containing water, cement, fine and coarse aggregates, and discontinuous fibers. The addition of fibers has been shown to enhance the toughness of concrete. Many kinds of fibers, both metallic and polymeric, have been used in concrete to improve specific engineering properties. A concrete beam containing metallic or synthetic fibers suffers damage through the gradual development of single or multiple cracks with increasing deflection, but retains some degree of structural integrity and post-crack resistance even under considerable deflection (Bentur and Mindess 1990). A similar beam without fibers fails suddenly at a small deflection, separating into two pieces.
Synthetic fibers have attracted attention in recent years for reinforcing cementitious materials. Polypropylene fibers belong to this category and are available in two forms: (a) monofilament and (b) fibrillated. Monofilament fibers are single strands with a uniform cross-sectional area. Fibrillated fibers are manufactured as films or tapes that are slit so that they have a net-like physical structure. Polypropylene fibers are also categorized as micro-synthetic or macro-synthetic (structural) fibers. Micro-synthetic fibers are typically 12 mm long and 18 μm in diameter, whereas macro fibers are significantly larger, with lengths of 40–50 mm and widths of 1.0–1.5 mm. Micro-synthetic fibers are effective in reducing crack formation at an early age of the cast and in severe weather conditions (e.g. in dry climatic zones). In recent years, the usage of macro-synthetic (structural) fibers has increased significantly as a means of reducing conventional steel reinforcement.
Fire ranks high among the potential risks faced by most buildings and structures. The compressive strength of concrete decreases by about a quarter of its room-temperature value within the range of 200–400 °C (Cheng et al. 2004). An assessment of the degree of deterioration of concrete after exposure to high temperatures can help engineers decide whether it can be repaired or must be demolished. There is therefore a need to fully understand the effects of elevated temperatures on concrete. In fiber reinforced concrete (FRC), elevated temperature causes synthetic fibers to melt, thereby increasing the porosity of the concrete. The increased porosity allows vapor pressure to escape and thereby lowers the risk of spalling when the concrete is exposed to elevated temperatures. Moreover, the influence of structural fiber reinforcement on the residual properties of concrete exposed to different temperatures is not well understood. This paper presents the effects of moderate temperature exposure (up to 300 °C) on the compressive stress–strain behavior of concrete reinforced with structural polypropylene fibers. An experimental program was designed and carried out involving compressive load testing of 48 concrete cubes with temperature exposure and fiber dosage as the test variables.
Behavior of FRC
Many kinds of fibers, both metallic and polymeric, are widely used in concrete for their advantages (Soroushian et al. 1992; Song et al. 2005; Alberti et al. 2014). Previous research has shown that no single fiber-reinforced concrete has perfect mechanical properties. A good reinforcing fiber must have several properties that influence the mechanical behavior of fiber-reinforced concrete, such as tensile strength, ductility, a high elastic modulus, and a suitable Poisson's ratio. Fibers must be much stronger than the concrete matrix in tension, since their load-bearing area is much smaller than that of the matrix. The proportion of the load carried by the fibers depends directly on the ratio of the elastic moduli of the fiber and the concrete matrix. If the elastic modulus of the fibers is less than that of the matrix, the fibers will contribute relatively little to the concrete behavior until after cracking.
Recently, many researchers have investigated the mechanical properties of hybrid fiber-reinforced concrete (Mobasher and Li 1996; Alberti et al. 2014). Aslani and Nejadi (2013) explored the properties of self-compacting concrete (SCC) with fiber reinforcement. They developed a test program to characterize the mechanical properties, including compressive and splitting tensile strengths, moduli of elasticity and rupture, the compressive stress–strain curve, and energy dissipation under compression. They investigated four SCC mixes: (i) plain SCC, and SCC reinforced with (ii) steel, (iii) polypropylene, and (iv) hybrid fibers. Experimental and analytical studies were performed to develop a simple and rational mathematical model whose predictions of the mechanical properties were found to be quite comparable with the test data.
Behavior of FRC Under Compression and Flexure
Compressive tests showed that fibers in concrete have only a marginal effect on the compressive strength (Olivito and Zuccarello 2010; Soulioti et al. 2011). However, studies on the compression behavior of synthetic fiber reinforced concrete are very limited. Contradictory results have been reported by different investigators regarding the effects of polypropylene (micro) fibers on the compressive and flexural strengths of concrete. These differences may have been caused by differences in matrix composition, polypropylene fiber type and volume fraction, and manufacturing conditions. Hughes and Fattuhi (1976) reported that compressive strength decreases, but flexural properties improve, with increasing synthetic fiber content. This is plausible because a considerable part of the matrix is replaced with a weaker material. In addition, insufficient compaction due to reduced slump may explain the decline in strength values (Soroushian et al. 1992). Ahmed and Imran (2006) reported that the compressive and tensile strengths of concrete reinforced with polypropylene fibers are not significantly affected if the fiber inclusion is limited to very low volume percentages. At higher fiber dosages, however, the compressive strength was found to be adversely affected.
A few past studies have reported that strength increases with synthetic fiber dosage (Mindess and Vondran 1988; Song et al. 2005; Alberti et al. 2014). The authors observed that the improvements in compressive strength came principally from the fibers interacting with advancing cracks. At increased compression loads, fibrous concrete cylinders develop lateral tension due to Poisson's effect, which initiates and advances cracks. Debonding at the fiber-matrix interface occurs due to the tensile stresses perpendicular to the expected path of the advancing crack. As the advancing crack finally reaches the interface, its tip is blunted by the pre-existing debonding crack. This blunting reduces the crack-tip stress concentration, thus blocking the forward propagation of the crack and even diverting its path. The blunting, blocking, and diverting of cracks allow fibrous concrete cylinders to withstand additional compressive load, thereby increasing their compressive strength over that of non-fibrous control concrete.
Mechanical properties, including the compressive and tensile strength of fiber-reinforced (both steel and synthetic) concrete, have been studied relatively well in the last decades (Barros and Figueiras 1999; Olivito and Zuccarello 2010; Soulioti et al. 2011). Li (2002) found that polypropylene fibers only marginally increase flexural tensile strength. After cracking, however, the fibers were found to greatly increase the ultimate strain, though the load-carrying capacity decreases. The fracture mechanisms and fracture energy of synthetic fiber-reinforced concrete remain a matter of interest: it is in fracture processes that the fibers absorb energy and provide ductility and toughness to the FRC. It is worth mentioning that most previous studies focused on fibrillated or micro fibers and not on the structural (macro) synthetic fibers used in this study. Moreover, information on temperature effects on structural fiber reinforced concrete is very scarce, and this study attempts to improve the understanding in this area.
Temperature Effects on Concrete With and Without Fibers
In recent years, more attention has been paid to the mechanical and residual properties of concrete at high temperature. High temperature causes significant physical and chemical changes, resulting in the deterioration of concrete. Concrete exposed to elevated temperatures undergoes large volume changes due to thermal dilatation, thermal shrinkage and creep related to water loss. These volume changes result in large internal stresses that lead to micro-cracking and fracture. High temperatures also result in water migration, increased dehydration, interfacial thermal incompatibility and chemical decomposition of the hardened cement paste. In general, all these changes reduce the stiffness of concrete and increase irrecoverable deformation. Various investigations confirm the depletion of strength and stiffness of concrete with increasing temperature, exposure time and number of thermal cycles (Poon 2004; Cheng et al. 2004; Noumowe 2005). At approximately 100 °C, weight loss indicates water evaporation from micro-pores. The dehydration of ettringite (3CaO·Al2O3·3CaSO4·32H2O) occurs between 50 and 110 °C. At 200 °C there is further dehydration, which causes a slight weight loss. The weight loss observed at various moisture contents differs until the local pore water and the chemically bound water are gone. Further weight loss is not perceptible at approximately 250–300 °C.
Poon (2004) investigated the effects of elevated temperatures on the compressive strength, stress–strain relationship (stiffness) and energy absorption capacity (toughness) of high strength concretes with different mineral admixtures, such as metakaolin (MK) and silica fume (SF), and with different fiber types (steel or polypropylene) and dosages. The authors reported that after exposure to 600–800 °C, 23–45 % of the compressive strength was retained, depending on the fiber dosage. They also reported that losses in stiffness occurred much more quickly than losses in compressive strength and energy absorption. Polypropylene (PP) fibers reduced the energy absorption capacity of the concretes compared to steel fibers. Cheng et al. (2004) reported that compressive strength decreases by about a quarter of its room-temperature value within the range of 100–400 °C, whereas for fiber reinforced concrete the actual depletion of strength was observed at higher temperature. Noumowe (2005) studied the temperature ranges of the decomposition reactions using thermo-gravimetric analysis and differential scanning calorimetry, and reported that scanning electron microscopy confirmed the increase in porosity of polypropylene fiber reinforced concrete at high temperatures, which may lower the vapor pressure and hence the risk of spalling when the concrete is exposed to elevated temperatures.
Fibers can also improve the residual properties of concrete after exposure to elevated temperatures, which qualitatively indicates the degree of deterioration caused. An assessment of the degree of deterioration of a concrete structure after exposure to high temperatures can help engineers decide whether the structure can be repaired rather than demolished (Xiao and Falkner 2006). Horiguchi (2005) showed experimentally that the addition of polymeric or steel fibers alters the residual compressive strength of concrete. Specimens were heated at a rate of 10 °C/min up to 200 or 400 °C, held for 1 h at high temperature, and then tested at room temperature. The author concluded that hybrid fibers (polypropylene and steel) improve the residual compressive strength of high-strength concrete exposed to temperatures up to 400 °C. Orteu et al. (2007) used DIC to assess the 3D orientation of fibers on ruptured ceramic refractories in order to correlate a micro-mechanical model of fiber pullout with macro behavior under tension. Pain and Lamon (2005) used DIC to determine the elastic moduli and Poisson coefficient of thin silicon based joints. It is worth mentioning that full-field measurements have not yet been applied to understanding the residual stress–strain behavior of synthetic fiber reinforced concrete under compression, which is the focus of this paper.
The literature review indicates that many past studies have investigated the behavior of concrete at high temperatures. However, there are only limited data on the high-temperature properties of PPFRC under compression, in particular its stress–strain behavior. Moreover, there are considerable variations and discrepancies in the reported high-temperature behavior of synthetic fiber reinforced concrete. It is worth mentioning that most previous studies focused on fibrillated or micro fibers and not on the structural synthetic fibers used in this study. Therefore, the present study aims to contribute valuable information on the stress–strain behavior of structural fiber reinforced concrete under compression after exposure to moderate temperatures.
The cementitious materials used in this study are ordinary Portland cement (OPC) grade 53 and fly ash. Crushed granite was used as coarse aggregate with nominal sizes of 12.5 and 20 mm. The specific gravity of the coarse aggregate was 2.63. In the concrete mixtures, the 12.5 and 20 mm coarse aggregates were used in the proportion of 2:3. Natural river sand, with a specific gravity of 2.62, was used as fine aggregate. Structural polypropylene (PP) fibers with a length, width and thickness of 60, 1.68 and 0.60 mm, respectively, were used (Table 1). Figure 1 shows the structural PP fibers used in this study. This monofilament structural polypropylene fiber is commercially known as FibreTuff™. The fibers are made of a modified polyolefin and have a modulus of elasticity of about 10 GPa and a tensile strength between 550 and 640 MPa. The fibers have a continually embossed surface that serves as an anchorage mechanism to enhance bonding.
Table 1 Specifications for structural polypropylene fibers.
Structural polypropylene fibers.
Mix Proportioning
Four mixes were prepared with varying fiber dosage. All mixes had the same cementitious constitution but different fiber dosages (0, 4, 5 and 6 kg/m3). The concrete mix was designed as per IS: 10262 (2009) with a target mean strength of 43 MPa. A water/cement ratio of 0.45 was used. The cement content was fixed at 340 kg/m3 as per IS: 10262 (2009). Fine aggregate was taken as 45 % of the total aggregate volume fraction. The weights of fine and coarse aggregate were then calculated from the specific gravities of the coarse and fine aggregates. Concrete mixtures were produced at a constant water/cement ratio of 0.45; one control mixture and three mixtures with different fiber dosages were prepared. The control mixture contained no fiber. A detailed overview of the mix proportions in kg/m3 is as follows: Water: 192; OPC: 298; Fly ash: 127; River sand: 689; Fine aggregate (10 mm): 519; Coarse aggregate (20 mm): 519.
The concrete mixes were prepared in a tilting-drum mixer. The coarse and fine aggregates were weighed, placed into the pre-moistened concrete mixer and mixed for 3 min with the addition of saturation water. Thereafter, water was added together with the cement (and fly ash) and mixed thoroughly for 3 min. The fibers were added last and mixed for another 3 min. For each mix, a total of 12 cube specimens of 100 × 100 × 100 mm were cast in steel molds. Three samples for each fiber dosage of 0, 4, 5 and 6 kg/m3 were tested after exposure to room temperature (27 °C), 150, 200 and 300 °C, respectively. The specimens, after removal from the steel molds at 1 day, were cured in water at 27 °C until the age of 28 days. Figure 2 shows an image taken while casting the specimens.
Concrete cubes after casting.
The cubes were tested under compression in this study. IS 456 (2000) is typically used as the standard for testing cubes under compression. However, IS 456 (2000) does not cover displacement-controlled testing, and therefore ASTM C39 (2004) was used as a reference for the loading protocol. Cube specimens were tested in uniaxial compression between rigid steel plates on a servo-controlled compression testing machine under displacement control (Fig. 3). After 28 days of curing, the fully saturated specimens were taken out and dried to a saturated surface-dry condition. Three specimens from each of the four mixes were grouped into a batch. Each batch was heated in an electric oven to a peak temperature of 150, 200 or 300 °C at a constant rate of 2 °C/min, and the peak temperature was maintained for approximately 1 h. The temperature loading is shown diagrammatically in Fig. 4. The heated specimens were then cooled to room temperature and prepared for compression testing. The specimens were coated with white paint as a primer, on which a speckle pattern was created in order to obtain strain values using the DIC technique. Displacement and load were recorded through an external data acquisition system (DAQ). DIC measurements were used to calculate the strain corresponding to the applied load (Fig. 3a). The effectiveness of fiber reinforcement is measured in terms of its energy dissipation capacity, also called the toughness index. The compressive toughness index (CTI) is defined as the area under the compressive stress–strain curve, i.e., the energy absorbed prior to complete failure of the specimen, as shown in Fig. 5.
Test setup for compression testing of cubes. a DIC setup for strain measurement. b Compression testing machine. 1. CTM machine. 2. Controller. 3. Camera. 4. Light source. 5. DIC software.
Temperature loading gradient.
Typical stress–strain graph for fiber—reinforced concrete cylinders under compression.
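Numerically, the CTI is simply the area under the measured stress–strain curve (Fig. 5). The following sketch computes it with the trapezoidal rule; the data points are hypothetical and are not measurements from this study.

```python
# Sketch: compressive toughness index (CTI) as the area under a
# stress-strain curve, via the trapezoidal rule. Data are hypothetical.

def toughness_index(strain, stress):
    """Area under the stress-strain curve (trapezoidal rule)."""
    area = 0.0
    for i in range(1, len(strain)):
        area += 0.5 * (stress[i] + stress[i - 1]) * (strain[i] - strain[i - 1])
    return area

# Hypothetical curve: rise to a peak stress, then post-peak softening.
strain = [0.000, 0.001, 0.002, 0.003, 0.004, 0.005]
stress = [0.0, 20.0, 40.0, 35.0, 25.0, 15.0]  # MPa

cti = toughness_index(strain, stress)
print(round(cti, 4))  # -> 0.1275 (MPa, i.e. energy per unit volume)
```

In practice the curve would be the averaged DIC strain versus the load-cell stress, sampled once per grabbed image.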
Digital Image Correlation
Digital image correlation (DIC) is an optical-numerical, full-field surface displacement measurement method. It is based on a comparison between two images of a specimen coated with a random speckle pattern, one in the undeformed and one in the deformed state (Sutton et al. 2009). Its merits include non-contact measurement, a simple optical setup, no special specimen preparation and no special illumination. The basic principle of the DIC method is to search for the maximum correlation between small zones (sub-windows) of the specimen in the undeformed and deformed states, as illustrated in Fig. 6. From a given image-matching rule, the displacement field at different positions in the analysis region can be computed. The simplest image-matching procedure is cross-correlation, which provides the in-plane displacement fields \( u\left( {x,y} \right) \) and \( v\left( {x,y} \right) \) by matching different zones of the two images (see Eq. (1)). The experimental setup of a 3D DIC system is shown in Fig. 7.
$$ C\left( {u,v} \right) = \frac{{\mathop \sum \nolimits_{i = 1}^{m} \mathop \sum \nolimits_{j = 1}^{m} \left[ {f\left( {x_{i} ,y_{j} } \right) - \bar{f}} \right]\left[ {g\left( {x^{\prime}_{i} ,y^{\prime}_{j} } \right) - \bar{g}} \right]}}{{\sqrt {\mathop \sum \nolimits_{i = 1}^{m} \mathop \sum \nolimits_{j = 1}^{m} \left[ {f\left( {x_{i} ,y_{j} } \right) - \bar{f}} \right]^{2} } \sqrt {\mathop \sum \nolimits_{i = 1}^{m} \mathop \sum \nolimits_{j = 1}^{m} \left[ {g\left( {x^{\prime}_{i} ,y^{\prime}_{j} } \right) - \bar{g}} \right]^{2} } }} $$
where C(u, v) is the correlation coefficient, a function of the translations u and v. The grid-point coordinates (x_i, y_j) and (x_i′, y_j′) are related by the translations that occur between the two images. If the deformation is small and perpendicular to the optical axis of the camera, then the relation between (x, y) and (x′, y′) can be approximated by the 2D transformation given in Eqs. (2) and (3).
$$ x^{\prime} = x + u_{0} + \frac{\partial u}{\partial x}dx + \frac{\partial u}{\partial y}dy $$
$$ y^{\prime} = y + v_{0} + \frac{\partial v}{\partial x}dx + \frac{\partial v}{\partial y}dy $$
Schematic diagram of the deformation relation. a Undeformed state, b deformed state.
Schematic of the experimental set up for DIC strain measurement system.
Here, u_0 and v_0 are the translations of the center of the sub-image in the X and Y directions, respectively. The distances from the center of the sub-image to the point (x, y) are denoted by dx and dy. Thus, the correlation coefficient C(u, v) is a function of the displacement components (u, v) and the displacement gradients. In Eq. (1), f(x, y) is the pixel intensity (gray-scale value) at a point (x, y) in the original image and g(x′, y′) is the gray-scale value at the corresponding point in the translated image; \( \bar{f} \) and \( \bar{g} \) are the mean values of the intensity matrices f and g, respectively.
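For one subset pair, the coefficient in Eq. (1) is a zero-normalized cross-correlation. A minimal sketch on toy 3 × 3 gray-scale subsets (all values hypothetical) is:

```python
# Sketch: zero-normalized cross-correlation between two image subsets
# f and g, as in Eq. (1). Toy 3x3 gray-scale values; a real DIC code
# evaluates this over candidate displacements (u, v) and keeps the
# maximizer, refined to sub-pixel accuracy.
import math

def ncc(f, g):
    n = len(f) * len(f[0])
    fbar = sum(sum(row) for row in f) / n
    gbar = sum(sum(row) for row in g) / n
    num = den_f = den_g = 0.0
    for i in range(len(f)):
        for j in range(len(f[0])):
            df = f[i][j] - fbar
            dg = g[i][j] - gbar
            num += df * dg
            den_f += df * df
            den_g += dg * dg
    return num / math.sqrt(den_f * den_g)

f = [[10, 20, 30], [20, 30, 40], [30, 40, 50]]
g = [[12, 22, 32], [22, 32, 42], [32, 42, 52]]  # same pattern, offset intensity
print(round(ncc(f, g), 6))  # -> 1.0 (identical pattern up to an offset)
```

The zero-normalization (subtracting the subset means) is what makes the match insensitive to uniform changes in illumination between the reference and deformed images.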
Optical full-field measurement techniques such as reflection photoelasticity, Moiré interferometry, holographic and speckle interferometry, the grid method and DIC have proven very promising for the experimental stress/strain analysis of materials and structures (Rastogi 2000; Surrel 2004; Grédiac 2004; Robert et al. 2007). All interferometric techniques require stringent system stability and are very sensitive to vibration. The DIC technique, however, is easy to use, involves simple optics, is less sensitive to vibration, requires no heavy surface preparation, is reliable, and can be applied to any class of material. Moreover, it is a truly whole-field, non-contact measurement method. For these reasons, DIC is becoming more popular and is now widely employed for surface displacement and strain measurement in experimental mechanics.
Although interferometric metrologies for deformation measurement provide high sensitivity and a real-time representation of the deformation field through live fringes, they have some inherent limitations. In particular, they are very sensitive to vibration, and in situ measurement can be problematic. Also, the measurement results are often presented as fringe patterns, so additional fringe and phase analysis techniques are required to recover the desired physical quantities. Since the recording process is generally nonlinear, it is difficult to extract partial fringe positions with high accuracy. In DIC, by contrast, only two images are needed: a reference image and an image acquired in the deformed state. By correlating (pattern-matching) these two images, the displacement field is obtained directly. This is an added advantage for in situ measurements, especially during continuous loading. The strain field is then obtained by numerical differentiation of the displacement field, which involves several intermediate steps. When a specimen is loaded, its surface image deforms accordingly. Two images represent different loading stages: the initial reference image and the deformed image. By calculating the transformation parameters for images under different loading conditions, both the displacement vector and the deformation of each facet can be determined. In order to give the specimens unique image patterns for correlation matching, black paint is usually lightly sprayed on the white-painted surfaces of the specimens. Figure 8a, b shows the pre-processed test specimens and a typical speckle pattern, respectively.
a Pre-processed test specimens. b Typical Speckle pattern for DIC strain measurement.
A servo-controlled hydraulic compression testing machine was used in this study. Testing was done in displacement-controlled mode at a slow rate of 0.01 mm/s. An artificial random speckle pattern was generated manually over the specimen surface. The experimental setup comprises a DIC system from Correlated Solutions, Inc. It consists of a Grasshopper CCD camera (Point Grey, Grass-5055M-C) with a spatial resolution of 2448 × 2048 pixels. Two LED light sources are used to obtain adequate image contrast. The camera was mounted on a tripod in front of the specimen (Fig. 3) and connected to a laptop for image acquisition. Ten images per second were grabbed using the VIC-Snap software from Correlated Solutions, Inc. Post-processing of the captured images was carried out using the VIC-2D software from Correlated Solutions, Inc. Both the image acquisition and the load-cell output were synchronized using a National Instruments (NI) data acquisition card.
The specimens were coated with a random speckle pattern to monitor the in-plane surface strains. The load and displacement data were recorded throughout the test. The load on the specimens was continuously increased until it dropped to 30 % of the peak load. Errors in DIC measurement can arise from many sources, such as illumination variations, the quality of the acquisition system, camera lens distortion and image noise, or from the implementation of the correlation algorithm, i.e., the subset size, step size, strain window size, sub-pixel optimization algorithm and sub-pixel intensity interpolation scheme. The effect of some of these parameters was investigated through a sensitivity analysis. The region of interest (ROI) was kept at 119 × 224 pixels, corresponding to 10.5 mm × 19.5 mm on the physical scale. The spatial resolution is 11.35 pixel/mm and the average speckle size is 2.8 pixels. A subset size of 37 × 37 pixels with a step size of 7 pixels was chosen for the DIC post-processing. Once the strain computation is completed, the average value of each strain component from every strain map corresponding to each image grabbed during the test is extracted to generate the complete stress–strain curve.
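The strain maps mentioned above come from numerically differentiating the measured displacement field over a strain window. A minimal central-difference sketch on a synthetic, uniform-strain displacement field (all values hypothetical) is:

```python
# Sketch: recover the normal strain e_xx = du/dx from a displacement
# field u(x, y) by central differences, as DIC post-processing does.
# Synthetic field: u = 0.001 * x, i.e. a uniform strain of 0.001.

def exx(u, dx):
    """Central-difference du/dx on a 2D grid u[row][col]; borders left 0."""
    rows, cols = len(u), len(u[0])
    e = [[0.0] * cols for _ in range(rows)]
    for i in range(rows):
        for j in range(1, cols - 1):
            e[i][j] = (u[i][j + 1] - u[i][j - 1]) / (2.0 * dx)
    return e

dx = 1.0  # grid spacing in pixels (one DIC step)
u = [[0.001 * j for j in range(6)] for _ in range(4)]
strain = exx(u, dx)
print(round(strain[2][3], 6))  # -> 0.001 for the uniform field
```

Commercial DIC packages smooth the displacement field over the strain window before differentiating, which trades spatial resolution for noise robustness; the sketch omits that step.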
Residual Compressive Stress–Strain Behaviour
Compression tests on each mix were carried out between 50 and 55 days after casting. The results are presented in Table 2. The strain values were obtained from DIC and the load values from the load cell of the compression testing machine. The VIC-2D software was used to correlate the images for the strain measurements. Fiber inclusion increased the compressive strength only marginally. The highest compressive strength was obtained for the specimen with the highest fiber dosage (6 kg/m3), whereas the lowest value was obtained for the control specimen. Previous studies by the authors (Rasheed and Prakash 2015) on cellular lightweight concrete indicated that a fiber dosage of about 4–5 kg/m3 is the optimum dosage, beyond which there is not much strength increase. With an increase in temperature, however, the peak compressive strength decreases. The strength decay was not significant for specimens subjected to 150 °C, indicating that the fibers were not affected, as they did not reach their melting temperature of about 160 °C. At the higher temperature (300 °C), strength decay was observed, indicating that there was no contribution from the fibers. A slight color change from grey to pale white occurred in the specimens subjected to 200 and 300 °C, indicating the initiation of chemical reactions within the concrete matrix. These reactions could not be monitored directly, as no thermocouple was placed in the center of the concrete samples; however, the interior of a sample is expected to experience a lower temperature, so part of the fibers could still be intact and contribute to the residual strength.
Table 2 Average peak compressive strength and compressive toughness index (CTI).
Stress–strain plots from the compression tests of specimens exposed to different temperatures are shown in Figs. 9 and 10. Of the three samples tested, the result of the sample that best replicated the average behavior is used for comparing the behavior at different temperatures and fiber dosages. It is worth mentioning that small inconsistencies in the stress–strain behavior between samples were observed, which could be due to the small size (100 mm) of the cubes used in this study. Because of the small specimen size, displacement transducers were not used and only DIC results were used for calculating the strains. Figures 9 and 10 show the pattern of the stress–strain curves for different temperature exposures. With an increase in fiber dosage, there is a slight increase in compressive strength at room temperature (Fig. 9a). The improvement in strength is mainly due to the interaction of the fibers with the advancing internal cracks under compression.
a Stress versus strain at room temperature for different fiber dosages. b Stress versus strain at 150 °C for different fiber dosages.
a Stress versus strain at 200 °C for different fiber dosages. b Stress versus strain at 300 °C for different fiber dosages.
Concrete develops lateral tension at higher compressive loading due to the Poisson effect, which initiates cracks at the micro level. These cracks grow with increasing applied load. As an advancing crack approaches a fiber, debonding at the fiber–matrix interface occurs due to the tensile stresses perpendicular to the expected path of the crack. This blunting process reduces the crack-tip stress concentration, blocking the forward propagation of the crack and even diverting its path. The blunting, blocking and diverting of cracks allows the fiber reinforced concrete to withstand additional compressive load, thus increasing its compressive strength over that of the control specimen without fibers.
The compressive strength of structural fiber reinforced specimens was higher for cubes tested after exposure to high temperature (Fig. 10). With an increase in fiber dosage, there is a slight improvement in the initial and post-peak stiffness at all temperatures (Figs. 9 and 10). This may be attributed to the pseudo-autoclaving that occurs during the heating procedure. The addition of fibers helps to reduce the post-peak degradation in stiffness as the fiber dosage increases. After heating below 300 °C, the fibers have an uncracking effect by allowing the dissipation of fluid over-pressure in the matrix. The fiber reinforced samples have greater stiffness than the non-fibered ones, with an increasing gap up to a temperature of 300 °C. Up to the 300 °C considered in this study, macro-polypropylene reinforced cubes give a pseudo-ductile behavior with improved strength, stiffness and toughness compared to specimens without fibers. Thus, the safety of concrete structures subjected to elevated temperature is increased. By melting at about 200 °C, the polypropylene fibers are considered to create a porosity that limits the pressure due to water evaporation and thus limits cracking. Accordingly, spalling of concrete was not observed for any of the specimens in this study. Moreover, the fiber decomposition temperature is about 360 °C, and the fibers were effective in restraining cracking, with reduced efficiency, up to a temperature of 300 °C. The typical failure modes of cubes with different fiber dosages at room temperature and at 300 °C are shown in Fig. 11. Fiber reinforced specimens exhibited more distributed cracking at room temperature as well as at 300 °C.
Failure modes of cubes at different temperature with fiber dosages.
The effect of fiber dosage at different temperatures on the failure mode is shown in Fig. 12. The effect of temperature exposure is shown in Fig. 12a for specimens with no fibers and in Fig. 12b for specimens with the high fiber dosage of 6 kg/m3. Figure 12a shows that both the strength and the stiffness are reduced for specimens exposed to a temperature of 300 °C. Moreover, the strain corresponding to the peak stress also increased, indicating a loss of stiffness. It can be inferred that the addition of a high fiber dosage (6 kg/m3) helped to recover the stiffness lost at all temperature exposures (Fig. 12b) compared to specimens with no fibers. The residual strength increased due to the addition of fibers at all temperatures.
a Stress versus strain with no fibers for different temperature exposure. b Stress versus strain with 6 kg/m3 fibers for different temperature exposure.
The variation of residual strength after temperature exposure with respect to fiber dosage is presented in Fig. 13a. The efficiency of the fibers in restoring the strength loss is higher at higher temperatures. Almost 25 % of the strength was restored by the 6 kg/m3 fiber dosage at a temperature exposure of 300 °C. The variation of residual strength with respect to temperature for different fiber dosages is shown in Fig. 13b. A higher dosage of fibers is found to restore the strength and stiffness loss at all temperature exposures, although the beneficial effect is more pronounced at higher temperatures. Due to time and resource constraints, the current research was confined to the compressive stress–strain behavior of cube specimens exposed to moderate temperature levels up to 300 °C. Future research should focus on the influence of fiber reinforcement on flexural strength, tensile strength, fresh concrete rheology, shrinkage cracking and residual properties that are more representative of fire accidents. The idea of hybrid reinforcement with carbon and steel fibers at various dosages can also be explored.
a Residual strength versus fiber dosage at different temperature exposure. b Residual strength versus temperature at different fiber dosages.
The structural polypropylene fibers evaluated in the test program show good potential for improving the post-peak residual strength and toughness for temperature exposures up to 300 °C. Although the fibers started melting at about 200 °C, they partially helped in reducing the drop in load resistance soon after cracking and contributed to improving the post-peak behavior at all temperature exposures. The cross sections of the failed specimens were examined, and an even distribution of fibers across the cross section was noted. Based on the limited test results, the following conclusions can be drawn:
There is a marginal increase in the compressive strength at room temperature with an increase in fiber dosage. However, this increase in strength diminishes at higher fiber dosages. The trend holds even for temperature exposures up to 300 °C.
Compressive strength was reduced by up to 22 and 13 % for temperature exposures of 300 and 200 °C, respectively. There was no strength decrease for a temperature exposure of 150 °C. Stiffness was reduced more than strength at temperatures above 200 °C.
The compressive strength and stiffness were recovered with increasing fiber dosage at all temperature exposures. Moreover, the addition of macro fibers was found to reduce the post-peak degradation in stiffness as the fiber dosage increases.
Compressive toughness was calculated as the ratio of the area under the stress–strain curve to the compressive strength. The compressive toughness increased significantly with fiber dosage, both at room temperature and at higher temperatures.
DIC technology was successfully employed for strain measurement of concrete exposed to moderate temperatures. Stress–strain curves were established from the strain measurements for all tested specimens. DIC can be used to study the damage mechanisms of concrete under compression with and without fiber reinforcement.
Ahmed, S., & Imran, A. (2006). A study on properties of polypropylene fiber reinforced concrete. In: Proceedings of the 31st conference on Our World in Concrete & Structures, Singapore.
Alberti, M. G., Enfedaque, A., & Gálvez, J. C. (2014). On the mechanical properties and fracture behavior of polyolefin fiber-reinforced self-compacting concrete. Construction and Building Materials, 55, 274–288.
Aslani, F., & Nejadi, S. (2013). Self-compacting concrete incorporating steel and polypropylene fibers: Compressive and tensile strengths, moduli of elasticity and rupture, compressive stress–strain curve, and energy dissipated under compression. Composites Part B Engineering, 53, 121–133.
ASTM C 39/C39 M-04. (2004). Standard test method for compressive strength of cylindrical concrete specimens. West Conshohocken, PA: Annual Book ASTM Standards.
Barros, J. A., & Figueiras, J. A. (1999). Flexural behavior of SFRC: Testing and modeling. Journal of Materials in Civil Engineering, 11(4), 331–339.
Bentur, A., & Mindess, S. (1990). Fiber reinforced cementitious composites. London, UK: Elsevier.
Cheng, F. P., Kodur, V. K. R., & Wang, T. C. (2004). Stress-strain curves for high strength concrete at elevated temperatures. Journal of Materials in Civil Engineering, 16(1), 84–90.
Grédiac, M. (2004). The use of full-field measurement methods in composite material characterization: Interest and limitations. Composites Part A, 35, 751–761.
Horiguchi, T. (2005). Combination of synthetic and steel fibres reinforcement for fire resistance of high strength concrete. In Proceedings of the Central European Congress on Concrete Engineering, 8–9 September 2005, Graz, pp. 59–64.
Hughes, B. P., & Fattuhi, N. I. (1976). Improving the toughness of high strength cement paste with fiber reinforcement. Composite, 7(4), 185–188.
IS 10262. (2009). Concrete mix proportioning—Guidelines. New Delhi: Bureau of Indian Standards.
IS: 456. (2000). Plain and reinforced concrete-code of practice (fourth revision). New Delhi, India: Bureau of Indian Standards.
Li, V. (2002). Large volume, high-performance applications of fibers in civil engineering. Journal of Applied Polymer Science, 83, 660–686.
Mindess, S., & Vondran, G. (1988). Properties of concrete reinforced with fibrillated polypropylene fibres under impact loading. Cement and Concrete Research, 18(1), 109–115.
Mobasher, B., & Li, C. Y. (1996). Mechanical properties of hybrid cement-based composites. ACI Materials Journal, 93(3), 284–293.
Noumowe, A. (2005). Mechanical properties and microstructure of high strength concrete containing polypropylene fibres exposed to temperatures up to 200 °C. Cement and Concrete Research, 35(11), 2192–2198.
Olivito, R. S., & Zuccarello, F. A. (2010). An experimental study on the tensile strength of steel fiber reinforced concrete. Composites Part B Engineering, 41(3), 246–255.
Orteu, J.-J., Cutard, T., Garcia, D., Cailleux, E., & Robert, L. (2007). Application of stereovision to the mechanical characterisation of ceramic refractories reinforced with metallic fibres. Strain, 43(2), 1–13.
Poon, C. S., Shui, Z. H., & Lam, L. (2004). Compressive behavior of fiber reinforced high-performance concrete subjected to elevated temperatures. Cement and Concrete Research, 34(12), 2215–2222.
Puyo-Pain, M., & Lamon, J. (2005). Determination of elastic moduli and Poisson coefficient of thin silicon-based joints using digital image correlation. In Proceedings of the 29th International Conference on Advanced Ceramics and Composites, Cocoa Beach, FL.
Rasheed, M. A., & Prakash, S. S. (2015). Mechanical behavior of hybrid fiber reinforced cellular lightweight concrete for structural applications of masonry. Construction and Building Materials, 98, 631–640. doi:10.1016/j.conbuildmat.2015.08.137.
Rastogi, P. K. (2000). Photomechanics (Topics in Applied Physics). New York, NY: Springer.
Robert, L., Nazaret, F., Cutard, T., & Orteu, J. J. (2007). Use of 3-D digital image correlation to characterize the mechanical behavior of a fiber reinforced refractory castable. Experimental Mechanics, 47(6), 761–773.
Song, P. S., Hwang, S., & Sheu, B. C. (2005). Strength properties of nylon-and polypropylene-fiber-reinforced concretes. Cement and Concrete Research, 35(8), 1546–1550.
Soroushian, P., Khan, A., & Hsu, J. W. (1992). Mechanical properties of concrete materials reinforced with polypropylene or polyethylene fibers. ACI Materials Journal, 89(6), 535–540.
Soulioti, D. V., Barkoula, N. M., Paipetis, A., & Matikas, T. E. (2011). Effects of fibre geometry and volume fraction on the flexural behaviour of steel-fibre reinforced concrete. Strain, 47(S1), 535–541.
Surrel, Y. (2004). Full-field optical methods for mechanical engineering: Essential concepts to find one's way. In 2nd International Conference on Composites Testing and Model Identification, Bristol, UK.
Sutton, M., Orteu, J. J., & Schreier, H. W. (2009). Image correlation for shape and deformation measurements, basic concepts, theory and applications. New York, NY: Springer.
Xiao, J., & Falkner, H. (2006). On residual strength of high-performance concrete with and without polypropylene fibers at elevated temperatures. Fire Safety Journal, 41, 115–121.
Department of Civil Engineering, Indian Institute of Technology, Hyderabad, India
G. Srikar, G. Anand & S. Suriya Prakash
Correspondence to S. Suriya Prakash.
Srikar, G., Anand, G. & Suriya Prakash, S. A Study on Residual Compression Behavior of Structural Fiber Reinforced Concrete Exposed to Moderate Temperature Using Digital Image Correlation. Int J Concr Struct Mater 10, 75–85 (2016). https://doi.org/10.1007/s40069-016-0127-x
Received: 05 March 2015
Issue Date: March 2016
Keywords: structural polypropylene fibers; fiber dosage; digital image correlation (DIC); residual compressive strength; post-peak behavior
A set $Q$ is well-quasi-ordered by a relation $\le$ if for every infinite sequence $q_1,q_2,\ldots$ of elements of $Q$ there exist $i<j$ such that $q_i \le q_j$. In their Graph Minors series, Robertson and Seymour proved that graphs are well-quasi-ordered by the minor relation. This result, known as the Graph Minor Theorem, is considered one of the deepest results in graph theory and has several algorithmic applications.
Unfortunately, the same is not true for directed graphs under the butterfly minor relation. In particular, one can easily identify two infinite antichains. It is then natural to ask whether classes of directed graphs that exclude these antichains are well-quasi-ordered by the butterfly minor relation. We prove that this is the case, and at the same time we provide a structure theorem for these graph classes.
This is joint work with Maria Chudnovsky, Sang-il Oum, Paul Seymour and Paul Wollan.
\begin{document}
\title{Collision-avoiding in the singular Cucker-Smale model with nonlinear velocity couplings.} \author{Ioannis Markou} \date{March 13, 2018} \maketitle
\begin{abstract} Collision avoidance is an interesting feature of the Cucker-Smale (CS) model of flocking that has been studied in many works, e.g. \cite{AhChHaLe, AgIlRi, CaChMuPe, CuDo1, CuDo2, MuPe, Pe1, Pe2}. In particular, in the case of singular interactions between agents, as in the CS model with communication weights of the type $\psi(s)=s^{-\alpha}$ for $\alpha \geq 1$, it is important for establishing global well-posedness of the underlying particle dynamics. In \cite{CaChMuPe}, a proof of the non-collision property for singular interactions is given for the linear CS model, i.e., when the velocity coupling between agents $i,j$ is $v_{j}-v_{i}$. This paper can be seen as an extension of the analysis in \cite{CaChMuPe}. We show that particles avoid collisions even when the linear coupling in the CS system is replaced by the nonlinear term $\Gamma(\cdot)$ introduced in \cite{HaHaKi}
(typical examples being $\Gamma(v)=v|v|^{2(\gamma -1)}$ for $\gamma \in (\frac{1}{2},\frac{3}{2})$), and we prove that no collisions can happen in finite time when $\alpha \geq 1$. We also show uniform estimates for the minimum inter-particle distance, for a communication weight with expanded singularity $\psi_{\delta}(s)=(s-\delta)^{-\alpha}$, when $\alpha \geq 2\gamma$, $\delta \geq 0$.
\end{abstract}
\textbf{Keywords}: Nonlinear Cucker-Smale system, emergent behavior, collision avoidance, singular communication weight.
\textbf{2010 MR Subject Classification}: 82C22, 92D50.
\section{Introduction.}
A Cucker-Smale (CS) type model deals with an interacting system of $N$ autonomous, self-driven particles (agents). The main model postulate is that the agents adjust their velocities by taking a weighted average of their relative velocities with respect to all other agents. If we let $(x_{i},v_{i})\in \mathbb{R}^d \times \mathbb{R}^d$ be the phase-space position of the $i$-th particle for $1 \leq i \leq N$, where $d\geq 1$ is the physical dimension, the dynamics of the particle motion is governed by the system:
\begin{align} \label{CS} \left\{ \begin{array}{ll} \frac{d}{dt}x_{i}(t)&=v_{i}(t),\quad i=1, \ldots, N, \quad t>0 , \\ \\ \frac{d}{dt}v_{i}(t)&=\frac{1}{N}\sum
\limits_{j} \psi(|x_{i}-x_{j}|)(v_{j}-v_{i}) , \end{array} \right. \end{align} given some initial data $(x_{i}(0),v_{i}(0))=(x_{i0},v_{i0})$, $i=1,\ldots,N$. Throughout this paper, the symbol $\sum \limits_{i}$ is used as an abbreviation of $\sum \limits_{1 \leq i \leq N}$. The function $\psi(r)$, $r\geq 0$, quantifies the interaction between two agents and is referred to as the communication weight of the interaction. It is positive, nonincreasing, and vanishes as $r \to \infty$. We observe that in our model, $\psi(\cdot)$ depends on the metric distance between two agents. The main question that arises in the study of \eqref{CS} is whether the system emerges to a \textit{flock}, i.e., whether all the particle velocities align asymptotically in time and the agents stay connected forever. The prototype example in the CS model is $\psi(r)=(1+r^{2})^{-\beta}$, for $\beta \geq 0$. In \cite{CuSm1,CuSm2,HaLiu} it was shown that flocking is guaranteed if $\beta \leq \frac{1}{2}$, whereas if $\beta > \frac{1}{2}$ the system might converge to a flock only under certain conditions on the initial positions and velocities. The phase transition that happens at $\beta=\frac{1}{2}$ is typical of the system \eqref{CS} and supports the more general result that when the weight $\psi(\cdot)$ has a non-integrable tail (i.e. $\int^{\infty}\psi(s)ds=\infty$), flocking occurs regardless of the initial configuration of agents.
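To make the dynamics concrete, the following minimal numerical sketch of system \eqref{CS} (our own illustration, not part of the cited works) integrates the particle system with forward Euler and the prototype weight $\psi(r)=(1+r^{2})^{-\beta}$ for $\beta=1/4$, and checks that the velocity spread contracts. All function names and parameter values are our choices.

```python
import numpy as np

def cs_step(x, v, psi, dt):
    """One forward-Euler step of the linear CS system: dv_i = (1/N) sum_j psi(|x_i-x_j|)(v_j-v_i)."""
    N = len(x)
    dv = np.zeros_like(v)
    for i in range(N):
        for j in range(N):
            if i != j:
                dv[i] += psi(np.linalg.norm(x[i] - x[j])) * (v[j] - v[i])
    return x + dt * v, v + dt * dv / N

def velocity_spread(v):
    """Diameter of the velocity configuration, sup_{i,j} |v_i - v_j|."""
    return max(np.linalg.norm(vi - vj) for vi in v for vj in v)

psi = lambda r: (1.0 + r**2) ** (-0.25)   # heavy-tailed prototype weight (beta = 1/4 <= 1/2)
rng = np.random.default_rng(0)
x, v = rng.normal(size=(5, 2)), rng.normal(size=(5, 2))
spread0 = velocity_spread(v)
for _ in range(2000):                      # integrate up to time T = 20
    x, v = cs_step(x, v, psi, dt=0.01)
spread1 = velocity_spread(v)
```

Since $\beta\le\frac12$ here, the weight has a non-integrable tail and the velocity diameter should shrink regardless of the (random) initial data.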
After its introduction in \cite{CuSm1,CuSm2} (based on an earlier idea from \cite{ViCzBJco}), research on the CS model took several different routes. The original flocking results were simplified and improved in \cite{HaLiu,HaTad}. The CS system was studied in the presence of Rayleigh friction forces in \cite{HaHaKim}, as well as other repulsion/alignment/turning forces \cite{AgIlRi}. The effect of a flock leader on emergent behavior was considered in \cite{Sh}. The model was also studied with extra random noise terms in \cite{CuMo,HaLeLe,ToLiYa}. In \cite{MoTad1}, S. Motsch and E. Tadmor proposed a model that resolves some of the drawbacks of the CS system by normalizing the communication weights, and established flocking conditions. The CS system was also studied with delay terms in \cite{ChHa,ErHaSu}.
An interesting variation of system \eqref{CS} was proposed in \cite{HaHaKi}; it describes the particle system where the linear coupling term $v_{j}-v_{i}$ is substituted by a nonlinear term $\Gamma(v_{j}-v_{i})$, with $\Gamma:\mathbb{R}^d \to \mathbb{R}^d$, i.e. \begin{align} \label{NL-CS}\left\{ \begin{array}{ll} \frac{d}{dt}x_{i}(t)&=v_{i}(t),\quad i=1, \ldots, N, \quad t>0 , \\ \\ \frac{d}{dt}v_{i}(t)&=\frac{1}{N}\sum
\limits_{j} \psi(|x_{i}-x_{j}|)\Gamma(v_{j}-v_{i}) . \end{array} \right. \end{align} The justification given in \cite{HaHaKi} for this nonlinear version of \eqref{CS} lies in the fact that there seems to be no underlying physical principle that requires alignment models to be linear, other than modeling convenience. It is therefore of paramount importance to know that model \eqref{CS} is robust under small variations in all parameters, including the velocity couplings. We refer to system \eqref{NL-CS} from now on as NL CS (nonlinear CS). The continuous coupling vector $\Gamma(v_{j}-v_{i})$ that appears in \eqref{NL-CS} has the following properties: \begin{itemize} \item (A1) (skew symmetry) $\Gamma(-v)=-\Gamma(v)$ for $v \in \mathbb{R}^d$. \item (A2) (coercivity) There exist some $C_{1}>0$ and
$\gamma \in (\frac{1}{2},\frac{3}{2})$ such that $\langle \Gamma(v), v\rangle \geq C_{1} |v|^{2 \gamma}$. Here by $\langle \cdot , \cdot \rangle$ we denote the inner product in $\mathbb{R}^d$ and by
$|\cdot|$ its induced norm. \end{itemize} System \eqref{NL-CS} exhibits phase transition properties similar to those of \eqref{CS}. For a communication weight with a non-integrable tail and under assumptions $(A1)$-$(A2)$, it can be shown that flocking occurs for $\gamma \in (\frac{1}{2},\frac{3}{2})$ with a rate that depends explicitly on the value of $\gamma$. In more detail, when $\gamma \in (\frac{1}{2},1)$ flocking occurs in finite time $T^{*}<\infty$, algebraically fast. If $\gamma \in (1,\frac{3}{2})$ we have emergence of a flock in infinite time $T^{*}=\infty$, with an algebraic decay rate. Finally, the case $\gamma=1$ reduces to the linear case with $T^{*}=\infty$ and flocking that happens at an exponential decay rate.
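As a quick sanity check of assumptions $(A1)$-$(A2)$ for the prototype coupling $\Gamma(v)=v|v|^{2(\gamma -1)}$, the following sketch (ours; the helper name \texttt{gamma\_map} is not from the cited papers) verifies skew symmetry and that the coercivity bound holds with $C_{1}=1$, in fact with equality, for several values of $\gamma$ in $(\frac{1}{2},\frac{3}{2})$.

```python
import numpy as np

def gamma_map(w, g):
    """Prototype coupling Gamma(w) = w |w|^{2(g-1)} for g in (1/2, 3/2)."""
    n = np.linalg.norm(w)
    return w * n ** (2.0 * (g - 1.0)) if n > 0 else np.zeros_like(w)

rng = np.random.default_rng(0)
checks_A1, checks_A2 = [], []
for g in [0.75, 1.0, 1.25]:
    for _ in range(100):
        w = rng.normal(size=3)
        # (A1) skew symmetry: Gamma(-w) = -Gamma(w)
        checks_A1.append(np.allclose(gamma_map(-w, g), -gamma_map(w, g)))
        # (A2) coercivity with C1 = 1: <Gamma(w), w> = |w|^2 |w|^{2g-2} = |w|^{2g}
        checks_A2.append(np.isclose(np.dot(gamma_map(w, g), w),
                                    np.linalg.norm(w) ** (2.0 * g)))
```

For $\gamma=1$ the map reduces to the identity, recovering the linear coupling of \eqref{CS}.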
A question that requires special investigation is the presence of collisions between agents. The CS systems \eqref{CS} and \eqref{NL-CS} with the weights we just mentioned do not exclude the possibility of collisions. For obvious reasons, the modeling of alignment in animal flocks, aerial vehicles, unmanned drones, etc. should in many cases incorporate a mechanism for avoiding collisions. The way to design such systems is to introduce an interaction that becomes singular when two particles collide. This interaction might have the form of an extra repulsive forcing term, as in \cite{AhChHaLe, AgIlRi, CuDo1, CuDo2}, or it might simply be a communication weight that is singular at the origin, e.g. \cite{CaChMuPe, MuPe, Pe1, Pe2}. In this work our focus is on the latter scenario.
The purpose of this article is to study the problem of collision avoidance for the NL CS model \eqref{NL-CS}. We prove that in all the cases where flocking is possible, collisions are absent in finite time for any interaction of the type $\psi(s)=s^{-\alpha}$, with $\alpha \geq 1$. This result is in complete agreement with the linear case treated in \cite{CaChMuPe}. Our approach shows that the methodology used in \cite{CaChMuPe} is not specific to the linear case but can easily be adapted to a nonlinear scenario. It also serves as a further indication of the robustness of the choice of linear couplings in the classical CS model.
Furthermore, we derive uniform estimates for the general case of a weight with expanded singularity $\psi(s)=(s-\delta)^{-\alpha}$, $\delta \geq 0$. We use distance functions of the type
$\mathcal{L}^{\beta}(t)=\frac{1}{N(N-1)}\sum \limits_{i\neq j}(|x_{i}(t)-x_{j}(t)|-\delta)^{-\beta}$ for some appropriately chosen $\beta=\beta(\alpha, \gamma) >0$, and prove that $\mathcal{L}^{\beta}(t)\leq O(T)$, $\forall t \in[0,T]$, when
$\alpha \geq 2 \gamma$. This gives an estimate for the minimum inter-particle distance of the form $\inf \limits_{i \neq j}|x_{i}(t)-x_{j}(t)|>O((T N^2)^{-\frac{1}{\beta}})$, which is enough to conclude the well-posedness of the dynamics for a fixed number $N$ of agents (given that $\Gamma(\cdot)$ is also Lipschitz). Unfortunately, this estimate degenerates as $N\to \infty$ and leaves the question of the passage to the mean field equation (for singular communication weights) still open, see e.g. \cite{HaJa,Ja}.
The rest of this paper is structured as follows. In Section 2 we briefly review the theory for problem \eqref{NL-CS} and the flocking result presented in \cite{HaHaKi}. At the end of Section 2 we present the main result of this paper, whose proof is given in Section 3. Finally, in Section 4 we state and prove the uniform estimates in the case of a communication weight $\psi(s)=(s-\delta)^{-\alpha}$, for $\alpha \geq 2\gamma$ and $\delta \geq 0$.
\section{Preliminaries and Main result.}
In what follows, we denote by $x(t)$, $v(t)$ the position and velocity of the whole $N$-particle system, i.e. $x(t):=(x_{1},\ldots,x_{N})$ and $v(t):=(v_{1},\ldots,v_{N})$. We say that $(x(t),v(t))$ is a solution to the NL CS system if $x(t)$, $v(t)$ solve system \eqref{NL-CS} at time $t$. In the same spirit, the notation $(x_{0},v_{0}):=(x_{10},\ldots,x_{N0},v_{10},\ldots,v_{N0})$ represents the vector of initial data. We now give a formal definition of flocking for a particle system $(x(t),v(t))$. \begin{definition}[Asymptotic flocking.] A given particle system $(x(t),v(t))$ is said to converge to a flock iff the following two conditions hold,
\begin{align} \label{flock} \sup_{t>0} \sup_{i,j}|x_{i}(t)-x_{j}(t)|<\infty , \qquad
\lim \limits_{t\to \infty}\sup_{i,j}|v_{i}(t)-v_{j}(t)| =0 .\end{align} \end{definition} We need to keep in mind that the definition we just gave is independent of the configuration of initial velocities and positions $(x_{0},v_{0})$. This definition corresponds to the so called \textit{unconditional} flocking scenario. If flocking holds for a certain class of initial configurations then we speak of \textit{conditional} flocking.
Now that we have stated the definition of flocking, we proceed with the invariants of the particle dynamics for system \eqref{NL-CS}. For this, we define the first three moments of the particle motion, \begin{equation} \label{mom-def} m_{0}(t):=\sum \limits_{i}1, \quad m_{1}(t):=\sum \limits_{i} v_{i}, \quad m_{2}(t):=\sum
\limits_{i}|v_{i}|^2 .\end{equation} The following lemma shows how these moments propagate in time.
\begin{lemma}[propagation of moments (see \cite{HaHaKi})] Assume that the conditions $(A1)$-$(A2)$ hold. Suppose also that $(x(t),v(t))$ is a solution to the NL CS system. Then, the three velocity moments satisfy \begin{equation}\label{mom-eq} \frac{d}{dt}m_{0}(t)=\frac{d}{dt}m_{1}(t)=0, \qquad \frac{d}{dt}m_{2}(t)\leq -\frac{C_{1}}{N}\sum
\limits_{i,j}\psi(|x_{i}-x_{j}|)|v_{i}-v_{j}|^{2\gamma} . \end{equation} \end{lemma}
\begin{proof} We give the short proof for completeness. The first equation is trivial since $m_{0}(t)=N$. The equation for $m_{1}(t)$ follows from the symmetry of $\psi(\cdot)$ and $(A1)$, \begin{align*}\dot{m}_{1}(t)&=\sum \limits_{i}\dot{v}_{i}(t)=\frac{1}{N}\sum
\limits_{i,j}\psi(|x_{i}-x_{j}|)\Gamma(v_{j}-v_{i})\stackrel{i \leftrightarrow j}{=} \frac{1}{N}\sum
\limits_{i,j}\psi(|x_{i}-x_{j}|)\Gamma(v_{i}-v_{j})\\ &\stackrel{(A1)}{=}-\frac{1}{N}\sum
\limits_{i,j}\psi(|x_{i}-x_{j}|)\Gamma(v_{j}-v_{i})=0, \end{align*} where $\sum \limits_{i,j}=\sum \limits_{i} \sum \limits_{j}$. For the second moment $m_{2}(t)$ we have
\begin{align*} \dot{m}_{2}(t)&=\frac{d}{dt}\sum \limits_{i}|v_{i}(t)|^2 =2\sum \limits_{i}\langle \dot{v}_{i},v_{i}\rangle =\frac{2}{N}\sum
\limits_{i,j}\psi(|x_{i}-x_{j}|)\langle
\Gamma(v_{j}-v_{i}),v_{i}\rangle \\& \stackrel{i \leftrightarrow j}{=}\frac{2}{N}\sum \limits_{i,j}\psi(|x_{i}-x_{j}|)\langle \Gamma(v_{i}-v_{j}),v_{j}\rangle \stackrel{(A1)}{=}- \frac{2}{N}\sum
\limits_{i,j}\psi(|x_{i}-x_{j}|)\langle \Gamma(v_{j}-v_{i}),v_{j}\rangle\\&=-\frac{1}{N}\sum
\limits_{i,j}\psi(|x_{i}-x_{j}|)\langle \Gamma(v_{j}-v_{i}),v_{j}-v_{i}\rangle \stackrel{(A2)}{\leq} -\frac{C_{1}}{N}\sum
\limits_{i,j}\psi(|x_{i}-x_{j}|)|v_{i}-v_{j}|^{2\gamma}. \end{align*} \end{proof} A direct consequence of the invariance of the first moment is that the bulk velocity $v_{c}(t):=\frac{1}{N} \sum \limits_{i}v_{i}$ remains constant in time, i.e., $v_{c}(t)=v_{c}(0)$. For the mean position vector $x_{c}(t):=\frac{1}{N} \sum \limits_{i}x_{i}$, we easily see that $x_{c}(t)=v_{c}(0)t+x_{c}(0)$. Based on these observations we can define the standard deviation of positions and velocities for a group of $N$ agents by \begin{align*} \sigma_{x}(t):=\sqrt{\frac{1}{N}\sum
\limits_{i}|x_{i}(t)-x_{c}(t)|^2}, \qquad \sigma_{v}(t):=\sqrt{\frac{1}{N}\sum
\limits_{i}|v_{i}(t)-v_{c}(t)|^2}, \end{align*} and use them to study the flocking behavior in \eqref{NL-CS}. Indeed, the flocking conditions in definition \eqref{flock} are equivalent to showing $\sup \limits_{t>0} \sigma_{x}(t)<\infty$ and $\lim \limits_{t \to \infty}\sigma_{v}(t)=0$. The main flocking result for the NL CS system was given in \cite{HaHaKi}. \begin{proposition} Assume that conditions $(A1)$-$(A2)$ hold and that $(x(t),v(t))$ is a smooth solution to the NL CS system \eqref{NL-CS} with the following constraint on initial configurations \begin{equation} \label{flock-cond} \sigma_{v}^{3-2\gamma}(0) \leq C_{2} (3-2\gamma) \int_{\sigma_{x}(0)}^{\infty}\psi(2\sqrt{N}s)\, ds , \end{equation} for some $C_{2}=C_{2}(\gamma, C_{1}, N)>0$. Then, for $\gamma \in (\frac{1}{2},\frac{3}{2})$ the particle system emerges to a flock. \end{proposition}
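The conservation of $m_{1}$ and the dissipation of $m_{2}$ in Lemma 1 can also be observed numerically. The sketch below is our own illustration (forward Euler with the prototype coupling $\Gamma(v)=v|v|^{2(\gamma-1)}$, $\gamma=1.25$, and a smooth weight); it checks that the momentum drift stays at the level of round-off while the second moment decreases.

```python
import numpy as np

def gamma_map(w, g=1.25):
    """Prototype coupling Gamma(w) = w |w|^{2(g-1)} (an assumption of this sketch)."""
    n = np.linalg.norm(w)
    return w * n ** (2 * (g - 1)) if n > 0 else np.zeros_like(w)

def evolve_moments(steps=500, dt=0.01, seed=3):
    """Return the drift of m1 and the change of m2 after `steps` Euler steps of NL CS."""
    rng = np.random.default_rng(seed)
    N = 5
    x, v = rng.normal(size=(N, 2)), rng.normal(size=(N, 2))
    psi = lambda r: (1.0 + r**2) ** (-0.25)
    m1_0, m2_0 = v.sum(axis=0).copy(), (v**2).sum()
    for _ in range(steps):
        dv = np.zeros_like(v)
        for i in range(N):
            for j in range(N):
                if i != j:
                    dv[i] += psi(np.linalg.norm(x[i] - x[j])) * gamma_map(v[j] - v[i])
        x, v = x + dt * v, v + dt * dv / N
    return np.linalg.norm(v.sum(axis=0) - m1_0), (v**2).sum() - m2_0

drift_m1, change_m2 = evolve_moments()
```

The exact cancellation behind $\dot{m}_{1}=0$ is the pairwise antisymmetry $\psi_{ij}\Gamma(v_{j}-v_{i})=-\psi_{ji}\Gamma(v_{i}-v_{j})$, which the discretization inherits up to floating-point error.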
The proof of Proposition 1 is straightforward and can be sketched in a few lines. First, one easily shows that the dissipation inequality for $\sigma_{v}(t)$ is \begin{equation} \label{dis-sigma}\frac{d}{dt}\sigma_{v}(t) \leq -C_{2}\psi(2\sqrt{N}\sigma_{x}(t)) \sigma_{v}^{2\gamma -1} (t).\end{equation} Inequality \eqref{dis-sigma} would be enough to ensure the convergence $\sigma_{v}(t)\to 0$, as long as we had a uniform bound on $\sigma_{x}(t)$ (some $\sigma_{x}^{\infty}<\infty$ such that $\sup \limits_{t>0} \sigma_{x}(t)\leq \sigma_{x}^{\infty}$). For this, we may use the Lyapunov functionals \begin{equation*} \mathcal{E}^{\pm}(t):=\frac{\sigma_{v}^{3-2\gamma}(t)}{3-2\gamma} \pm C_{2}\int_{0}^{\sigma_{x}(t)}\psi(2\sqrt{N}s)\, ds,\end{equation*} and show (with the help of \eqref{dis-sigma}) that $\mathcal{E}^{\pm}(t)$ are dissipative ($\mathcal{E}^{\pm}(t)\leq \mathcal{E}^{\pm}(0)$ for $t\geq 0$). Finally, using condition \eqref{flock-cond} and the dissipation of $\mathcal{E}^{\pm}(t)$, it can be shown that $\sup \limits_{t>0} \sigma_{x}(t)\leq \sigma_{x}^{\infty}<\infty$, which concludes the proof.
\begin{remark} It might appear that the flocking condition \eqref{flock-cond} in Proposition 1 is an unnecessary restriction, but it is consistent with the phase transition character of the classical CS model \eqref{CS}. This condition is satisfied trivially in the case of long-range interactions with $\int^{\infty}\psi(s)\, ds = \infty$, giving unconditional flocking when the communication weight $\psi(\cdot)$ has a heavy tail. When the interaction between agents has a short range, the emergence of a flock is only conditionally possible. \end{remark}
We note that in the statement of Proposition 1, flocking is proven for smooth solutions to system \eqref{NL-CS}. The well-posedness of system \eqref{NL-CS} is of course a separate problem and everything depends on the regularity of the interaction $\psi(\cdot)$ and the coupling $\Gamma(\cdot)$. Naturally, local well-posedness can be proven for more singular interactions between agents and velocity couplings (see e.g. \cite{CaChHa}). For global results, the Lipschitz property for both $\Gamma(\cdot)$ and $\psi(\cdot)$ is necessary. The Lipschitz condition on $\Gamma(\cdot)$ for $\gamma \in [1,\frac{3}{2})$ is a natural one, since the prototype example is
$\Gamma(v)=v|v|^{2(\gamma -1)}$, and hence this assumption is made in our main result. On the other hand, for $\gamma \in (\frac{1}{2},1)$ we may have non-uniqueness even for regular communication weights. Our result gives a definite answer to the existence of smooth solutions for $\gamma \in [1,\frac{3}{2})$ when $\psi(s)=s^{-\alpha}$, $\alpha \geq 1$. For $\gamma \in (\frac{1}{2},1)$ uniqueness is possible depending on the choice of initial data and this problem remains open.
We now state the main result, which we prove in the next section. \begin{theorem} Consider the NL CS system \eqref{NL-CS} with $\gamma \in (\frac{1}{2},\frac{3}{2})$ and initial data $(x_{0},v_{0})$ that satisfy \begin{equation*} x_{i0}\neq x_{j0}\qquad \text{for} \quad i \neq j .\end{equation*} We consider the communication weight $\psi(s)=s^{-\alpha}$, with $\alpha \geq 1$. Furthermore, if $\gamma \in [1,\frac{3}{2})$ we assume that $\Gamma(\cdot)$ is Lipschitz continuous. Then for any solution of the NL CS system the particle trajectories remain non-collisional for $t>0$. \end{theorem}
The following easy lemma will prove helpful. \begin{lemma} For every $p>0$ and $q>0$ there exists a constant
$C_{pq}:=C(p,q)>0$ such that \begin{equation*}|a^{-p}-b^{-p}|\geq C_{pq}|a^q -b^q| \quad \text{for} \quad 0<a,b<1 .\end{equation*} \end{lemma}
\begin{proof} This is an exercise in calculus. For $a=b$ it holds trivially. If $a \neq b$, we set $x=a^q$, $y=b^q$ and we consider the function $f(x,y)=\frac{y^{-p/q}-x^{-p/q}}{x-y}$ on the triangle $0<y<x<1$. We can show that the function has a positive lower bound $C_{pq}>0$ for any pair $p,q >0$. \end{proof}
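Lemma 2 can be probed numerically. The following sketch (ours; the function name is illustrative) estimates the infimum of the ratio $|a^{-p}-b^{-p}|/|a^{q}-b^{q}|$ on random samples in $(0,1)$ and confirms it stays bounded away from zero; for $p=q=1$ the ratio equals $1/(ab)>1$.

```python
import numpy as np

def min_ratio(p, q, samples=2000, seed=2):
    """Empirical lower bound for |a^-p - b^-p| / |a^q - b^q| over 0 < a, b < 1."""
    rng = np.random.default_rng(seed)
    a = rng.uniform(0.01, 0.99, samples)
    b = rng.uniform(0.01, 0.99, samples)
    mask = np.abs(a - b) > 1e-6          # avoid the removable singularity a = b
    ratios = np.abs(a[mask] ** (-p) - b[mask] ** (-p)) / np.abs(a[mask] ** q - b[mask] ** q)
    return ratios.min()
```

Such sampling does not replace the proof, but it illustrates why the constant $C_{pq}$ in the lemma can be taken uniformly over the open triangle.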
\section{Collision avoidance for singular interactions.}
\begin{proof} In its first part, the proof follows closely the steps in \cite{CaChMuPe}. We assume that at some finite time $t_{C}>0$ the first collision between a group of particles occurs. Then, based on this assumption and on estimates that we derive for the dynamics of the colliding group, we reach a contradiction. We denote the group of particles that collide at time
$t_{C}$ by $C$, and their number by $|C|$, i.e.
\begin{align*} &|x_{i}(t)-x_{j}(t)| \to 0 \quad \text{as}\quad t \nearrow t_{C} \quad \text{for} \quad (i,j) \in C^2:=C \times C
\\ & |x_{i}(t)-x_{j}(t)|\geq \delta >0 \quad \text{for some} \quad \delta >0, \quad (i,j)\not \in C^2, \quad t \in [0,t_{C}]. \end{align*}
We define the position and velocity fluctuation for the particles in the collisional group by
\begin{equation*} \|x\|_{C}(t):=\sqrt{\sum \limits_{(i,j)
\in C^2}|x_{i}(t)-x_{j}(t)|^2} \quad \text{and} \quad
\|v\|_{C}(t):=\sqrt{\sum \limits_{(i,j) \in C^2}|v_{i}(t)-v_{j}(t)|^2} .\end{equation*} Here $\sum
\limits_{(i,j)\in C^2}$ is the sum over all pairs $(i,j)$ where both indices are members of group $C$. According to the definition we just gave, we have that $\|x\|_{C}(t)\to 0$ as $t \nearrow t_{C}$. We also have the following uniform bounds for $\|x\|_{C}(t)$ and
$\|v\|_{C}(t)$ as a result of the particle dynamics. There exist
$M>0$ and $R=R(t_{C})>0$, such that for all $t \in [0,t_{C}]$ we have \begin{equation*} \|v\|_{C}(t)\leq M := \sqrt{2}|C| \, \sup
\limits_{i}|v_{i0}|, \quad \|x\|_{C}(t)\leq R := \sqrt{2}|C| \,(\sup
\limits_{i}|x_{i0}| +\sup \limits_{i}|v_{i0}|t_{C}).\end{equation*}
It is easy to show using the definition of $\|x\|_{C}(t)$ that
\begin{equation} \label{EqMot} \left| \frac{d}{dt}
\|x\|_{C}(t)\right| \leq \|v\|_{C}(t) . \end{equation} Our plan is to show a sharp inequality for the dissipation of $\|v\|_{C}(t)$ in the spirit of \cite{CaChMuPe}. In more detail, we show that: \begin{itemize} \item If $\frac{1}{2}<\gamma <1$,
\begin{equation} \label{In1} \frac{d}{dt}\|v\|^{2}_{C}(t)
\leq -2c_{0}\psi(\|x\|_{C}(t))\|v\|_{C}^{2 \gamma}(t)
+2c_{1}\|x\|_{C}(t)\|v\|_{C}(t) +2c_{2}\|v\|_{C}(t) .\end{equation} \item If $1\leq \gamma <\frac{3}{2}$,
\begin{equation} \label{In2} \frac{d}{dt}\|v\|^{2}_{C}(t)
\leq -2c_{0}\psi(\|x\|_{C}(t))\|v\|_{C}^{2 \gamma}(t)
+2c_{1}\|x\|_{C}(t)\|v\|_{C}(t) +2c_{2}\|v\|^{2}_{C}(t) .\end{equation} \end{itemize}
For the derivation of \eqref{In1}-\eqref{In2} we compute the time evolution of $\|v\|_{C}(t)$, i.e.
\begin{align*} &\frac{d}{dt}\|v\|_{C}^2 =2\sum \limits_{(i,j)\in C^2} \left \langle v_{i}-v_{j},\frac{1}{N}\sum
\limits_{k}\psi(|x_{k}-x_{i}|)\Gamma(v_{k}-v_{i})-\frac{1}{N} \sum
\limits_{k}\psi(|x_{k}-x_{j}|)\Gamma(v_{k}-v_{j})\right \rangle \\
\\ &=\frac{2}{N}\left( \sum_{\substack{(i,j)\in C^2 \\ k \in C}} +\sum_{\substack{(i,j)\in C^2 \\ k \not \in C}} \right) \left( \psi(|x_{k}-x_{i}|)\left \langle v_{i}-v_{j},\Gamma(v_{k}-v_{i}) \right \rangle -
\psi(|x_{k}-x_{j}|)\left \langle v_{i}-v_{j},\Gamma(v_{k}-v_{j})\right \rangle \right) \\&=:J_{1}+J_{2} . \end{align*} The computation for the first term $J_{1}$ gives \begin{align*} J_{1}&=\frac{2}{N} \sum_{\substack{(i,j)\in C^2 \\ k \in C}}
\left( \psi(|x_{k}-x_{i}|)\langle v_{i}-v_{j}, \Gamma(v_{k}-v_{i})
\rangle - \psi(|x_{k}-x_{j}|)\langle v_{i}-v_{j}, \Gamma(v_{k}-v_{j})\rangle \right) \\ &\stackrel{i \leftrightarrow j}{=}\frac{4}{N} \sum_{\substack{(i,j)\in C^2 \\ k \in C}}
\psi(|x_{k}-x_{i}|)\langle v_{i}-v_{j}, \Gamma(v_{k}-v_{i}) \rangle \\ &\stackrel{i \leftrightarrow k}{=}\frac{2}{N}\sum_{\substack{(i,j)\in C^2 \\ k \in C}}
\psi(|x_{k}-x_{i}|)\langle v_{i}-v_{j}, \Gamma(v_{k}-v_{i})\rangle + \frac{2}{N}\sum_{\substack{(i,j)\in C^2 \\ k \in C}}
\psi(|x_{k}-x_{i}|)\langle v_{k}-v_{j}, \Gamma(v_{i}-v_{k})\rangle \\ &=\frac{2}{N}\sum_{\substack{(i,j)\in C^2 \\ k \in C}}
\psi(|x_{k}-x_{i}|)\langle v_{i}-v_{k}, \Gamma(v_{k}-v_{i})\rangle =-\frac{2}{N}\sum_{\substack{(i,j)\in C^2 \\ k \in C}}
\psi(|x_{k}-x_{i}|)\langle v_{k}-v_{i}, \Gamma(v_{k}-v_{i})\rangle
\\ &\stackrel{(A2)}{\leq}-\frac{2C_{1}|C|}{N} \sum \limits_{(i,j)\in C^2}
\psi(|x_{i}-x_{j}|)|v_{i}-v_{j}|^{2 \gamma} . \end{align*}
Then, using the definition of $\|x\|_{C}$, $\|v\|_{C}$ and the monotonicity of $\psi(\cdot)$
\begin{equation*} J_{1}\leq -2c_{0}\psi (\|x\|_{C})\|v\|_{C}^{2 \gamma}
\qquad \text{for}\quad c_{0}=\frac{C_{1}|C|}{N}.\end{equation*} For $J_{2}$ we have \begin{align*} J_{2}&=\frac{2}{N} \sum_{\substack{(i,j)\in C^2\\ k \not \in C}}
\psi(|x_{k}-x_{i}|)\langle v_{i}-v_{j}, \Gamma(v_{k}-v_{i}) \rangle -\frac{2}{N}\sum_{\substack{(i,j)\in C^2 \\ k \not \in C}}
\psi(|x_{k}-x_{j}|)\langle v_{i}-v_{j}, \Gamma(v_{k}-v_{j}) \rangle \\ &=\frac{2}{N}\sum_{\substack{(i,j)\in C^2 \\ k \not \in C}}
(\psi(|x_{k}-x_{i}|)-\psi(|x_{k}-x_{j}|))\langle v_{i}-v_{j}, \Gamma(v_{k}-v_{j}) \rangle \\ &+\frac{2}{N}\sum_{\substack{(i,j)\in C^2
\\ k \not \in C}} \psi(|x_{k}-x_{i}|)\langle v_{i}-v_{j}, \Gamma(v_{k}-v_{i})-\Gamma(v_{k}-v_{j}) \rangle :=J_{21}+J_{22} .\end{align*} The first term $J_{21}$ is bounded by \begin{equation*} J_{21} \leq \frac{2}{N} \Gamma_{M} L_{\delta}\sum_{\substack{(i,j)\in C^2
\\ k \not \in C}} |x_{i}-x_{j}|
|v_{i}-v_{j}|\leq 2 c_{1}\|x\|_{C}\|v \|_{C}, \quad c_{1}=\frac{N-|C|}{N}\Gamma_{M}L_{\delta},\end{equation*} where $L_{\delta}$ is the Lipschitz constant of $\psi(\cdot)$ on the interval $(\delta, \infty)$ and $\Gamma_{M}:=\max \limits_{v}
|\Gamma(v)|$. Similarly, since $|x_{k}-x_{i}|>\delta$, it follows that $\psi(|x_{k}-x_{i}|)<\psi(\delta)$ and we have the following bounds for the second term $J_{22}$: \\ If $\frac{1}{2} <\gamma <1$, \begin{equation*} J_{22} \leq \frac{4}{N} \psi(\delta) \Gamma_{M}\sum_{\substack{(i,j)\in C^2
\\ k \not \in C}} |v_{i}-v_{j}|\leq 2 c_{2}\|v\|_{C},\quad c_{2}=\frac{2|C|(N-|C|)}{N}\psi(\delta)\Gamma_{M}, \end{equation*} and if $1 \leq \gamma <\frac{3}{2}$ (using the Lipschitz property of
$\Gamma(\cdot)$, $|\Gamma(v)-\Gamma(w)|\leq L_{\Gamma}|v-w|$) \begin{equation*} J_{22}\leq \frac{2}{N}\psi(\delta) L_{\Gamma}\sum_{\substack{(i,j)\in C^2 \\ k \not \in C}}
|v_{i}-v_{j}|^2 \leq 2c_{2}\|v\|^{2}_{C}, \quad c_{2}=\frac{N-|C|}{N}\psi(\delta) L_{\Gamma}. \end{equation*} The derivation of estimates \eqref{In1}-\eqref{In2} is complete. We mention a couple of differences with the linear case $\Gamma(v)=v$. First, the term $J_{1}$ introduces the nonlinearity, which makes it impossible to use a differential Gronwall lemma, given the additional terms. Also, in contrast to the linear case where $J_{22}\leq 0$, here $J_{22}$ is an extra term that has to be handled.
We keep in mind that for the singular weights $\psi(s)=s^{-\alpha}$, $\alpha \geq 1$, that we consider, the primitive
$\Psi(s)=\int^{s}\psi(t)\,dt$ is also singular at $0$. The following bound on the increase of $\Psi(\|x\|_{C}(\cdot))$ on the interval $(s,t)$ is useful.
\begin{align} \nonumber |\Psi(\|x \|_{C}(t))| &= \Big| \int_{s}^{t}
\frac{d}{d\tau}\Psi(\| x\|_{C}(\tau))\, d\tau + \Psi(\|x
\|_{C}(s))\Big|
\\ \nonumber &= \Big|\int_{s}^{t}\Psi'(\|x \|_{C}(\tau))
\frac{d}{d\tau}\|x \|_{C}(\tau)\, d\tau + \Psi(\|x \|_{C}(s))\Big|
\\ \label{PsiEst1} & \leq \int_{s}^{t}\psi(\|x \|_{C}(\tau))\|v \|_{C}(\tau)\, d\tau + |\Psi(\|x \|_{C}(s))|. \end{align} If we can show that
$\Psi(\|x\|_{C}(t_{C}))<\infty$, the singularity of $\Psi(\cdot)$ at
$0$ implies that $\|x\|_{C}(t_{C})\neq 0$, which contradicts our initial hypothesis. In our study, we consider the cases $\gamma \in (\frac{1}{2},1]$ and $\gamma \in (1,\frac{3}{2})$ separately. The former is rather straightforward, while the latter requires more careful analysis.
$\bullet$ Case $\gamma \in (\frac{1}{2},1]$ : \\ In this case the argument is direct. From estimate \eqref{In1} we get that \begin{equation*}\int_{s}^{t_{C}}
\psi(\|x\|_{C}(\tau))\|v\|^{2\gamma -1}_{C}(\tau)\, d\tau < \infty .
\end{equation*} We have that $0 < 2\gamma -1 \leq 1$, which, combined with the fact that $\|v\|_{C}(t)\leq M$, yields \begin{equation*} \int_{s}^{t_{C}}
\psi(\|x\|_{C}(\tau))\|v\|_{C}(\tau)\, d\tau \leq M^{2-2\gamma}\int_{s}^{t_{C}} \psi(\|x\|_{C}(\tau))\|v\|^{2\gamma -1}_{C}(\tau)\, d\tau < \infty . \end{equation*} In view of \eqref{PsiEst1} we have
$\Psi(\|x\|_{C}(t_{C}))<\infty$.
$\bullet$ Case $\gamma \in (1,\frac{3}{2})$ :
\\This case is more elaborate. We know that $\|v\|_{C}(t)$ can only vanish at $t_{C}$; otherwise, because of \eqref{In2}, it would be $0$ on some interval $(s,t_{C})$ and $t_{C}$ could not be the time of the first collision. Thus, we have
\begin{equation}\label{In3} \frac{d}{dt}\|v\|_{C}(t)
\leq -c_{0}\psi(\|x\|_{C})\|v\|_{C}^{2 \gamma -1}(t)
+c_{1}\|x\|_{C}(t) +c_{2}\|v\|_{C}(t) .\end{equation} Although there is no Gronwall lemma we can use for \eqref{In3}, we can reach a contradiction by a qualitative analysis of \eqref{In3}. The idea is simple: for the three terms that appear in the rhs of \eqref{In3}, we study what happens when each of them is the dominant one as $t \nearrow t_{C}$. For this, we consider the following three cases
\begin{equation*}(C1) \quad \psi(\|x\|_{C}(t))\|v\|_{C}^{2 \gamma
-1}(t) <\|x\|_{C}(t), \qquad (C2) \quad
\psi(\|x\|_{C}(t))\|v\|_{C}^{2 \gamma -1}(t)< \|v\|_{C}(t) \end{equation*} and
\begin{equation*} (C3)\quad \frac{d}{dt}\|v\|_{C}(t)\leq
-\psi(\|x\|_{C}(t))\|v\|_{C}^{2\gamma -1}(t). \end{equation*} Notice that for now we assume that the constants satisfy $c_{0}=c_{1}=c_{2}=1$; later on we keep close track of all the constants involved.
We begin by checking what happens when each of $(C1)$-$(C3)$ holds on some interval $(t_{0},t_{C})$. When $(C1)$ holds on an interval
$(t_{0},t_{C})$, we show that $\|v\|_{C}$ is so small that a collision cannot happen in finite time. Indeed, we have
\begin{equation*} \|v\|_{C}(t) < \left(
\frac{\|x\|_{C}(t)}{\psi(\|x\|_{C}(t))} \right)^{\frac{1}{2\gamma
-1}}=\|x\|_{C}^{\frac{\alpha +1}{2\gamma -1}}(t) \qquad \text{for} \quad t \geq t_{0}.\end{equation*} By \eqref{EqMot}, we have
$\frac{d}{dt}\|x\|_{C}(t) > -\|v\|_{C}(t)\rightsquigarrow
\frac{d}{dt}\|x\|_{C}(t)>-\|x\|_{C}^{\frac{\alpha +1}{2\gamma -1}}(t)$. We solve this differential inequality by integrating from $t_{0}$ to $t$ to get
\begin{equation}\label{Sol1} \|x\|_{C}(t)
> \left(\|x\|_{C}^{\lambda}(t_{0})-\lambda (t-t_{0})\right)^{\frac{1}{\lambda}},\end{equation} where $\lambda =1-\frac{\alpha +1}{2\gamma -1}=\frac{2(\gamma -1)-\alpha}{2\gamma
-1}< 0$. Now, setting $t=t_{C}$ in \eqref{Sol1} leads to a contradiction: since $\lambda <0$, the right-hand side of \eqref{Sol1} is strictly positive at $t=t_{C}$, while $\|x\|_{C}(t_{C})=0$ by assumption. It is useful for our proof to compute the change of
$\Psi(\|x\|_{C}(\cdot))$ on the interval $(t_{0},t_{C})$.
\begin{align}\label{PsiEst2}|\Psi(\|x\|_{C}(t_{C}))|-|\Psi(\|x\|_{C}(t_{0}))|
\leq \int_{t_{0}}^{t_{C}}\psi(\|x\|_{C}(\tau)) \|v\|_{C}(\tau)\, d\tau < \int_{t_{0}}^{t_{C}}\|x\|_{C}^{\mu}(\tau)\, d\tau ,\end{align} where $\mu=-\alpha +\frac{\alpha +1}{2\gamma -1}=\frac{-2\alpha (\gamma -1)+1}{2\gamma -1}$. If $\mu \geq 0$, it follows trivially that
$|\Psi(\|x\|_{C}(t_{C}))|-|\Psi(\|x\|_{C}(t_{0}))| \leq R^{\mu}(t_{C}-t_{0})$. If on the other hand $\mu <0$, then we have using \eqref{Sol1} in \eqref{PsiEst2}
\begin{align} \nonumber |\Psi(\|x\|_{C}(t_{C}))|-|\Psi(\|x\|_{C}(t_{0}))|
&< \int_{t_{0}}^{t_{C}} (\|x\|_{C}^{\lambda}(t_{0})-\lambda (\tau-t_{0}))^{\nu} \, d\tau \\\nonumber & =\frac{1}{-\lambda (\nu
+1)}\left( \|x\|_{C}^{\lambda}(t_{0})-\lambda
(\tau-t_{0})\right)^{\nu +1}|_{t_{0}}^{t_{C}} \\ \label{PsiEst3}&=-\frac{1}{\lambda (\nu +1)}
\left( (\|x\|_{C}^{\lambda}(t_{0})-\lambda (t_{C}-t_{0}))^{\nu
+1}-\|x\|_{C}^{\lambda(\nu +1)}(t_{0})\right) < \infty , \end{align} where $\nu=\frac{\mu}{\lambda}=\frac{-2\alpha (\gamma -1)+1}{2(\gamma -1)-\alpha}>0$. The last inequality implies that
$|\Psi(\|x\|_{C}(t_{C}))|-|\Psi(\|x\|_{C}(t_{0}))| \leq \frac{C_{\nu}}{\nu +1} (t_{C}-t_{0})$, for some constant
$C_{\nu}:=C(\nu)>0$, due to the basic inequality $|a^p-b^p| \leq C_{p}|a-b|$, for $0<a,b<1$, $p\geq 1$.
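The role of the comparison solution \eqref{Sol1} can be illustrated numerically. In the sketch below (our illustration; the parameter values are arbitrary choices with $\alpha \geq 1$ and $\gamma \in (1,\frac{3}{2})$) we check by central differences that the lower bound solves the comparison ODE $\frac{d}{dt}X=-X^{\frac{\alpha+1}{2\gamma-1}}$, and that it remains strictly positive for all times, which is the source of the contradiction in case (C1).

```python
import numpy as np

def sol1_bound(t, x0, alpha, gamma):
    """Comparison solution (x0^lam - lam*t)^(1/lam) with lam = 1 - (alpha+1)/(2*gamma-1) < 0."""
    lam = 1.0 - (alpha + 1.0) / (2.0 * gamma - 1.0)
    return (x0 ** lam - lam * t) ** (1.0 / lam)

alpha, gamma, x0 = 1.0, 1.25, 0.5
e = (alpha + 1.0) / (2.0 * gamma - 1.0)      # exponent of the comparison ODE
h = 1e-6
errs = []
for t in [0.1, 1.0, 5.0]:
    # central-difference derivative of the bound vs. the ODE right-hand side
    dX = (sol1_bound(t + h, x0, alpha, gamma) - sol1_bound(t - h, x0, alpha, gamma)) / (2 * h)
    rhs = -sol1_bound(t, x0, alpha, gamma) ** e
    errs.append(abs(dX - rhs) / abs(rhs))
# the bound never reaches zero, so the distance cannot vanish in finite time
positivity = [sol1_bound(t, x0, alpha, gamma) > 0 for t in [10.0, 100.0, 1000.0]]
```

With $\lambda<0$ the base $x_{0}^{\lambda}-\lambda t$ grows linearly in $t$, so the bound decays only algebraically and stays positive, exactly as used in the proof.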
Similarly, if $(C2)$ holds on some interval $(t_{0},t_{C})$ we have
\begin{equation*} \|v\|_{C}(t)<\left(
\frac{1}{\psi(\|x\|_{C}(t))}\right)^{\frac{1}{2\gamma
-2}}=\|x\|_{C}^{\frac{\alpha}{2\gamma -2}}(t) \qquad \text{for} \quad t \geq t_{0} .\end{equation*} By \eqref{EqMot}, we have
$\frac{d}{dt}\|x\|_{C}(t)>-\|x\|_{C}^{\frac{\alpha}{2(\gamma
-1)}}(t)$. The solution is once again given by \eqref{Sol1}, only this time $\lambda=\frac{2(\gamma -1)-\alpha}{2(\gamma -1)}<0$. Setting $t=t_{C}$ in \eqref{Sol1} we get a contradiction as before. Also, the change of $\Psi(\|x\|_{C}(\cdot))$ over $(t_{0},t_{C})$ is bounded like in \eqref{PsiEst2}, with $\mu=-\alpha +\frac{\alpha}{2\gamma -2}$. Now
$\mu>0$ and we get the estimate $|\Psi(\|x\|_{C}(t_{C}))|-|\Psi(\|x\|_{C}(t_{0}))| \leq
R^{\mu}(t_{C}-t_{0})$. Overall, we have shown that $|\Psi(\|x\|_{C}(\cdot))|$ is Lipschitz in time when (C1) or (C2) holds, with a constant $C_{\mu}=R^{\mu}$ for $\mu \geq 0$, or $C_{\mu}=\frac{C_{\nu}}{\nu +1}$ for $\mu <0$.
Finally, we check what happens if $(C3)$ holds on interval
$(t_{0},t_{C})$. In this case, we show that although $\|v\|_{C}(t)$ is not necessarily ``small'', it decays to $0$ so quickly that the collision cannot happen in finite time. Using a Gronwall-type inequality for $(C3)$ we have
\begin{equation} \label{Sol2}\|v\|_{C}(t) \leq \left( \|v\|^{2-2\gamma}_{C}(t_{0})
+2(\gamma -1)\int_{t_{0}}^{t}\psi(\|x\|_{C}(\tau))\, d\tau \right)^{1/(2-2\gamma)} \qquad \text{for} \quad t \geq t_{0}. \end{equation} Then substituting eq. \eqref{Sol2} in \eqref{PsiEst1} we have
\begin{align*} |\Psi(\|x \|_{C}(t_{C}))| &\leq \int_{t_{0}}^{t_{C}}\psi(\|x \|_{C}(t))
\left( \|v\|^{2-2\gamma}_{C}(t_{0}) +2(\gamma
-1)\int_{t_{0}}^{t}\psi(\|x\|_{C}(\tau))\, d\tau
\right)^{\frac{1}{(2-2\gamma)}} dt +|\Psi(\|x \|_{C}(t_{0}))|
\\ &= -\frac{1}{3-2\gamma}\left( \|v\|^{2-2\gamma}_{C}(t_{0})
+2(\gamma -1)\int_{t_{0}}^{t}\psi(\|x\|_{C}(\tau))\, d\tau
\right)^{\frac{3-2\gamma}{2-2\gamma}} \Big|_{t=t_{0}}^{t=t_{C}}
+|\Psi(\|x\|_{C}(t_{0}))|
\\ &\leq \frac{1}{3-2\gamma}\|v\|^{3-2\gamma}_{C}(t_{0})
+|\Psi(\|x\|_{C}(t_{0}))|< \infty .\end{align*} We may get a similar estimate for (C3) by integrating the inequality
$\frac{1}{(3-2\gamma)}\frac{d}{dt}\|v\|^{3-2\gamma}_{C}\leq
-\psi(\|x\|_{C})\|v\|_{C}$ (which we get if we multiply (C3) by
$\|v\|^{2-2\gamma}_{C}$). Hence,
\begin{equation} \label{PsiEst4}|\Psi(\|x\|_{C}(t))|-|\Psi(\|x\|_{C}(s))| \leq - \frac{1}{(3-2\gamma)}
(\|v\|^{3-2\gamma}_{C}(t)-\|v\|^{3-2\gamma}_{C}(s)).\end{equation}
We now proceed to the last part of the proof. We have investigated what happens when there is an interval $(t_{0},t_{C})$ over which one of the terms in the rhs of \eqref{In3} is dominant over the others. Of course, this is by no means the only possible scenario. In reality, we have to exclude the possibility of infinite ``oscillations'' between cases (C1)-(C3) right before the collision. Therefore, we divide the interval $(t_{0},t_{C})$ into two regions depending on the dominant terms in \eqref{In3}, i.e. \begin{align}
\label{I1} & \frac{1}{2}c_{0}\psi(\|x\|_{C}(t))\|v\|_{C}^{2\gamma
-1}(t)< c_{1}\|x\|_{C}(t), \quad t \in I_{1}=(t_{0},t_{1})\cup \ldots \cup(t_{2n},t_{2n+1})\cup \ldots \\ &
\label{I2} \frac{d}{dt}\|v\|_{C}(t)<-\frac{1}{2}c_{0}
\psi(\|x\|_{C}(t))\|v\|_{C}^{2\gamma-1}(t)+c_{2}\|v\|_{C}(t), \quad t \in I_{2}=(t_{1},t_{2})\cup \ldots \cup(t_{2n+1},t_{2n+2})\cup \ldots \end{align}
We start with region $I_{1}$, which is essentially case (C1) studied earlier. When $t \in I_{1}$, we have $\|v\|_{C}(t) < c_{3}
\|x\|^{\frac{\alpha +1}{2\gamma -1}}_{C}(t)$,
$c_{3}=\left(\frac{2c_{1}}{c_{0}}\right)^{1/(2\gamma -1)}$. We have already proved the Lipshitz property of $\Psi(\|x\|_{C}(\cdot))$ on any interval in $I_{1}$. Region $I_{2}$ is a ``hybrid'' of cases
(C2) and (C3), and we want to know how $\Psi(\|x\|_{C}(\cdot))$ changes on intervals in $I_{2}$. First we multiply inequality \eqref{I2} by $\|v\|^{2-2\gamma}_{C}(t)$ to get
\begin{align} \label{In4} \frac{1}{3-2\gamma}\frac{d}{dt}\|v\|^{3-2\gamma}_{C}(t)<
-\frac{1}{2}c_{0}\psi(\|x\|_{C}(t))\|v\|_{C}(t)+c_{2}\|v\|^{3-2\gamma}_{C}(t).\end{align} Now, using the definition of $\Psi(\cdot)$ and integrating over the interval $(t_{2k+1},t_{2k+2})$, we get the analogue of \eqref{PsiEst4} for inequality \eqref{I2}:
\begin{align} \label{PsiEst5}|\Psi(\|x\|_{C}(t_{2k+2}))|-|\Psi(\|x\|_{C}(t_{2k+1}))| &\leq - \frac{2}{(3-2\gamma)c_{0}}
(\|v\|^{3-2\gamma}_{C}(t_{2k+2})-\|v\|^{3-2\gamma}_{C}(t_{2k+1}))
\\ \nonumber &+\frac{2c_{2}}{c_{0}}\int_{t_{2k+1}}^{t_{2k+2}}\|v\|^{3-2\gamma}_{C}(\tau)\, d\tau . \end{align}
The second right-hand side term in \eqref{PsiEst5} is bounded by $c_{4}(t_{2k+2}-t_{2k+1})$, with $c_{4}=\frac{2c_{2}}{c_{0}}M^{3-2\gamma}$. Summing this term from $k=0$ to $n$ we have $\sum \limits_{k=0}^{n}
\frac{2c_{2}}{c_{0}}\int_{t_{2k+1}}^{t_{2k+2}}\|v\|^{3-2\gamma}_{C}(\tau)\, d\tau \leq c_{4} m(I_{2})$, where $m(I_{2})$ is the Lebesgue measure of $I_{2}$. For the first term, we have from ineq. \eqref{In4} after we integrate from $t_{2k+1}$ to $t_{2k+2}$ that
$\|v\|^{3-2\gamma}_{C}(t_{2k+2})-\|v\|^{3-2\gamma}_{C}(t_{2k+1})\leq (3-2\gamma)c_{2}M^{3-2\gamma}(t_{2k+2}-t_{2k+1})$ and \begin{equation*} \sum
\limits_{k=0}^{n}(\|v\|^{3-2\gamma}_{C}(t_{2k+2})-\|v\|^{3-2\gamma}_{C}(t_{2k+1}))\leq (3-2\gamma)c_{2}M^{3-2\gamma} m(I_{2}) , \quad \forall n\geq 0. \end{equation*} Of course, this bound is not enough, since we also need a bound on this sum from below!
The trick here lies in the fact that $\|v\|_{C}(t)$ does not change drastically on $I_{1}$. We have shown that
$|\|x\|^{\lambda}_{C}(t_{2k+1})-\|x\|^{\lambda}_{C}(t_{2k})|\leq -\lambda c_{3}(t_{2k+1}-t_{2k})$, where $\lambda=1-\frac{\alpha +1}{2\gamma -1}<0$. Using the fact that
$\|v\|_{C}(t_{k})=c_{3}\|x\|^{\frac{\alpha +1}{2\gamma -1}}_{C}(t_{k})$ and Lemma 2 (with $p=\frac{-\lambda (2\gamma -1)}{\alpha +1}$, and $q=3-2\gamma$), we get
\begin{equation*}|\|v\|^{3-2\gamma}_{C}(t_{2k+1})-\|v\|^{3-2\gamma}_{C}(t_{2k})|< c_{5}(t_{2k+1}-t_{2k}),\qquad \text{for} \quad c_{5}=\frac{-\lambda c_{3}^{\frac{\lambda (2\gamma -1)}{\alpha +1}+1}}{C_{pq}}. \end{equation*} Hence, taking the sum we get \begin{equation}\label{In5}\sum
\limits_{k=0}^{n}|\|v\|^{3-2\gamma}_{C}(t_{2k+1})-\|v\|^{3-2\gamma}_{C}(t_{2k})|< c_{5} m(I_{1}), \quad \forall n\geq 0 .\end{equation} Finally, using \eqref{In5} we have
\begin{equation*} -\sum \limits_{k=0}^{n}(\|v\|^{3-2\gamma}_{C}
(t_{2k+2})-\|v\|^{3-2\gamma}_{C}(t_{2k+1}))\leq c_{5}
m(I_{1})+\|v\|^{3-2\gamma}_{C}(t_{0})-\|v\|^{3-2\gamma}_{C}(t_{2n+2}), \quad \forall n\geq 0 .\end{equation*}
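This last bound is simply telescoping combined with \eqref{In5}. Writing $w(t):=\|v\|^{3-2\gamma}_{C}(t)$ (a shorthand used only in this remark), we have
\begin{align*}
\sum \limits_{k=0}^{n}\bigl(w(t_{2k+2})-w(t_{2k+1})\bigr)+\sum \limits_{k=0}^{n}\bigl(w(t_{2k+1})-w(t_{2k})\bigr)=w(t_{2n+2})-w(t_{0}),
\end{align*}
so that
\begin{align*}
-\sum \limits_{k=0}^{n}\bigl(w(t_{2k+2})-w(t_{2k+1})\bigr)&=w(t_{0})-w(t_{2n+2})+\sum \limits_{k=0}^{n}\bigl(w(t_{2k+1})-w(t_{2k})\bigr)\\
&\leq w(t_{0})-w(t_{2n+2})+c_{5}\, m(I_{1}).
\end{align*}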
We now decompose $|\Psi(\|x\|_{C}(t_{2n+2}))|$ in the following manner
\begin{align*} |\Psi(\|x\|_{C}(t_{2n+2}))|&=\sum \limits_{k=0}^{n} \left(
|\Psi(\|x\|_{C}(t_{2k+2}))| -|\Psi(\|x\|_{C}(t_{2k+1}))| \right)
\\&+ \sum \limits_{k=0}^{n} \left( |\Psi(\|x\|_{C}(t_{2k+1}))| -
|\Psi(\|x\|_{C}(t_{2k}))| \right)+
|\Psi(\|x\|_{C}(t_{0}))|.\end{align*} We have done all the work required to bound the two sums and show that
$|\Psi(\|x\|_{C}(t_{2n+2}))|<\infty$. We mention that a decomposition could also be performed for
$|\Psi(\|x\|_{C}(t_{2n+1}))|$, with terms that are treated in a similar manner. We have already shown how to bound the first term of this decomposition when we treated the terms that appear in \eqref{PsiEst5}. Indeed,
\begin{align*} \sum \limits_{k=0}^{n} \left( |\Psi(\|x\|_{C}(t_{2k+2}))| -
|\Psi(\|x\|_{C}(t_{2k+1}))| \right) \leq \frac{2 c_{5}
m(I_{1})+2\|v\|^{3-2\gamma}_{C}(t_{0})-2\|v\|^{3-2\gamma}_{C}(t_{2n+2})}{(3-2\gamma)c_{0}} +c_{4} m(I_{2}).\end{align*} The second term of the decomposition can be easily bounded due to the Lipschitz property that we showed for (C1), so
\begin{align*} \sum \limits_{k=0}^{n} \left( |\Psi(\|x\|_{C}(t_{2k+1}))| -
|\Psi(\|x\|_{C}(t_{2k}))| \right) \leq c_{3} C_{\mu}\sum \limits_{k=0}^{n}(t_{2k+1}-t_{2k}) \leq c_{3} C_{\mu} m(I_{1})<\infty \qquad \forall n\geq 0.\end{align*}
Putting those two sums together, we have
\begin{align*} |\Psi(\|x\|_{C}(t_{C}))|\leq \limsup
|\Psi(\|x\|_{C}(t_{n}))|&< c_{3}C_{\mu} m(I_{1})+c_{4}m(I_{2}) \\&+ \frac{2 c_{5}
m(I_{1})+2\|v\|^{3-2\gamma}_{C}(t_{0})}{(3-2\gamma)c_{0}}
+|\Psi(\|x\|_{C}(t_{0}))| <\infty ,\end{align*} which contradicts our hypothesis of a collision at time $t_{C}$.
\end{proof}
\section{Uniform estimates on the particle distance for the communication weight $\psi_{\delta}(s)=(s-\delta)^{-\alpha}$, with $ \alpha \geq 2 \gamma$.}
In this section, we give estimates for the minimum inter-particle distance in the case of weights of the type $\psi_{\delta}(s)=(s-\delta)^{-\alpha}$ for some fixed $\delta \geq 0$. We introduce the distance function $\mathcal{L}^{\beta}(t)$ for the particle system $(x_{i}(t),v_{i}(t))$, with
$|x_{i}-x_{j}|>\delta$ for $1 \leq i \neq j \leq N$. \begin{align*} \mathcal{L}^{\beta}(t):=\frac{1}{N(N-1)}\sum
\limits_{i\neq j}(|x_{i}(t)-x_{j}(t)|-\delta)^{-\beta}\qquad \text{with} \quad \beta >0 .\end{align*} The symbol $\sum \limits_{i \neq j}$ is short for the sum over all pairs $i,j$ for which $i \neq j$. For the special case $\beta=0$ we define $\mathcal{L}^{0}(t):=\frac{1}{N(N-1)}\sum \limits_{i\neq j}\log
(|x_{i}(t)-x_{j}(t)|-\delta)$.
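For $\beta>0$, a finite value of $\mathcal{L}^{\beta}$ quantitatively controls the minimal inter-particle distance, since every summand is nonnegative and a single pair already dominates (an elementary observation we record for completeness):
\begin{equation*}
(|x_{i}(t)-x_{j}(t)|-\delta)^{-\beta}\leq N(N-1)\,\mathcal{L}^{\beta}(t),
\qquad \text{i.e.} \qquad
|x_{i}(t)-x_{j}(t)|\geq \delta +\bigl(N(N-1)\,\mathcal{L}^{\beta}(t)\bigr)^{-1/\beta}.
\end{equation*}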
This function is chosen so that if $\mathcal{L}^{\beta}(t)<\infty$
on some interval $[0,T]$, then it follows that particles do not collide and that $|x_{i}(t)-x_{j}(t)|>\delta $ for $i \neq j$ on
$[0,T]$, provided that $|x_{i0}-x_{j0}|>\delta $ for $i\neq j$. We will in fact show that if $\mathcal{L}^{\beta}(0)<\infty$, then for any given $T>0$ the quantity $\mathcal{L}^{\beta}(t)$ is bounded on $[0,T]$, with a bound that grows only linearly in $T$. Of course, the choice of $\beta$-distance we use depends directly on $\alpha$ and $\gamma$. In the spirit of \cite{CaChMuPe}, we introduce the maximal collisionless life-span of a solution with initial datum $x_{0}$, i.e. \begin{equation*} T_{0}:=\sup \{s \geq 0: \forall \, \text{solution}\, \, (x(t),v(t))\, \text{to problem \eqref{NL-CS}, there are no collisions on}\, \, [0,s) \}.\end{equation*} We then prove \begin{theorem} Suppose that $\alpha \geq 2 \gamma$ and that the CS system has initial data $(x_{i0},v_{i0})$ satisfying
$|x_{i0}-x_{j0}|>\delta$ for $1 \leq i \neq j \leq N$. Then, for any global smooth solution $(x(t),v(t))$ to the NL CS particle system we have $T_{0}=\infty$. Moreover, we have the following estimates for $t \in [0,T_{0})$: \\(i) For $\alpha =2 \gamma$ we have \begin{align} \label{Es1} \mathcal{L}^{0}(t)+
\frac{1}{2C_{1} \gamma (N-1)}\sum \limits_{i}|v_{i}(t)|^2 \leq \frac{2 \gamma -1}{2 \gamma}t+ \mathcal{L}^{0}(0)+ \frac{1}{2 C_{1}
\gamma (N-1)}\sum \limits_{i}|v_{i0}|^2 . \end{align} \\(ii) For $\alpha >2 \gamma$ we choose $\beta=\frac{\alpha}{2\gamma} -1$ and have \begin{align}\label{Es2}\mathcal{L}^{\beta}(t)+
\frac{\beta}{2C_{1} \gamma (N-1)}\sum \limits_{i}|v_{i}(t)|^2 \leq \frac{(2 \gamma -1)\beta}{2 \gamma}t +\mathcal{L}^{\beta}(0)+
\frac{\beta}{2C_{1} \gamma (N-1)}\sum \limits_{i}|v_{i0}|^2 . \end{align} \end{theorem}
\begin{proof} First, observe that for $\beta > 0$ we have \begin{equation*} \frac{d}{dt}\mathcal{L}^{\beta}(t)=-\frac{\beta}{N(N-1)}\sum
\limits_{i\neq j}(|x_{i}-x_{j}|-\delta)^{-\beta-1} \left \langle
\frac{x_{i}-x_{j}}{|x_{i}-x_{j}|}, v_{i}-v_{j}\right \rangle \end{equation*} and \begin{equation*}\frac{d}{dt}\mathcal{L}^{0}(t)=\frac{1}{N(N-1)}\sum
\limits_{i\neq j}(|x_{i}-x_{j}|-\delta)^{-1} \left \langle
\frac{x_{i}-x_{j}}{|x_{i}-x_{j}|}, v_{i}-v_{j}\right\rangle \end{equation*} \\(i) If $\alpha =2 \gamma$, then we choose the $\beta$-distance with $\beta=\frac{\alpha}{2 \gamma} -1=0$. We then have \begin{align*} \frac{d}{dt}\mathcal{L}^{0}(t)&=\frac{1}{N(N-1)}\sum \limits_{i \neq j}
(|x_{i}-x_{j}|-\delta)^{-\frac{\alpha}{2 \gamma}} \left \langle
\frac{x_{i}-x_{j}}{|x_{i}-x_{j}|}, v_{i}-v_{j} \right \rangle \\ & \leq \frac{1}{N(N-1)}\left( \frac{2 \gamma -1}{2 \gamma}\sum \limits_{i \neq j} 1 + \frac{1}{2 \gamma}\sum \limits_{i \neq j}
(|x_{i}-x_{j}|-\delta)^{-\alpha}|v_{i}-v_{j}|^{2 \gamma} \right) \\ &\leq \frac{2 \gamma -1}{2 \gamma}+ \frac{1}{2\gamma N(N-1)}
\sum \limits_{i \neq j} \psi_{\delta}(|x_{i}-x_{j}|)|v_{i}-v_{j}|^{2 \gamma}. \end{align*} Here we used Young's inequality $ab\leq \frac{a^p}{p}+\frac{b^q}{q}$, for
$a=(|x_{i}-x_{j}|-\delta)^{-\frac{\alpha}{2 \gamma}}|v_{i}-v_{j}|$, $b=1$ and $p=2 \gamma$, $q=\frac{2 \gamma}{2 \gamma -1}$. Finally, by using the second moment estimate in \eqref{mom-eq} we have \begin{equation*} \frac{d}{dt}\left( \mathcal{L}^{0}(t)+
\frac{1}{2C_{1} \gamma (N-1)}\sum \limits_{i}|v_{i}(t)|^2\right) \leq \frac{2 \gamma -1}{2\gamma}.\end{equation*} Integrating from $0$ to $t$ we get our estimate. \\(ii) If $\alpha >2 \gamma$, we choose $\beta=\frac{\alpha}{2 \gamma}-1$ once again. We similarly have \begin{align*} \frac{d}{dt}\mathcal{L}^{\beta}(t)\leq \frac{(2\gamma -1)\beta}{2 \gamma}+ \frac{\beta}{2\gamma N(N-1)}
\sum \limits_{i \neq j} \psi_{\delta}(|x_{i}-x_{j}|)|v_{i}-v_{j}|^{2 \gamma},\end{align*} that yields \begin{equation*} \frac{d}{dt}\left( \mathcal{L}^{\beta}(t)+
\frac{\beta}{2C_{1} \gamma (N-1)}\sum \limits_{i}|v_{i}(t)|^2\right) \leq \frac{(2 \gamma -1)\beta}{2\gamma},\end{equation*} which gives our estimate.
\end{proof}
\begin{remark} We note that the estimates \eqref{Es1}--\eqref{Es2} just derived generalize those in \cite{CaChMuPe}, as they are valid for any $\gamma >\frac{1}{2}$. Another interesting observation is that there is no need to use Gronwall's lemma in their derivation. As a result, the minimum inter-particle distance estimate grows in time like $O(t)$ instead of $O(e^{Ct})$, which improves the estimates derived in \cite{CaChMuPe}. \end{remark}
We may now give a slightly more general version of the uniform estimates presented above. For this, we assume that the communication weight is not necessarily of the type $\psi(s)=s^{-\alpha}$, but has a primitive $\Psi(\cdot)$ that is singular at $0$, and the rate at which $\Psi$ becomes singular at $0$ is sufficiently fast. We introduce a $\beta$-distance related to this $\Psi(\cdot)$ by \begin{equation*} \mathcal{L}^{\beta}(t):=\frac{1}{N(N-1)}
\sum \limits_{i \neq j} |\Psi(|x_{i}(t)-x_{j}(t)|)|^{\beta} \qquad \text{for}\quad \beta>0.\end{equation*} Similarly, we define $\mathcal{L}^{0}(t):=\frac{1}{N(N-1)}\sum \limits_{i \neq j}\log
|\Psi(|x_{i}(t)-x_{j}(t)|)|$. Then, by a computation based on the same elementary techniques as in the previous result, we show
\begin{theorem} Consider system \eqref{NL-CS} with $\gamma > \frac{1}{2}$ and initial data $(x_{0},v_{0})$ that satisfy \begin{equation*} x_{i0}\neq x_{j0}\qquad \text{for} \quad i \neq j .\end{equation*} We also assume that the communication weight $\psi(\cdot)$ has a primitive $\Psi(s)$ that is singular at $s=0$ and satisfies the inequality \begin{equation} \label{As1} \Psi'(s) \leq C
|\Psi(s)|^{(1-\beta)2\gamma /(2\gamma -1)} \qquad \text{for some} \quad C>0 \, \, \, \text{and} \, \, \, 1> \beta \geq 0. \end{equation} Then any solution to \eqref{NL-CS} remains non-collisional for $t>0$. Moreover, we have for $\beta>0$ \begin{align}\label{Es3}\mathcal{L}^{\beta}(t)+
\frac{\beta}{2C_{1} \gamma (N-1)}\sum \limits_{i}|v_{i}(t)|^2 \leq C \frac{(2 \gamma -1)\beta}{2 \gamma}t +\mathcal{L}^{\beta}(0)+
\frac{\beta}{2C_{1} \gamma (N-1)}\sum \limits_{i}|v_{i0}|^2 , \end{align} and for $\beta=0$ \begin{align} \label{Es4} \mathcal{L}^{0}(t)+
\frac{1}{2C_{1} \gamma (N-1)}\sum \limits_{i}|v_{i}(t)|^2 \leq C \frac{2 \gamma -1}{2 \gamma}t+ \mathcal{L}^{0}(0)+ \frac{1}{2C_{1}
\gamma (N-1)}\sum \limits_{i}|v_{i0}|^2 . \end{align} \end{theorem}
\begin{proof} First, let us calculate the time evolution of $\mathcal{L}^{\beta}(t)$ for $\beta >0$ \begin{align*} \frac{d}{dt}\mathcal{L}^{\beta}(t)&=\frac{\beta}{N(N-1)} \sum \limits_{i \neq j}
\Psi'(|x_{i}-x_{j}|)|\Psi(|x_{i}-x_{j}|)|^{\beta
-1}\frac{\Psi(|x_{i}-x_{j}|)}{|\Psi(|x_{i}-x_{j}|)|}\left \langle
\frac{x_{i}-x_{j}}{|x_{i}-x_{j}|},v_{i}-v_{j} \right \rangle
\\ &\leq \frac{\beta(2 \gamma -1)}{2 \gamma N(N-1)} \sum \limits_{i \neq j}\Psi'(|x_{i}-x_{j}|)
|\Psi(|x_{i}-x_{j}|)|^{(\beta -1)2\gamma /(2 \gamma -1)} \\
& \qquad \qquad + \frac{\beta}{2 \gamma N(N-1)} \sum \limits_{i \neq j}\Psi'(|x_{i}-x_{j}|) |v_{i}-v_{j}|^{2 \gamma}. \end{align*}
Once again we made use of Young's inequality for $a=|v_{i}-v_{j}|$,
$b=|\Psi(|x_{i}-x_{j}|)|^{\beta -1}$ and $p=2 \gamma$, $q=\frac{2 \gamma}{2 \gamma -1}$.
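Spelled out, with these choices Young's inequality reads
\begin{equation*}
|v_{i}-v_{j}|\, |\Psi(|x_{i}-x_{j}|)|^{\beta -1}\leq \frac{|v_{i}-v_{j}|^{2\gamma}}{2\gamma}+\frac{2\gamma -1}{2\gamma}\, |\Psi(|x_{i}-x_{j}|)|^{(\beta -1)2\gamma /(2\gamma -1)},
\end{equation*}
and multiplying by $\frac{\beta}{N(N-1)}\Psi'(|x_{i}-x_{j}|)\geq 0$ and summing over $i\neq j$ produces the two terms in the display above.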
Now using condition \eqref{As1} and the second moment estimate in \eqref{mom-eq} we get \begin{equation*} \frac{d}{dt}\left( \mathcal{L}^{\beta}(t) +\frac{\beta}{2C_{1}\gamma (N-1)}\sum
\limits_{i}|v_{i}(t)|^2\right)\leq C \frac{\beta (2\gamma -1)}{2\gamma}.\end{equation*} For $\beta=0$, we get \begin{equation*} \frac{d}{dt}\left( \mathcal{L}^{0}(t)
+\frac{1}{2C_{1}\gamma (N-1)}\sum \limits_{i}|v_{i}(t)|^2\right)\leq C \frac{2\gamma -1}{2\gamma}.\end{equation*} Integrating from $0$ to $t$ yields \eqref{Es3} and \eqref{Es4}. \end{proof}
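As a simple illustration of condition \eqref{As1} (a sketch, for the model weight only): take $\psi(s)=s^{-2\gamma}$, i.e. $\alpha =2\gamma$, and $\beta=0$. Then $\Psi(s)=\frac{s^{1-2\gamma}}{1-2\gamma}$ is singular at $s=0$, $\Psi'(s)=s^{-2\gamma}$, and
\begin{equation*}
|\Psi(s)|^{2\gamma /(2\gamma -1)}=(2\gamma -1)^{-\frac{2\gamma}{2\gamma -1}}\, s^{-2\gamma},
\end{equation*}
so \eqref{As1} holds with $C=(2\gamma -1)^{\frac{2\gamma}{2\gamma -1}}$, recovering the case $\alpha =2\gamma$, $\delta =0$ of the previous theorem.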
E-MAIL: [email protected]
\end{document}
\begin{definition}[Definition:Classes of WFFs/Plain WFF]
A '''plain WFF''' of predicate logic is a WFF with no parameters.
Thus $\map {WFF} {\PP, \FF, \O}$ is the set of all '''plain WFFs''' with relation symbols from $\PP$ and function symbols from $\FF$.
It is immediate that a '''plain WFF''' is a WFF with parameters from $\KK$ for ''all'' choices of $\KK$.
\end{definition}
Thursday, October 18, 2018
Dance, Heritage, and the Island: A Cuban in Oakland with Royland Lobato
Lecture | October 18 | Berkeley Art Museum and Pacific Film Archive
Speaker/Performer: Royland Lobato
Sponsor: Arts + Design
In 2005, Royland Lobato arrived in the Bay Area from his native Cuba. Born in Guantanamo, his fascination with the folklore of the island drove him to become a teacher of Cuba's musical and dance traditions, especially its Afro-Cuban elements, but also its contemporary popular expression, such as rueda de casino, rumba, son, and other forms. In this lecture, Lobato will discuss his experience as...
Coping with Backlash Against Globalization: National and Firm Strategies
Conference/Symposium | October 18 – 19, 2018 every day | 9 a.m.-6 p.m. | 180 Doe Library
Sponsors: Institute of East Asian Studies (IEAS), Mr. & Mrs. S.H. Wong Center for the Study of Multinational Corporations, Berkeley APEC Study Center (BASC), Center for Long-term Cyber Security, MSPL Ltd, The Clausen Center, Center for Chinese Studies (CCS), Center for Japanese Studies (CJS), Center for Korean Studies (CKS), Institute for South Asia Studies, Institute of International Studies
The rise of trade protectionism, authoritarianism, China, and data competition are all critical drivers of the global economy. We have seen the consequences of these drivers in the move to Brexit, the election of Trump, the promotion of rival trade and financial arrangements by the Chinese, and cyber operations that are a form of societal warfare...
Applied Math Seminar: Dynamical mean-field theory to strongly correlated electronic systems
Seminar | October 18 | 11 a.m.-12 p.m. | 732 Evans Hall
Speaker: Jianxin Zhu, Los Alamos National Laboratory
Sponsor: Department of Mathematics
Electronic correlation effects play an important role in emergent phenomena such as Mott-insulator-metal transitions and unconventional superconductivity. Understanding these effects presents a theoretical challenge. In this talk, we will give an overview of dynamical mean-field theory (DMFT) and its combination with the local density approximation in density functional theory. Representative quantum...
Econ 235, Financial Economics Seminar: "TBA"
Seminar | October 18 | 11:10 a.m.-12:30 p.m. | C210 Haas School of Business
Speaker/Performer: Haoxiang Zhu, MIT
Sponsor: Department of Economics
Joint with the Haas Finance Seminar
Bancroft Library Roundtable: Education as the Project of Freedom: A Study of the Berkeley Experimental Schools Project, 1968-76
Lecture | October 18 | 12-1 p.m. | Faculty Club, O'Neill Room
Speaker/Performer: Joanne Tien, UC Berkeley
Sponsor: Bancroft Library
Joanne Tien will discuss how teachers and students in the Berkeley Experimental Schools Project navigated the ideological tension between constructivist pedagogical approaches and the cultivation of explicit political values that challenge systems of oppression.
Attendance restrictions: The O'Neill Room has a maximum capacity of 28 people. The doors will be shut and no more attendees may enter once the room is at capacity.
Oliver E. Williamson Seminar: "Norms in Bargaining: Evidence from Government Formation"
Seminar | October 18 | 12-1:30 p.m. | C330 Haas School of Business
Speaker: Thomas Fujiwara, Princeton
The Oliver E. Williamson Seminar on Institutional Analysis, named after our esteemed colleague who founded the seminar, features current research by faculty, from UCB and elsewhere, and by advanced doctoral students. The research investigates governance, and its links with economic and political forces. Markets, hierarchies, hybrids, and the supporting institutions of law and politics all come...
Cook Well Berkeley Healthy Cooking Series: One Pot Meals (BEUHS641)
Workshop | October 18 | 12:10-1 p.m. | Tang Center, University Health Services, Section Club
Speaker/Performer: Kim Guess, RD, Be well at Work - Wellness
Sponsor: Be Well at Work - Wellness
Making an entire meal in just one pot (or pan) can save you time not only in prep, but also in clean up! These recipes are not your mother's casserole – you will learn exciting and healthy new meal ideas. Presentation, demonstration, sample and recipes provided.
Registration info: Register online
Research Colloquium: Dr. Uri Aviram: Mental Health Reform: Lessons from the Israeli Experience
Colloquium | October 18 | 12:10-1:30 p.m. | Haviland Hall, Commons/116
Sponsor: Social Welfare, School of
In July 2015, a major mental health reform in Israel went into effect. It transferred responsibility for hospital and ambulatory mental health services from government to the health maintenance organizations (HMO), integrating mental health services into the general health care system. This study examined the opportunities and challenges associated with implementation of this reform and comments...
IB Seminar: Title to be announced
Seminar | October 18 | 12:30-1:30 p.m. | 2040 Valley Life Sciences Building | Canceled
Featured Speaker: Dan Costa, University of California, Santa Cruz
Sponsor: Department of Integrative Biology
IB Seminar: HIV drug resistance evolution: why it was rampant, and how it became rare
Seminar | October 18 | 12:30-1:30 p.m. | 2040 Valley Life Sciences Building
Featured Speaker: Pleuni Pennings, San Francisco State University
Seminar 251, Labor Seminar: "How workplace consolidation affects workers: Evidence from Germany"
Seminar | October 18 | 12:30-2 p.m. | 648 Evans Hall | Note change in time
Featured Speaker: Kevin Todd, UCB
Sponsor: Center for Labor Economics
CEDSOC Community Snacktime
Social Event | October 18 | 1-3 p.m. | Wurster Hall, Landscape Courtyard
Sponsor: College of Environmental Design Students of Color
Take a break from the grind of the semester and chat and enjoy snacks with your fellow students in the Landscape Courtyard!
Sponsor: Botanical Garden
Join us for a free, docent-led tour of the Garden as we explore interesting plant species, learn about the vast collection, and see what is currently in bloom. Meet at the Entry Plaza.
Free with Garden admission
Advanced registration not required
Tours may be cancelled without notice.
For day-of inquiries, please call 510-643-2755
For tour questions, please email [email protected]...
Seminar 251, Labor Seminar: "Housing Market Channels of Segregation"
Seminar | October 18 | 2-3:30 p.m. | 648 Evans Hall
Featured Speaker: Nicholas Li, UCB
Workshop | October 18 | 3-4 p.m. | 9 Durant Hall
Speaker: Leah Carroll, Haas Scholars Program Manager/Advisor, Office of Undergraduate Research and Scholarships
Sponsor: Office of Undergraduate Research
If you need to write a grant proposal, this workshop is for you! You'll get a headstart on defining your research question, developing a lit review and project plan, presenting your qualifications, and creating a realistic budget.
The workshop is open to all UC Berkeley students (undergraduate, graduate, and visiting scholars) regardless of academic discipline. It will be especially useful for...
A cortical reinforcement prediction error encoded by VIP interneurons
Seminar | October 18 | 3:30-4:30 p.m. | 101 Life Sciences Addition
Featured Speaker: Adam Kepecs, Cold Spring Harbor Laboratory
Sponsor: Department of Molecular and Cell Biology
This seminar is partially sponsored by NIH
Computing Elastic Parameters to Discover High Strength, Ductile Structural Alloys
Seminar | October 18 | 4-5 p.m. | 348 Hearst Memorial Mining Building
Speaker/Performer: Dr. Ian Stewart Winter, Postdoc, Materials Science & Engineering, UC Berkeley
Sponsor: Materials Science and Engineering (MSE)
The computer-aided discovery of structural alloys is a burgeoning area of research. A primary challenge in the field is to identify computable screening parameters that embody key structural alloy properties, such as strength and ductility. In this talk two parameters are introduced that attempt to deal with these two properties. First, an elastic anisotropy parameter that captures a material's...
The Screen in Sound: Toward a Theory of Listening
Lecture | October 18 | 4-6 p.m. | 370 Dwinelle Hall
Featured Speaker: Rey Chow, Anne Firor Scott Professor of Literature in Trinity College of Arts and Sciences, Duke University
Sponsor: Department of Gender and Women's Studies
This lecture is drawn from Rey Chow's chapter in the anthology Sound Objects (Duke UP, forthcoming), ed. James A. Steintrager and Rey Chow. By foregrounding crucial connections among sound studies, poststructuralist theory, and contemporary acousmatic experiences, the lecture presents listening as a trans-disciplinary problematic through which different fields of study resonate in fascinating ways.
Why the Constitution? The Problem of Taxes and Slavery
Lecture | October 18 | 4-6 p.m. | Dwinelle Hall, This is a webinar event.
Speaker/Performer: Robin Einhorn, Professor, Department of History
Sponsor: UC Berkeley History-Social Science Project
UCBHSSP is pleased to co-sponsor with the National Humanities Center, this virtual scholar talk with the Professor Robin Einhorn of the UC Berkeley Department of History.
This webinar will examine the relevant clauses of the Articles of Confederation and the Constitution, along with extracts from and letters about the key debates in the Continental Congress, Philadelphia convention, and some... More >
Attendance restrictions: This is a virtual event.
Majors, Minors, Declaring, Oh My! How to Find the Major That's Right For You: L&S Workshop Series Ursa Major
Workshop | October 18 | 4-6 p.m. | 3401 Dwinelle Hall
Performer Group: Graduate Mentors, College of L&S
Sponsors: College of Letters & Science, L&S Graduate Mentors
A workshop focused on the different ways to research majors and interdisciplinary majors.
Seminar 242, Econometrics: "Reasonable Doubt: When are Callbacks a Crime?"
Speaker/Performer: Christopher Walters, UCB
Mathematics Department Colloquium: Singularities of solutions of the Hamilton-Jacobi equation. A toy model: distance to a closed subset.
Speaker: Albert Fathi, Georgia Institute of Technology
This is a joint work with Piermarco Cannarsa and Wei Cheng.
If A is a closed subset of the Euclidean space $R^k$, the Euclidean distance function $d_A : R^k \to [0, + \infty[$ is defined by
$$d_A(x) = \min_{a \in A} \|x - a\|.$$
This function is Lipschitz, therefore differentiable almost everywhere. We will give topological properties of the set Sing(F) of points in $R^k \setminus...
BODY MUSIC: The Oldest Music on the Planet
Sponsor: Department of Music
Body Music, also known as Body Percussion and Body Drumming, is the oldest music on the planet. Before people were hollowing logs and slapping rocks, they were using their bodies to stomp, clap, sing, snap and grunt their musical ideas. There are many traditional Body Musics in the world, from African-American Hambone and Flamenco Palmas to Sumatran Saman and Ethiopian Armpit music. Join our...
Transformation Of Backward Politics In India: The Case Of Uttar Pradesh
Lecture | October 18 | 5-6:30 p.m. | 223 Moses Hall
Speaker/Performer: Gilles Verniers, Assistant Professor of Political Science and Director of the Trivedi Centre for Political Data, Ashoka University
Sponsors: Institute of International Studies, Institute for South Asia Studies
Electoral politics in the state of Uttar Pradesh is undergoing profound changes. A long phase of explicit caste and religion-based electoral politics has given way to inclusive political discourses and electoral strategies that have produced more diverse assemblies, in terms of caste and community composition. At the same time, a new political class has emerged, grounded in local business...
UROC (Underrepresented Researchers of Color): Proposal/Personal Statement Writing
Workshop | October 18 | 5:30-6:30 p.m. | 9 Durant Hall
Thinking about research, but don't have experience writing a proposal? Come talk to us! We'll be giving a presentation on how to approach each step that goes along with putting together a proposal.
Race in Brazil: A Historical Overview
Lecture | October 18 | 6-8 p.m. | Hearst Museum of Anthropology
Sponsor: Phoebe A. Hearst Museum of Anthropology
Brazil was the site of the largest slave-based economy in the Americas and the last country in the hemisphere to abolish the institution. For most of the twentieth century, Brazil was described as a "racial democracy" – a place where clear racial categories and race-based discrimination do not exist. This presentation discusses the history of slavery, emancipation, and post-emancipation in Brazil...
Interdisciplinary Marxist Working Group
Meeting | September 20 – December 13, 2018 every other Thursday | 6-8 p.m. | 306 Wheeler Hall
Sponsor: Department of English
Please join us for this semester's first meeting of the IMWG on Thursday, Sept 6 from 6-8pm in the Wheeler English lounge. We will be continuing with where we left off last semester in Capital, with plans to finish volume 1 by early October.
No prior knowledge of Capital or Marx is required, and everyone is welcome. I'm attaching a rough schedule, as our readings after Capital vol 1 will...
Online Fact-Finding and the Future of Journalism
Panel Discussion | October 18 | 6-7 p.m. | North Gate Hall, Library
Sponsors: Graduate School of Journalism, Human Rights Center
*** LIVESTREAM the event at 6:00PM (PST): https://www.youtube.com/watch?v=zf0BavI1bB8 ***
Journalism, law, and human rights advocacy in the 21st century are ever-more dependent on online fact-finding and analysis—each upended by fundamental changes in technology that have hastened the flow of information and misinformation. And yet the craft and practice of accessing, parsing, and analyzing...
RSVP recommended
RSVP info: RSVP online
The Unconscious Is Structured Like a Workplace: Brainwork, Artwork and the Divided Labor of Thought in Late-Victorian Fiction
Lecture | October 18 | 6-7:30 p.m. | Wheeler Hall, 330, English Department Lounge
Speaker: Emily Steinlight, Stephen M. Gorn Family Assistant Professor of English, Penn Arts & Sciences
This talk will find a prehistory of the contemporary problematic of the "creative economy" in two late-Victorian novels of the art world, Oscar Wilde's The Picture of Dorian Gray and George Du Maurier's Trilby. Examining the relation they plot out between psychic processes and aesthetic production, it will assess how these narratives track art to unconscious sources that strangely resemble the...
Holloway Series in Poetry: Tom Pickard
Reading - Literary | October 18 | 6:30-10 p.m. | Wheeler Hall, 315, Maude Fife
Featured Performer: Tom Pickard
Sponsors: Department of English, Holloway Reading Series
Thought Lounge: Come for free dinner and a chance to talk with homeless community advocates and experts about the issues surrounding People's Park!
Meeting | October 18 | 9-11 p.m. | B1 Hearst Field Annex
Sponsor: Suitcase Clinic
The Suitcase Clinic's Advocacy Task Force, a student-run group involved in advocating for the locally underserved and homeless community, will be hosting an event called the Thought Lounge, an open space to learn and discuss more about the history and value of People's Park from 6:00-7:30 pm on Thursday, October 18 in Hearst Field Annex, Room B1. This event will feature FREE DINNER and a Q&A...
Attendance restrictions: First 25 People to RSVP at: tinyurl.com/thoughtlounge1
ARCHITECTURE EXHIBITION: PLACE, CULTURE, TIME - DESIGN IN DRASTICALLY CHANGING CHINA
Exhibit - Multimedia | August 29 – October 21, 2018 every day | 210 Wurster Hall
Sponsor: Environmental Design, College of
ON VIEW: AUG 29-OCT 21. Works of He Jingtang over the past three decades and their profound reflections on place, culture, time, and future urban development. Free and open to all!
Luminous Disturbances: Paintings by Kara Maria
Sponsor: Townsend Center for the Humanities
Kara Maria's "cheerfully apocalyptic" paintings engage with a host of political issues, including war and environmental destruction. On display at the Townsend Center for the Humanities Sept 10 - Dec 14, 2018.
Let there be laughter! This exhibition features Cal students' cartoons, jokes, and satire from throughout the years, selected from their humor magazines and other publications.
Immigration, Deportation and Citizenship, 1908-2018: Selected Resources from the IGS and Ethnic Studies Libraries
Exhibit - Artifacts | August 31 – December 10, 2018 every day | Moses Hall, IGS Library - 109 Moses
Sponsors: Institute of Governmental Studies Library, Ethnic Studies Library
"Immigration, Deportation and Citizenship, 1908-2018: Selected Resources from the IGS and Ethnic Studies Libraries" contains items from the Ethnic Studies Library and the Institute of Governmental Studies Library addressing historical attitudes and policy around immigration, deportation, and citizens' rights, as well as monographs and ephemera relating to current events.
The Handmaid's Tale: an exhibit at Moffitt Library
Exhibit - Multimedia | September 5 – December 31, 2018 every day | Moffitt Undergraduate Library, 3rd Floor near Elevators
Sponsor: Library
The new Moffitt Library exhibit explores the themes and antecedents of The Handmaid's Tale, this year's On the Same Page program selection. On exhibit are library materials and quotes that demonstrate that not only were we wrong to say "it can't happen here" - it has already happened, all over the world: Berlin, Nazi Germany, Argentina, and yes, here in the US.
Attendance restrictions: UC Berkeley ID required for entrance to Moffitt Library.
Califas: Art of the US-Mexico Borderlands
Exhibit - Multimedia | September 11 – November 16, 2018 every day | Richmond Art Center (2540 Barrett Ave, Richmond, CA)
ON VIEW: SEPT 11-NOV 16 @ the Richmond Art Center. The exhibition, co-curated by Professors Michael Dear & Ronald Rael, explores representations of the US-Mexico 'borderlands' in contemporary art. Free & open to all!
Art for the Asking: 60 Years of the Graphic Arts Loan Collection at the Morrison Library
Exhibit - Artifacts | September 17, 2018 – February 28, 2019 every day | Doe Library, Bernice Layne Brown Gallery
Art for the Asking: 60 Years of the Graphic Arts Loan Collection at the Morrison Library will be up in Doe Library's Brown Gallery until March 1st, 2019. This exhibition celebrates 60 years of the Graphic Arts Loan Collection, and includes prints in the collection that have not been seen in 20 years, as well as prints that are now owned by the Berkeley Art Museum. There are also cases dedicated...
Boundless: Contemporary Tibetan Artists at Home and Abroad
Exhibit - Painting | October 3, 2018 – May 26, 2019 every day | Berkeley Art Museum and Pacific Film Archive
Sponsor: Berkeley Art Museum and Pacific Film Archive
Featuring works by internationally renowned contemporary Tibetan artists alongside rare historical pieces, this exhibition highlights the ways these artists explore the infinite possibilities of visual forms to reflect their transcultural, multilingual, and translocal lives. Though living and working in different geographical areas—Lhasa, Dharamsala, Kathmandu, New York, and the Bay Area—the... More >
Harvey Quaytman: Against the Static
Exhibit - Painting | October 17, 2018 – January 27, 2019 every day | Berkeley Art Museum and Pacific Film Archive
The paintings of Harvey Quaytman (1937–2002) are distinct for their novel explorations of shape, drawing, texture, geometric pattern, and color application. While his works display a rigorous experimentation with formalism and materiality, they are simultaneously invested with rich undertones of sensuality, complexity, and humor. This new retrospective exhibition charts the trajectory of... More >
Art Wall: Barbara Stauffacher Solomon
Exhibit - Painting | August 15, 2018 – March 3, 2019 every day | Berkeley Art Museum and Pacific Film Archive
The 1960s architectural phenomenon Supergraphics—a mix of Swiss Modernism and West Coast Pop—was pioneered by San Francisco–based artist, graphic and landscape designer, and writer Barbara Stauffacher Solomon. Stauffacher Solomon, a UC Berkeley alumna, is creating new Supergraphics for BAMPFA's Art Wall. Land(e)scape 2018 is the fifth in a series of temporary, site-specific works commissioned for... More >
Old Masters in a New Light: Rediscovering the European Collection
Exhibit - Painting | September 19 – December 16, 2018 every day | Berkeley Art Museum and Pacific Film Archive
Since 1872, the University of California, Berkeley has been collecting works by European artists, building a collection that includes many rare and exceptional works distinguished by artistic innovation, emotional and psychological depth, and technical virtuosity. Consisting mostly of gifts from professors, alumni, and other supporters, the collection continues to evolve, representing artistic... More >
Bearing Light: Berkeley at 150
Exhibit - Artifacts | April 16, 2018 – February 28, 2019 every Monday, Tuesday, Wednesday, Thursday & Friday | 8 a.m.-5 p.m. | Bancroft Library, 2nd Floor Corridor
This exhibition celebrates the University of California's sesquicentennial anniversary with photographs, correspondence, publications, and other documentation drawn from the University Archives and The Bancroft Library collections. It features an array of golden bears, including Oski, and explores the illustrious history of UC Berkeley.
Face to Face: Looking at Objects That Look at You
Exhibit - Multimedia | March 10 – December 9, 2018 every Sunday, Wednesday, Thursday, Friday & Saturday | 11 a.m.-5 p.m. | Hearst Museum of Anthropology | Note change in date
For this Spring 2018 exhibit, entitled Face to Face: Looking at Objects that Look at You, the Hearst staff and 14 UC Berkeley freshmen have co-curated a global selection of objects that depict human faces in different ways. The exhibit asks: Why and how do crafting traditions of the world so often incorporate human faces, and how do people respond to those faces? Objects such as West African... More >
The Karaite Canon: Manuscripts and Ritual Objects from Cairo
Sponsor: Magnes Collection of Jewish Art and Life
The Karaite Canon highlights a selection from the over fifty manuscripts he brought to California, along with ritual objects belonging to Cairo's Karaite community.
Auditorium installation of high-resolution images of select collection items.
Acquired by The Magnes Collection of Jewish Art and Life in 2017 thanks to an unprecedented gift from Taube Philanthropies, the most significant collection of works by Arthur Szyk (Łódź, Poland, 1894 – New Canaan, Connecticut, 1951) is now available to the world in a public institution for the first time as... More >
Pièces de Résistance: Echoes of Judaea Capta From Ancient Coins to Modern Art
This exhibition will be continuing in Spring 2019.
Notions of resistance, alongside fears and realities of oppression, resound throughout Jewish history. As a minority, Jews express their political aspirations, ideals of heroism, and yearnings of retaliation and redemption in their rituals, art, and everyday life.
Centering on coins in The Magnes Collection, this exhibition explores how... More >
Project "Holy Land": Yaakov Benor-Kalter's Photographs of British Mandate Palestine, 1923-1940
Exhibit - Photography | August 28 – December 14, 2018 every Tuesday, Wednesday, Thursday & Friday | 11 a.m.-4:05 p.m. | Magnes Collection of Jewish Art and Life (2121 Allston Way)
For nearly two decades, Yaakov (Jacob) Benor-Kalter (1897-1969) traversed the Old City of Jerusalem, documenting renowned historical monuments, ambiguous subjects in familiar alleyways, and scores of "new Jews" building a new homeland. Benor-Kalter's photographs smoothly oscillate between two worlds, and two Holy Lands, with one lens.
After immigrating from Poland to the British Mandate of... More > | CommonCrawl |
25.07.2017 | Review | Issue 5/2017 Open Access
Actualities and Development of Heavy-Duty CNC Machine Tool Thermal Error Monitoring Technology
Zu-De Zhou, Lin Gui, Yue-Gang Tan, Ming-Yao Liu, Yi Liu, Rui-Ya Li
Supported by National Natural Science Foundation of China (Grant No. 51475343), and International Science and Technology Cooperation Program of China (Grant No. 2015DFA70340).
The study of machine tool thermal error goes back nearly a century, yet the problem remains unsolved for modern high-precision CNC machine tools. Most research on machine tool thermal error has focused on establishing the relationship between the temperature field and the thermal error of machine tools, but few of the resulting methods have proved successful in industrial application. Since there have been no new technological breakthroughs in experimental studies of thermal error, traditional electrical testing and laser measurement technologies are still commonly used. The objects of thermal error testing are usually small and medium-sized CNC machine tools; there is comparatively little research on heavy-duty CNC machine tools.
Heavy-duty CNC machine tools are pivotal pieces of equipment in many advanced manufacturing industries, such as aerospace, energy, petrochemicals, rail transport, shipbuilding, and ocean engineering. They are widely used in the machining of large parts and high-end equipment, such as steam turbines, large nuclear pumps, marine propellers, and large aircraft wings [ 1 ]. Improving the machining precision of heavy-duty CNC machine tools is of great significance to comprehensively improving the efficiency of steam turbine units, extending the life of the nuclear power shaft system, reducing the noise of submarine propulsion, reducing the resistance of flight, and so on. The machining error of heavy-duty CNC machine tools can be classified into five parts:
Geometric errors produced by machine parts' manufacturing and assembly;
Thermal-induced deformation errors caused by internal and external heat sources;
Force-induced deformation errors caused by the cutting force, clamping force, the weight of the machine tool itself, etc.;
Control errors caused by issues such as the response lag, positioning detection error of the servo system, CNC interpolation algorithm, etc.;
Tool wear and the high frequency flutter of the machine tool.
The proportion of the thermal deformation error is often the largest for high-precision CNC machine tools. In precision manufacturing, the thermal deformation error accounts for about 40%–70% of the total machining error [ 2 ]. In 1933, the influence of heat on precision part processing was noticed for the first time [ 3 ]. A number of qualitative analyses and comparative tests were carried out between the 1930s and the 1960s. It was not until the 1970s that researchers used the finite element method (FEM) for machine tool thermal deformation calculations and the optimization of machine tool design. CNC thermal error compensation technology appeared in the late 1970s. After the 1990s, thermal error compensation technology developed rapidly, and many research institutions conducted in-depth studies on temperature-measurement-based thermal error compensation for CNC machine tools [ 4 – 14 ].
As shown in Figure 1, real-time compensation of thermal error for a CNC machine tool consists of two steps. First, extensive experiments are carried out on the machine tool, collecting the CNC data, the body temperature of the machine tool, the ambient temperature, and the thermal error at the cutting tool tip, in order to establish a thermal error prediction model, typically a multiple linear regression (MLR), artificial neural network (ANN), or genetic algorithm (GA) model (shown in Figure 1(a)). Then, the established thermal error prediction model is applied on the CNC machine tool to compensate the error at the tool center point (TCP) using the real-time CNC data and temperature data (shown in Figure 1(b)).
Scheme of real-time compensation of thermal error for CNC machine tool
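The offline-fit/online-offset split above can be sketched with the simplest of the candidate models, an MLR fit via least squares. The probe count, coefficients, and calibration data below are synthetic stand-ins, not values from any cited experiment.

```python
import numpy as np

rng = np.random.default_rng(0)

# Step 1 (offline): a synthetic calibration run with 4 temperature probes
# (deg C) and the measured thermal drift of the tool tip (um).
T = rng.uniform(20.0, 45.0, size=(200, 4))
true_w = np.array([1.8, -0.6, 2.4, 0.9])   # illustrative um per deg C
true_b = -55.0                             # illustrative offset, um
drift = T @ true_w + true_b + rng.normal(0.0, 0.5, 200)

# Fit drift ~ w.T + b by least squares: this is the MLR thermal error model.
A = np.hstack([T, np.ones((200, 1))])
coef, *_ = np.linalg.lstsq(A, drift, rcond=None)
w_hat, b_hat = coef[:-1], coef[-1]

# Step 2 (online): offset the TCP by the negative of the predicted drift.
def compensation(temps):
    return -(np.dot(w_hat, temps) + b_hat)

t_now = np.array([30.0, 25.0, 38.0, 27.0])
print(compensation(t_now))
```

In practice the regressors are selected from many candidate probes and the model must be re-validated across seasons, but the two-step structure stays the same.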
Over the past few decades, the International Organization for Standardization (ISO) promulgated a series of standards: ISO 230-3 (thermal deformation of the machine tool) [ 15 ], ISO 10791-10 (thermal deformation of the machining center) [ 16 ], and ISO 13041-8 (thermal distortion of the turning center) [ 17 ]. These standards provide systematic analysis methods for machine tool thermal behavior.
Compared to small and medium-sized CNC machine tools, heavy-duty CNC machine tools have unique structural and thermal characteristics, including the following items:
Larger and heavier moving parts, like the spindle box, moving beam, and moving workbench;
Larger and more complex support structures, such as the machine tool base, column, and beam;
More decentralized internal heat sources in 3-D (3-dimensional) space;
Greater susceptibility to environmental temperature shifts.
As the temperature varies over time and the moving parts are heavy, there is a strong coupling effect between thermal and mechanical errors, making the thermal deformation mechanism more complicated and the optimization of the structural design more difficult. As heavy-duty CNC machine tools are more susceptible to environmental temperature shifts (due to the large volume, small changes of environmental temperature can cause noteworthy accumulated thermal expansion of the machine tool structure in 3-D space), the robustness of the thermal error prediction model of heavy-duty CNC machine tools is more difficult to ensure.
Monitoring technologies related to the thermal error study of heavy-duty CNC machine tools are an important foundation for research on the machine tool thermal error mechanism and for the establishment of thermal error prediction models. These monitoring technologies include temperature field monitoring technologies and thermal deformation monitoring technologies. Thermal deformation monitoring in turn consists of position error monitoring of the cutting tool tip and thermal deformation field monitoring of the large structural parts of the machine tool. Because of the unique structural and thermal characteristics of heavy-duty CNC machine tools mentioned above, thermal error monitoring differs between heavy-duty machine tools and other machine tools in three aspects:
In terms of temperature field monitoring, as heavy-duty CNC machine tools have a large volume and dispersed heat sources, more temperature measuring points are needed in order to establish an accurate temperature field distribution. Additionally, the installation positions of the temperature sensors are more difficult to determine, and the optimization of the temperature measuring points is more complex;
In terms of thermal deformation monitoring, position error monitoring of the cutting tool tip is largely similar between heavy-duty machine tools and other machine tools. For thermal deformation field monitoring of the large structural parts, however, heavy-duty CNC machine tools face greater challenges. The existing machine tool deformation detection techniques are mostly based on displacement detection instruments, which only detect the displacement of one point or a few points of the machine tool structure; the deformation is then estimated by interpolation. As the structural parts of a heavy-duty CNC machine tool are larger, more conventional displacement sensors, or displacement measurement instruments with a wide measurement range in space, are needed to reconstruct the whole thermal deformation of the structures. Additionally, as the moving parts of a heavy-duty CNC machine tool are much heavier than those of small and medium-sized CNC machine tools, the settlement deformation and vibration of the reinforced concrete foundation during operation are more serious and intractable, which directly reduces the displacement measurement accuracy;
The processing environment of heavy-duty CNC machine tools is generally worse than that of small and medium-sized CNC machine tools. Traditional electrical sensors are easily influenced by the working environment: humidity, dust, oil pollution, and electromagnetic interference all reduce their performance stability and reliability. Long-term thermal error monitoring of heavy-duty CNC machine tools therefore requires sensors with better environmental adaptability and higher reliability.
In order to solve the thermal issues of heavy-duty CNC machine tools, we need to analyze the causes of machine tool thermal error and then carry out in-depth studies on the thermal deformation mechanism based on existing theory and thermal deformation detection technology. In addition, we need to summarize the existing monitoring technologies and provide new technical support for thermal error research on heavy-duty CNC machine tools.
Currently, there are many review articles on the thermal error of CNC machine tools [ 2 , 18 – 26 ], but they mainly focus on thermal issues in small and medium-sized CNC machine tools and seldom cover thermal error monitoring technologies. This paper focuses on the thermal error of heavy-duty CNC machine tools, with emphasis on thermal error monitoring technology. First, the causes of thermal error of the heavy-duty CNC machine tool are discussed in Section 2, where heat generation in the spindle and feed system and the influence of the environmental temperature are introduced. Then, temperature monitoring technology and thermal deformation monitoring technology are reviewed in detail in Sections 3 and 4, respectively. Finally, in Section 5, the application of a new optical measurement technology, fiber Bragg grating distributed sensing, to heavy-duty CNC machine tools is discussed. This technology is an intelligent sensing and monitoring system for heavy-duty CNC machine tools and opens up new areas of research on the heavy-duty CNC machine tool thermal error.
2 Causes of Thermal Error of Heavy-Duty CNC Machine Tool
2.1 Classification of the Heat Sources
The fundamental causes of thermal error of the heavy-duty CNC machine tool are related to the internal and external heat sources.
Internal heat sources
Heat generated from friction in the spindle, ball screws, gearbox, guides, and other machine parts; Heat generated from the cutting process; Heat generated from energy loss in the motors, electric circuits, and hydraulic system; Cooling influences provided by the various cooling systems.
External heat sources
Environmental temperature variation; Thermal radiation from the sun and other light sources.
For the internal heat sources, the heat generated from the spindle and ball screws has a significant influence on heavy-duty CNC machine tools and appears frequently in the literature. The heating mechanism, thermal distribution, and thermally induced deformation are often researched by theoretical and experimental methods. For the external heat sources, the dynamic variation of the environmental temperature and its individual influence and combined effects with internal heat sources on the thermal error of heavy-duty CNC machine tools are studied.
2.2 Heat Generated in the Spindle
2.2.1 Thermal Model of the Supporting Bearing
The heat generation of a rolling bearing, namely the rolling bearing power loss N_f, is generally calculated by treating the rolling bearing as a whole. It is the product of the rolling bearing friction torque M_f and the angular velocity of the inner ring of the bearing.
$$N_{\text{f}} = M_{\text{f}} \cdot \pi \cdot n_{i} /30,$$
where n_i is the rotating speed of the spindle in Eq. ( 1).
Palmgren [ 27 , 28 ] developed an empirical formula for the rolling bearing friction torque M_f based on experimental tests:
$$M_{\text{f}} = M_{\text{l}} + M_{\text{v}} ,$$
$$M_{\text{l}} = f_{1} P_{1} D_{\text{m}} ,$$
$$M_{\text{v}} = \left\{ \begin{aligned} &10^{ - 3} f_{0} (vn_{i} )^{2/3} D_{\text{m}}^{3} ,\quad vn_{i} \ge 2 \times 10^{ - 3} , \\ &16 \times 10^{ - 6} f_{0} D_{\text{m}}^{3} ,\quad \quad vn_{i} < 2 \times 10^{ - 3} , \\ \end{aligned} \right.$$
where M_l and M_v are the load friction torque and the viscous friction torque, respectively, in Eq. ( 2); D_m is the pitch diameter of the bearing, and v is the kinematic viscosity of the lubricating oil. f_0, f_1 and P_1 are related to the bearing type in Eqs. ( 3) and ( 4). Atridage [ 29 ] modified Palmgren's equation to take the effect of the lubricating oil flow into consideration. Stein and Tu [ 30 ] modified Palmgren's equation to consider the effect of the induced thermal preload.
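Equations (1)–(4) can be packaged into a small routine. The sketch below uses the common unit convention for Palmgren's model (ν in mm²/s, n in r/min, D_m in mm, torques in N·mm, viscous-regime threshold νn = 2000); f_0, f_1 and P_1 are bearing-dependent inputs, and the sample numbers are purely illustrative.

```python
import math

def palmgren_power_loss(n_rpm, nu_cst, d_m_mm, f0, f1, P1_N):
    """Rolling-bearing friction power loss (W) per Palmgren's model.

    Units: n in r/min, nu (kinematic viscosity) in mm^2/s (cSt),
    d_m (pitch diameter) in mm; internal torques in N*mm.
    """
    # Load friction torque M_l, Eq. (3); f1 and P1 depend on bearing/load.
    M_l = f1 * P1_N * d_m_mm
    # Viscous friction torque M_v, Eq. (4), with two nu*n regimes.
    if nu_cst * n_rpm >= 2000.0:
        M_v = 1e-7 * f0 * (nu_cst * n_rpm) ** (2.0 / 3.0) * d_m_mm ** 3
    else:
        M_v = 160e-7 * f0 * d_m_mm ** 3
    M_f = M_l + M_v                          # total friction torque, N*mm
    # Power loss N_f = M_f * pi * n / 30, Eq. (1); N*mm -> N*m first.
    return (M_f * 1e-3) * math.pi * n_rpm / 30.0

# Illustrative spindle bearing: 6000 r/min, 20 cSt oil, 65 mm pitch diameter.
print(palmgren_power_loss(6000, 20.0, 65.0, f0=2.0, f1=0.0009, P1_N=1500.0))
```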
The above models calculate the heat generation of the rolling bearing as a whole; they do not address the surface friction power loss of the individual internal components of the bearing. Rumbarger, et al. [ 31 ], used an established fluid traction torque model to calculate the friction power loss of the bearing roller, cage, and inner and outer ring raceways separately, but their model ignored the differences in heating mechanism between the local heat sources. Chen, et al. [ 32 ], calculated the total heat generation of the bearing from local heat sources with different heating mechanisms. Moorthy and Raja [ 33 ] also calculated the heat generation of the local heat sources, taking into consideration the change in the diametral clearance after assembly and during operation that is attributed to the thermal expansion of the bearing parts, which influences the gyroscopic and spinning moments contributing to the heat generation. Hannon [ 34 ] detailed the existing thermal models for rolling-element bearings.
2.2.2 Thermal Distribution in the Spindle
When studying the temperature field distribution of the spindle, the cause-and-effect model and power flow model should first be analyzed to determine the heat sources and the heat transfer network. Then the heat transfer parameters should be determined, including the heat transfer coefficients of the materials, the thermal contact resistances between contact surfaces, and the heat transfer film coefficients. The heat transfer coefficients are relatively easy to obtain. The thermal contact resistances between the contact surfaces depend on the surface roughness and contact force, and are often obtained by experimental methods [ 35 ]. Heat convection within the housing is the most difficult to describe, so a rough approximation is often used for the heat transfer film coefficient [ 28 ].
$$h_{\text{v}} = 0.0332kPr^{1/3} \left( \frac{u_{\text{s}} }{v_{0} x} \right)^{1/2} ,$$
where u_s is the bearing cage surface velocity, x is the bearing pitch diameter, v_0 is the kinematic viscosity, and Pr is the Prandtl number of the oil in Eq. ( 5). Note that for different heat convection objects, the heat transfer film coefficient has different expressions.
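Eq. (5) transcribes directly into code. The oil properties below are rough textbook-order values in SI units, not data from the cited spindle studies.

```python
def film_coefficient(k, Pr, u_s, nu0, x):
    """h_v = 0.0332 * k * Pr^(1/3) * (u_s / (nu0 * x))^(1/2), Eq. (5).

    SI units: k in W/(m K), u_s (cage surface velocity) in m/s,
    nu0 (kinematic viscosity) in m^2/s, x (pitch diameter) in m.
    """
    return 0.0332 * k * Pr ** (1.0 / 3.0) * (u_s / (nu0 * x)) ** 0.5

# Rough lubricating-oil values: k ~ 0.14 W/(m K), Pr ~ 300, nu0 ~ 2e-5 m^2/s.
print(film_coefficient(k=0.14, Pr=300.0, u_s=10.0, nu0=2e-5, x=0.065))
```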
Bossmanns and Tu [ 36 , 37 ] illustrated the detailed causes and effects of the spindle variables (shown in Figure 2), presented the power flow model (shown in Figure 3), and developed a finite difference thermal model to characterize the power distribution of a high speed motorized spindle. The heat transfer coefficients, thermal contact resistances, and heat convection coefficients were all calculated in their analytical method in detail. More research about the thermal resistance network of the bearing assembly can be found in Refs. [ 38 , 39 ].
Cause-and-effect model for spindle [ 36 ]
Finite element spindle model marked with the heat sources and sinks [ 37 ]
As the real structure of a spindle box is complicated, the finite difference method (FDM) and FEM are often preferred to obtain accurate results. Jedrzejewski, et al. [ 40 ], set up a thermal analysis model of a high precision CNC machining center spindle box using a combination of the FEM and the FDM. Refs. [ 41 , 42 ] created an axially symmetric model for a single shaft system with one pair of bearings using the FEM to estimate the temperature distribution of the whole spindle system.
2.3 Heat Generated in the Feed Screw Nuts
The thermal deformation of the feed screw nuts affects the linear positioning error of heavy-duty machine tools. The axial thermal error grows as the runtime of the feed system increases. However, after running for a period of time, the feed system approaches thermal balance and reaches an approximately steady state, and the variation of the thermal error eases. The radial thermal expansion of the feed screw nuts is so minor that it may be ignored [ 43 ]. In the ball screw system, the heat generation sources are the nut and the two bearings, and the heat loss sources are liquid cooling and surface convection (shown in Figure 4) [ 44 , 45 ]. The thermal balance equation can be expressed by
$$Q_{\text{b}1} + Q_{\text{b}2} + Q_{\text{n}} - Q_{\text{sc}} - Q_{\text{c}} = \rho cV\frac{\partial T}{\partial t},$$
where Q_b1 and Q_b2 are the conduction heat from the two support bearings, Q_n is the conduction heat from the nut, Q_sc is the convection heat lost due to the rotation of the ball screw shaft, and Q_c is the convection heat removed by the cooling liquid. ρ is the material density, c is the specific heat, V is the volume, T is the temperature, and t is the time.
Schematic diagram of the screw nuts thermal model
The conduction heat from the two support bearings, Q_b1 and Q_b2, can be calculated as shown in Section 2.2. The conduction heat from the nut, Q_n, can be defined as below:
$$Q_{\text{n}} = 0.12\pi f_{{ 0 {\text{n}}}} v_{{ 0 {\text{n}}}} n_{\text{n}} M_{\text{n}} ,$$
where f_0n is a factor related to the nut type and method of lubrication, v_0n is the kinematic viscosity of the lubricant, n_n is the screw rotation velocity, and M_n is the total frictional torque of the nut (preload and dynamic load) [ 44 ].
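A lumped-parameter view of Eq. (6) already reproduces the approach to thermal balance described above: integrating the heat balance forward in time drives the shaft temperature to a steady state. All magnitudes below are illustrative, and the two loss terms Q_sc and Q_c are folded into a single linear loss h_loss·(T − T_amb), an assumption of this sketch.

```python
# Forward-Euler integration of the lumped heat balance, Eq. (6):
#   rho*c*V * dT/dt = Q_b1 + Q_b2 + Q_n - Q_sc - Q_c
rho, c, V = 7850.0, 460.0, 2.0e-3       # steel: kg/m^3, J/(kg K); volume m^3
Q_b1, Q_b2, Q_n = 40.0, 40.0, 120.0     # illustrative heat inputs, W
h_loss = 12.0                            # combined convection/cooling, W/K
T_amb, T = 20.0, 20.0                    # degC; start at ambient
dt = 1.0                                 # time step, s
for _ in range(36000):                   # 10 h of operation
    Q_out = h_loss * (T - T_amb)         # stands in for Q_sc + Q_c
    T += dt * (Q_b1 + Q_b2 + Q_n - Q_out) / (rho * c * V)
print(round(T, 2))                       # -> approaches 20 + 200/12 ~ 36.67 degC
```

The steady state T_amb + ΣQ/h_loss is exactly the "thermal balance" after which the axial error variation eases; the time constant ρcV/h_loss (here about 10 minutes) sets how fast it is reached.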
Mayr, et al. [ 46 ], established an equivalent thermal network model of the ball screw with an analytical method. Xu, et al. [ 44 , 47 ], found that in the case of a large stroke, the heat produced by the moving nut is dispersed over a larger area than in other cases, so the screw cooling method gives better deformation performance than the nut cooling method; conversely, in the case of a small stroke, the nut cooling method performs better than the screw cooling method. Some researchers [ 4 , 48 , 49 ] developed FEM models of the screw in which the strength of the heat sources measured by temperature sensors was applied to the FEM model to calculate the thermal errors of the feed drive system. Jin, et al. [ 50 – 52 ], presented an analytical method to calculate the heat generation rate of a ball bearing in the ball screw/nut system with respect to the rotational speed and the load applied to the feed system.
2.4 Environmental Temperature Effects
Environmental temperature fluctuation changes the temperature of a heavy-duty CNC machine tool globally and affects its machining accuracy more than that of small and medium-sized machine tools [ 1 ]. Environmental temperature fluctuations have both daily and seasonal periodicity. Tan, et al. [ 53 ], decomposed the environmental temperature fluctuations into Fourier series form (shown in Figure 5), where x represents time in minutes. The basic angular frequency is ω_0 = 2π/T_0, where T_0 = 1440 min. A_0 is the average value of the daily temperature cycle that the current temperature belongs to; it can be obtained from the temperature history through time series analysis. A_n represents the amplitude of the temperature fluctuation of each order, the orders being multiples of the basic frequency ω_0, and φ_n is the initial phase of each order. Their experiments verify that the environmental temperature has a significant impact on the thermal error of the heavy-duty CNC machine tool, and that there is a hysteresis time between the environmental temperature and the corresponding thermal deformation, which changes with climate and seasonal weather.
Time-frequency characteristics of environmental temperature [ 53 ]
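The decomposition in Figure 5 can be recovered from a sampled record with an FFT. The day of data below is synthetic (a mean plus two harmonics plus noise), so the extracted A_0, A_n, φ_n only demonstrate the procedure, not measured values from Ref. [ 53 ].

```python
import numpy as np

# 1-min sampling over one daily cycle, T0 = 1440 min.
t = np.arange(1440)
w0 = 2 * np.pi / 1440.0
temp = (22.0 + 3.0 * np.cos(w0 * t - 2.0)          # order-1 harmonic
             + 0.8 * np.cos(2 * w0 * t + 0.5)      # order-2 harmonic
             + np.random.default_rng(1).normal(0, 0.05, 1440))

X = np.fft.rfft(temp) / len(temp)
A0 = X[0].real                 # daily mean A_0
A = 2 * np.abs(X[1:4])         # amplitudes A_n of orders n = 1..3
phi = np.angle(X[1:4])         # initial phases phi_n

print(round(A0, 2), np.round(A, 2))
```

The recovered order-3 amplitude is near zero, which is how, on real records, the significant harmonics are separated from measurement noise.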
Zhang, et al. [ 54 ], established the thermal error transfer function of each part of the machine tool based on the heat transfer mechanism. Then, based on the assembly dimension chain principle, the thermal error transfer function of the whole machine tool was obtained. As the thermal error transfer function can be deduced using the Laplace transform, the thermal error characteristics of the machine tool can be studied with both time-domain and frequency-domain methods. Taking the environmental temperature fluctuations as input to the thermal error transfer function, the environmental-temperature-induced thermal error can be obtained.
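The frequency-domain view pays off immediately: if a structural path is approximated by a first-order transfer function G(s) = K/(τs + 1) (an assumption of this sketch, with made-up K and τ, not the transfer functions of Ref. [ 54 ]), a daily sinusoidal ambient input produces an attenuated, delayed deformation, matching the hysteresis reported by Tan, et al. The predicted delay is arctan(ω_0 τ)/ω_0.

```python
import math

# Discretized first-order lag y' = (K*u - y)/tau driven by a daily sine.
K, tau = 10.0, 7200.0            # um/K gain and 2 h time constant (illustrative)
dt, T0 = 60.0, 86400.0           # 1-min step, 24 h period
y, peak_t, peak_y = 0.0, 0.0, -1e9
for k in range(3 * 86400 // 60): # simulate 3 days, examine the last one
    t = k * dt
    u = math.sin(2 * math.pi * t / T0)   # ambient deviation, K
    y += dt * (K * u - y) / tau
    if t >= 2 * T0 and y > peak_y:       # peak of the day-3 response
        peak_y, peak_t = y, t
lag_min = (peak_t - 2 * T0 - T0 / 4) / 60.0  # delay behind the input peak
print(round(lag_min, 1), round(peak_y, 2))
```

With these constants the response lags the ambient peak by roughly arctan(0.52)/ω_0 ≈ 110 min and is attenuated to K/√(1 + (ω_0 τ)²); on a real machine both quantities drift with season, which is why the hysteresis time changes with climate.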
2.5 Thermal Analysis of the Global Machine Tool
The heat generated in the spindle and feed screw nuts is discussed in Sections 2.2 and 2.3. The global thermal deformation of heavy-duty machine tools is influenced by a variety of heat sources, as noted at the beginning of Section 2, and the FEM is widely utilized for thermal analysis of the global machine tool. Mian, et al. [ 55 ], presented a novel offline technique using finite element analysis (FEA) to simulate the effects of the major internal heat sources, such as bearings, motors, and belt drives, and the effects of the ambient temperature variation during the machine's operation. For this FEA model, the thermal boundary conditions were tested using 71 temperature sensors, and experiments were conducted to obtain the thermal contact conductance values to ensure the accuracy of the results. Mian, et al. [ 56 ], further studied the influence of the ambient temperature variation on the deformation of the machine tool using the FEM. The validation work was carried out over a period of more than a year to establish robustness to seasonal and daily changes, in order to improve the accuracy of the thermal simulation of machine tools. Zhang, et al. [ 57 ], proposed a whole-machine temperature field and thermal deformation modeling and simulation method for vertical machining centers. Mayr, et al. [ 58 – 60 ], combined the advantages of the FDM and FEA to simulate the thermo-mechanical behavior of machine tools (shown in Figure 6).
Schematic of the FDEM [ 58 ]
The transient 3-D temperature distributions at discrete points in time during the simulated period were calculated using the FDM and then used as the temperature field input for the FEM to calculate the thermally induced deformations.
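A 1-D toy version of this FDM-to-deformation chain (not the cited FDEM implementation; geometry and loads are invented) makes the two stages concrete: explicit finite differences march the column temperature forward in time, and the thermal elongation is the integral of α·ΔT along the length, the quantity an FEM step would otherwise deliver.

```python
import numpy as np

L, n = 2.0, 41                         # column length (m), grid points
dx = L / (n - 1)
kappa = 1.2e-5                         # thermal diffusivity of steel, m^2/s
alpha = 1.2e-5                         # thermal expansion coefficient, 1/K
dt = 0.4 * dx * dx / kappa             # stable explicit FDM time step
T = np.full(n, 20.0)                   # start at ambient, degC
for _ in range(5000):                  # march until (near) steady state
    T[0], T[-1] = 45.0, 20.0           # heated end / ambient end
    T[1:-1] += kappa * dt / dx**2 * (T[2:] - 2.0 * T[1:-1] + T[:-2])

dT = T - 20.0                          # temperature rise field
# Trapezoidal integral of alpha*dT(x) -> elongation, reported in um.
elong_um = alpha * dx * (0.5 * (dT[0] + dT[-1]) + dT[1:-1].sum()) * 1e6
print(round(elong_um, 1))              # -> ~300 um for the linear steady profile
```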
3 Temperature Field Monitoring Technology for Heavy-Duty CNC Machine Tools
The formation process of thermal errors in heavy-duty CNC machine tools follows the chain: heat sources→temperature field→thermal deformation field→thermal error. Obviously, the relationship between the thermal deformation field and the thermal error is more direct than the relationship between the temperature field and the thermal error. However, it is quite difficult to measure the micro thermal deformation of the whole machine structure directly, whereas the surface temperature of the machine tool is much easier to obtain. Existing thermal error prediction models are therefore mostly based on temperature measurements on the surface of the machine tool, establishing the relationship between the thermal drift of the cutting tool tip and the temperatures at critical measuring points. Temperature monitoring of the machine tool is thus a key technology in thermal error research on CNC machine tools. It can be divided into contact-type temperature measurement and non-contact temperature measurement, according to the installation form of the temperature sensor.
3.1 Contact-Type Temperature Measurement of Heavy-Duty CNC Machine Tools
The contact-type surface temperature sensors used in the temperature monitoring of CNC machine tools are mainly thermocouples and platinum resistance temperature detectors (RTDs). Their installation forms can be divided into paste-type, pad-type, and screw-type. Thermocouples and platinum RTDs are mostly used for discrete surface temperature measurement, while heavy-duty CNC machine tools have a large volume and decentralized internal heat sources.
Delbressine, et al. [ 61 ], realized that it is difficult to determine the locations and quantities of the temperature sensors, so numerous temperature sensors should be arranged on the surface of the machine tool. Mian, et al. [ 55 ], used 65 temperature sensors to measure the detailed temperature gradients caused by the internal heat sources and applied them to the FEM. Zhang, et al. [ 57 ], used 32 platinum resistance temperature sensors to establish a machine tool temperature field. In 2014, Tan, et al. [ 53 ], installed 33 temperature sensors on the heavy-duty gantry-type machine tool XK2650 (shown in Figure 7) and established a thermal error prediction model that considers the influence of the environmental temperature; the model can predict 85% of the thermal error and has good robustness. In addition, Refs. [ 62 – 68 ] used thermal resistances to measure the machine tool surface temperature, and Refs. [ 69 , 70 ] used thermocouples. Werschmoeller and Li [ 71 ] embedded 10 mini flaky thermocouples into the cutting tool to monitor the cutting tool temperature field. Liu, et al. [ 72 ], embedded thermocouples into the workpiece to investigate the workpiece temperature variations resulting from helical milling.
Temperature monitoring of the heavy-duty gantry type machine tool XK2650 [ 53 ]
These electrical temperature-sensing technologies mainly utilize the linear relationship between temperature and the potential, resistance, or other electrical parameters of the sensing materials. They have a simple structure, fast response, high sensitivity, and good stability, so they play an important role in experimental thermal error research on heavy-duty CNC machine tools. However, electrical temperature measurement sensors share some common flaws, including the following:
Poor environmental adaptability
Many parts of the heavy-duty CNC machine tools are in environments exposed to oil, metal cutting chip dust, and coolant. The wires and sensitive components of electrical temperature sensors are all made from metal materials, which are susceptible to corrosion and damage, and have a short working life in a relatively harsh environment.
Weak ability to resist electromagnetic interference
Heavy-duty CNC machine tools have many inductive elements, such as motors and the electric control cabinet, which form strong time-varying electromagnetic fields. The testing signals of electrical temperature sensors are easily disturbed by these electromagnetic fields during transmission, which reduces the signal-to-noise ratio (SNR), accuracy, and reliability of the test data.
Wide variety of signal transmission wires
The principle of electrical temperature sensors is that of a closed electrical circuit. A single electrical temperature sensor has two conductor wires, and multiple electrical sensors cannot be connected in series. If there are N electrical sensors, there are 2N wires, so routing such a large number of wires in a heavy-duty CNC machine tool is difficult.
The testing results of the above electrical temperature sensors give the temperature at discrete points on the heavy-duty CNC machine tool's surface. The whole temperature field can then be reconstructed using the FDM. Because only a few discrete temperature points are used, it is difficult to establish an accurate integral temperature field of a heavy-duty CNC machine tool, and in particular to calculate its internal temperature. Currently, prediction models such as multiple regression or neural networks are all established based on discrete temperature points, so there is little research on the integral temperature field reconstruction of heavy-duty CNC machine tools. However, obtaining the 3-D temperature field of CNC machine tools is of great significance for the study of the thermal error mechanism.
3.2 Non-Contact Type Temperature Measurement of Heavy-Duty CNC Machine Tools
Currently, infrared thermal imaging is the non-contact temperature measurement method most often applied in thermal error studies of heavy-duty CNC machine tools; it belongs to the radiation temperature measurement methods.
A thermal infrared imager gathers infrared radiant energy and delivers it to an infrared detector through the optical system in order to produce an infrared thermal image. Using the thermal imager's results, one can select the key temperature points to establish a thermal error model. Qiu, et al. [ 73 ], measured the spindle box temperature field with a FLIR thermal imager, and symmetrically selected 18 temperature points to establish a model of the spindle thermal components using the multiple linear regression method. Infrared thermal imaging is suitable for studying the thermal characteristics of key parts of heavy-duty CNC machine tools, as it visualizes the global temperature field of the surface with a high temperature resolution.
Wu, et al. [ 74 ], researched the thermal behaviors of the support bearing (shown in Figure 8) and the screw nut of the ball screws (shown in Figure 9) using infrared thermographs. Uhlmann and Hu [ 75 ], captured the temperature field while the spindle was running at 15000 r/min for 150 min, and compared their data with simulated temperature fields (shown in Figure 10). Xu, et al. [ 47 ], examined the heat generation and conduction of the ball screws and investigated how different cooling methods affect the temperature distribution using infrared thermal imaging technology. Zhang, et al. [ 76 ], studied temperature variable optimization for precision machine tool thermal error compensation using an infrared thermometer.
Thermal imaging of the screw support bearings [ 74 ]
Thermal imaging of the screw nut [ 74 ]
Thermal imaging of the headstock temperature field [ 75 ]
The infrared thermal imager can visualize the temperature distribution of CNC machine tools, and it plays an important role in thermal error studies of CNC machine tools. However, the infrared thermal imager is a two-dimensional plane imaging system. One infrared thermal imager cannot measure the global temperature field of a heavy-duty CNC machine tool. Even with multiple expensive infrared cameras measuring the global temperature field, it is still difficult to track the temperature field of the moving parts while a heavy-duty CNC machine tool performs actual processing.
The shortcomings mentioned in Section 3.1 and Section 3.2 limit electrical temperature sensors and infrared sensing technology for long-term, real-time temperature monitoring of heavy-duty CNC machine tools. Breakthroughs in temperature field measurement are needed in order to develop a highly intelligent, commercially viable temperature measurement and thermal error compensation system suitable for heavy-duty CNC machine tools.
4 Thermal Deformation Monitoring Technology for Heavy-Duty CNC Machine Tools
4.1 Thermal Error Monitoring of the Cutting Tool Tip
For thermal error detection at the cutting tool tip, three categories of sensors are mainly used: non-contact displacement sensors, the high-precision double ball gauge, and the laser interferometer. The non-contact displacement sensors used in machine tools include eddy current transducers, capacitive transducers, and laser displacement sensors. Though their sensing principles differ, their installation and error detection methods are consistent with each other. The high-precision double ball gauge and the laser interferometer are mainly used to detect the dynamic geometric error of the machine tool, but they are also suitable for thermal error detection.
4.1.1 Five-Point Detection Method
The five-point detection method (shown in Figure 11) is only applicable to monitoring the thermal error of a machine tool when the spindle box is not moving. It detects the thermal deformation caused by the ambient temperature or by the rotation of the spindle, as specified in ISO 230-3. This method measures the three position errors δ px , δ py , and δ pz in the X, Y, and Z directions and the two angle errors ε px and ε py of the cutting tool tip rotating around the X and Y axes. Their values can be calculated by Eq. ( 8):
$$\left\{ \begin{aligned} \delta_{{\text{p}}x} &= \delta_{x1} + L \, \varepsilon_{{\text{p}}x} , \\ \delta_{{\text{p}}y} &= \delta_{y1} + L \, \varepsilon_{{\text{p}}y} , \\ \delta_{{\text{p}}z} &= \delta_{z} , \\ \varepsilon_{{\text{p}}x} &= (\delta_{y1} - \delta_{y2} )/d, \\ \varepsilon_{{\text{p}}y} &= (\delta_{x1} - \delta_{x2} )/d, \end{aligned} \right.$$
where δ x1, δ x2, δ y1, δ y2, and δ z are the displacements detected by the displacement sensors S X1, S X2, S Y1, S Y2, and S Z ; d is the distance between sensors S X1 and S X2; and L represents the effective length of the test mandrel, which is often made of steel or invar alloy to obtain higher testing precision.
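Eq. (8) can be evaluated directly from the five sensor readings. The sketch below is only an illustration of the formula as stated; the function name and the numerical readings in the example are made-up, not data from the cited experiments.

```python
def five_point_errors(dx1, dx2, dy1, dy2, dz, d, L):
    """Evaluate Eq. (8): tool-tip position errors and angle errors from the
    five displacement readings of sensors S_X1, S_X2, S_Y1, S_Y2, and S_Z."""
    eps_px = (dy1 - dy2) / d          # angle error about the X axis
    eps_py = (dx1 - dx2) / d          # angle error about the Y axis
    delta_px = dx1 + L * eps_px       # position error in X
    delta_py = dy1 + L * eps_py       # position error in Y
    delta_pz = dz                     # position error in Z
    return delta_px, delta_py, delta_pz, eps_px, eps_py

# Example with made-up readings (metres), d = 0.1 m, L = 0.2 m:
# five_point_errors(0.010, 0.006, 0.008, 0.004, 0.002, d=0.1, L=0.2)
```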
Five-point detection method
As the test mandrel is a cylinder, a shift in one direction will cause a test error in the other direction. Taking δ x1 and δ y1 as an example (shown in Figure 12), when the cutting tool tip moves from point O to point O' in the XOY plane, the real position errors are δ x1 and δ y1. However, the position errors detected by the displacement sensors are δ′ x1 and δ′ y1, respectively. The relationship between them can be expressed by Eq. ( 9):
$$\left\{ \begin{aligned} \delta_{x1}^{2} + (R - \delta^{\prime}_{y1} + \delta_{y1} )^{2} &= R^{2} , \\ \delta_{y1}^{2} + (R - \delta^{\prime}_{x1} + \delta_{x1} )^{2} &= R^{2} , \end{aligned} \right.$$
where R represents the radius of the test mandrel. The real position errors δ x1 and δ y1 can thus be expressed in terms of the data detected by the displacement sensors. It has to be noted that δ px , δ py , δ pz , ε px , and ε py interact with each other, and Eq. ( 9) does not consider ε px and ε py .
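Since Eq. (9) couples the two real offsets, one simple way to recover δ x1 and δ y1 from the readings δ′ x1 and δ′ y1 is fixed-point iteration on the two rearranged equations. The sketch below is an illustrative assumption about how the system might be solved numerically; it is not part of the cited method.

```python
import math

def mandrel_correction(dpx, dpy, R, iters=20):
    """Recover the real offsets (dx, dy) from the sensor readings (dpx, dpy)
    taken against a cylindrical test mandrel of radius R, by iterating the
    two equations of Eq. (9) rearranged for dx and dy."""
    dx, dy = dpx, dpy                                    # initial guess: raw readings
    for _ in range(iters):
        dx_new = dpx - R + math.sqrt(R * R - dy * dy)    # from the 2nd equation
        dy_new = dpy - R + math.sqrt(R * R - dx * dx)    # from the 1st equation
        dx, dy = dx_new, dy_new
    return dx, dy
```

Because the correction terms are tiny compared with R, the iteration converges in a few steps.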
Test error of five-point detection method
4.1.2 High Precision Double Ball Gauge Method
A double ball gauge consists of two precision metal spheres and a telescoping bar equipped with a grating ruler that detects displacement (shown in Figure 13). The double ball gauge method is recommended in ASME B5.54 [ 77 ] for detecting the comprehensive error of machine tools. The advantage of this method is that it can detect the tool tip trajectory error caused by both geometric error and thermal deformation. However, thermal expansion and bending deformation of the telescoping bar, or a small displacement of the stand, affect the test accuracy of this method.
Double ball gauge method
4.1.3 Laser Measurement Method
The laser interferometer utilizes the Doppler frequency shift to detect a machine tool's linear position error (shown in Figure 14) and angle error (shown in Figure 15) while moving along the guide [ 78 ]. A dual-frequency laser interferometer is a heterodyne interferometer based on the single-frequency laser interferometer. It has a large gain and a high SNR, and it is especially suitable for measuring the thermal error of heavy-duty CNC machine tools. Laser measurement methods are widely used for geometric accuracy calibration and for the heat-induced position, angle, and straightness errors of heavy-duty CNC machine tools. Ruiz, et al. [ 79 ], designed an optical measuring system based on the laser interference principle to track and locate the tool tip of the machine tool.
Tool tip's linear position error measurement
Tool tip's angle error measurement
4.2 Thermal Deformation Monitoring of Large Structural Parts of the Machine Tool
Currently, the displacement detection apparatus that detects the deformation of large structural parts of heavy-duty CNC machine tools is based on laser displacement sensors, eddy current sensors, and capacitive sensors. For instance, Gomez-Acedo, et al. [ 80 ], utilized an inductive sensor array to measure the thermal deformation of a large gantry-type machine tool (shown in Figure 16). Additionally, the laser interferometer with different accessories can measure a range of quantities, including precision position, straightness, verticality, yaw angle, parallelism, flatness, and turntable accuracy, and it plays an important role in detecting the thermal deformation of a heavy-duty CNC machine tool. However, since some of the instruments mentioned above are very large, work in demanding environments, or have a small measurement range, it is difficult to use them for long-term monitoring of heavy-duty CNC machine tools. The direct measurement method requires the installation of displacement sensors on a fixed base as a benchmark, but for CNC machine tools, especially heavy-duty CNC machine tools, it is difficult to find a large constant benchmark (any large base will incur thermal deformation or force-induced deformation, both of which affect the measurement accuracy). It is therefore difficult for the direct displacement measurement method to completely reconstruct the real-time deformation of heavy-duty CNC machine tools. Researchers are thus trying to find a more reliable and practical measuring principle and method to monitor the deformation of heavy-duty CNC machine tool structures using laser interferometers [ 81 ].
Thermal deformation detection of a heavy-duty CNC machine tool based on the inductive sensors array [ 80 ]
5 Application of FBG Sensors in Heavy-Duty CNC Machine Tools
5.1 Principle and Characteristics of Fiber Bragg Grating Sensors
A fiber Bragg grating sensor is a type of optical sensor that has been utilized and studied for nearly forty years. It has a number of unparalleled characteristics: it is small and explosion-proof, provides electrical insulation, and is immune to electromagnetic interference. It offers high precision and high reliability, and multiple FBG sensors can be arranged in one single fiber. Therefore, it has been widely used in many engineering fields and mechanical systems [ 82 ].
The sensing principle of a FBG is fundamentally based on a periodic perturbation of the refractive index along the fiber axis, formed by exposing the fiber core to the illumination of an intense ultraviolet interference pattern. When broad-band light propagates along the optical fiber to a grating, a single wavelength is reflected back while the rest of the signal is transmitted with a small attenuation (shown in Figure 17). The reflected wavelength is the Bragg wavelength, and it can be expressed by the following equation:
Distributed detection principle of the fiber Bragg grating sensors
$$\lambda_{\text{B}} = 2n_{\text{eff}} \Lambda,$$
where λ B is the Bragg wavelength, n eff is the effective refractive index of the fiber core, and Λ is the grating period. A FBG shows great sensitivity to various external perturbations, especially strain and temperature. Any change of strain or temperature will change n eff or Λ and lead to a shift of λ B. Hence, by monitoring the Bragg wavelength shift, the value of the strain or temperature is determined.
The wavelength variation response to the axial strain change Δ ε and temperature change Δ T is given by:
$$\frac{{\Delta \lambda_{\text{B}} }}{{\lambda_{\text{B}} }} = (1 - p_{\text{e}} )\Delta \varepsilon + (\alpha_{\text{f}} + \zeta )\Delta T{,}$$
where p e, α f, and ζ are, respectively, the effective photoelastic coefficient, thermal expansion coefficient, and thermal-optic coefficient of the fused silica fiber.
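The two relations above translate directly into code. In the sketch below, the default coefficient values (p_e, α_f, ζ) and the example index and period are typical textbook figures for silica fiber, used here only as illustrative assumptions, not values from the cited works.

```python
def bragg_wavelength(n_eff, period):
    """Reflected Bragg wavelength: lambda_B = 2 * n_eff * Lambda."""
    return 2.0 * n_eff * period

def wavelength_shift(lambda_b, d_strain, d_temp,
                     p_e=0.22, alpha_f=0.55e-6, zeta=6.6e-6):
    """Bragg wavelength shift for a strain change d_strain and a temperature
    change d_temp (in K), from the sensitivity relation above."""
    return lambda_b * ((1.0 - p_e) * d_strain + (alpha_f + zeta) * d_temp)

# Example: n_eff = 1.447 and a 535.6 nm grating period give a Bragg
# wavelength near 1550 nm; a 10 K temperature rise then shifts it by
# roughly 0.1 nm.
```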
In the literature, there are few applications of fiber grating sensors in the manufacturing industry, and almost none concerning machine tool temperature detection and thermal error monitoring. Detection technology based on fiber Bragg grating sensing is especially suitable for the thermal error monitoring of heavy-duty CNC machine tools. It offers a number of advantages over traditional detection technologies, as follows:
A fiber Bragg grating sensor has a small volume, light weight, and high measurement precision. It is particularly advantageous that a series of FBG sensors detecting a variety of physical parameters can be distributed along a single fiber. This suits the large volume, multiple heat sources, and complex structure of a heavy-duty CNC machine tool.
A fiber Bragg grating sensor is highly resistant to corrosion and high temperature. It is especially suitable for processing under conditions of high temperature, high humidity, excessive vibration, dust, and other harsh environments. It meets the requirements of long-term stability and reliability for machine tool detection.
A fiber Bragg grating sensor has electrical insulation, and is immune to electromagnetic interference (EMI), making it suitable for harsh processing conditions of the heavy-duty CNC machine tool. It can achieve accurate measurement of the thermal error of machine tools.
5.2 Temperature Field Monitoring of Heavy-Duty CNC Machine Tool Based on Fiber Bragg Grating Sensors
5.2.1 Fiber Bragg Grating Temperature Sensors for the Surface Temperature Measurement of Machine Tools
Fiber Bragg grating temperature measurement technology has become quite mature, but research in this field is mainly concentrated on extremely high or low temperature measurement and on temperature-sensitivity enhancement. Currently, fiber Bragg grating temperature sensors can be divided into five types by the form of packaging: tube-type [ 83 , 84 ], substrate-type [ 85 , 86 ], polymer-packaged [ 87 ], metal-coated [ 88 – 93 ], and sensitization-type [ 94 ] fiber Bragg grating temperature sensors.
In order to install the temperature sensor easily and not destroy the internal structure of the machine tool, the temperature of the machine tool surface is usually tested and used as the basic data for thermal error compensation. Measuring the surface temperature accurately in a high-gradient temperature field is a challenging technical problem. Traditional electrical methods for measuring the machine tool surface temperature rarely consider the precision of the measurement. Fiber Bragg gratings have been widely used in the field of temperature measurement, but there is little research on the error of surface temperature measurement. The machine tool surface temperature measurement error can be divided into three parts:
When the temperature sensor's surface makes contact with the machine tool, the heat flow becomes more concentrated at the testing point, resulting in the temperature measurement error Δ T 1.
The thermal contact resistance between a temperature sensor's surface and machine tool surface results in a temperature drop Δ T 2.
There is a certain distance between the temperature sensor's sensing point and the surface of the machine tool, which creates the temperature measurement error Δ T 3.
Optical fibers are mainly made of quartz and organic resin material. Their thermal conductivity is lower than that of the metal wires of thermocouples and thermal resistors. The first temperature measurement error Δ T 1 of the fiber Bragg grating is therefore significantly smaller than that of the latter two, and the main error factors are the thermal contact resistance and the distance of the temperature sensing point. A high-gradient temperature field model of the heating surface was established by the FEM [ 95 ]. In this model, when the hot surface temperature and the air temperature are 90.2 °C and 22 °C, the temperature gradient near the hot surface is −46.4 °C/mm (shown in Figure 18).
Temperature gradient distribution of surface temperature measurement [ 95 ]
Due to the coating layer on the surface of the fiber Bragg grating sensor, there is about a 0.15 mm gap between the machine tool surface and the fiber Bragg grating temperature sensing point. In a temperature gradient of −46.4 °C/mm, this small gap is sufficient to produce a large temperature test error. By using thermally conductive paste, the uniformity of the surface temperature can be improved, and the error of the surface temperature measurement by FBG can be significantly reduced compared to that of a commercial thermal resistance surface temperature sensor. Ref. [ 96 ] studied the influence of the installation type on surface temperature measurement by a FBG sensor. The surface temperature measurement errors of FBG sensors with single-ended fixation, double-ended fixation, and fully-adhered fixation were theoretically analyzed and experimentally studied. The single-ended fixation results in a positive linear error with increasing surface temperature, while the double-ended fixation and fully-adhered fixation both result in non-linear errors with increasing surface temperature that are affected by the thermal expansion strain of the tested surface's material. Due to its linear error and strain-resistant characteristics, the single-ended fixation will play an important role in the design of FBG surface temperature sensor encapsulation.
5.2.2 Temperature Measurement of the Machine Tool Spindle Bearing Based on the Fiber Bragg Grating Sensors
The spindle is a core component of the heavy-duty CNC machine tool, with a complex assembled mechanical structure. The spindle consists of the rotating shaft, the front and rear bearings, and the spindle base; a motorized spindle also includes the rotor and stator. As the structure of the spindle is very compact and narrow, it is difficult to fix temperature sensors inside it. The heat generation of the spindle's front bearings, which greatly influences the thermal error of heavy-duty CNC machine tools, is a research hotspot.
Liu, et al. [ 97 ], installed two FBG temperature sensors (positions 1 and 3) and four thermal resistors (positions 1, 2, 3, and 4) on the bearing support surface of the spindle (shown in Figure 19). With the shaft rotating freely, the temperature rise amplitudes at positions 1, 2, 3, and 4 are consistent with each other, and the temperatures measured by the FBG temperature sensors and the thermal resistors are the same. As the volume of the FBG temperature sensor is much smaller than that of commercial thermal resistors and thermocouples, it has natural advantages for measuring the internal temperature of the spindle.
Temperature field measurement of the lathe spindle [ 97 ]
Dong, et al. [ 98 ], embedded six FBGs connected in one fiber into the spindle housing. These FBGs were installed equidistantly in the circumferential direction on the outer ring surface of the front bearing (shown in Figure 20). With the shaft rotating freely or under radial force, the corresponding uniform or non-uniform temperature field of the outer ring was measured. Additionally, based on the tests, the influence of the bearing preload on the temperature rise of the bearing was studied.
FBG sensors installation locations [ 98 ]
5.2.3 Thermal Error Measurement of a Heavy-Duty CNC Machine Tool Based on the Fiber Bragg Grating Sensors
Huang, et al. [ 99 ], measured the surface temperature field of a heavy-duty machine tool using fiber Bragg grating temperature sensors (shown in Figure 21). Three fibers engraved with 27 fiber Bragg grating sensors were arranged on the bed, column, motor, spindle box, and gear box (shown in Figure 22). The temperature was monitored for 24 h, and laser displacement sensors were utilized to measure the offset of the tool cutting tip in the X, Y, and Z directions. In Figure 23, CH1-10, CH2-1, CH2-7, and CH3-6 show the air temperature changes in different parts of the environment near the machine tool; the rest show the temperature at different parts of the structure surface of the machine tool. All parts had the same change trend, but the temperature of the surrounding environment affected the different parts of the machine tool differently, and there was a large temperature gradient on the surface of the structure. Figure 24 shows the relationship between the thermal drift of the tool tip in the three directions and the variation of the environmental temperature and the surface temperature of the machine tool. The thermal drift in all three directions shifted with the environmental temperature and the machine tool surface temperature, and the thermal drift in the Y direction was the largest. When the ambient temperature shift reached about −6 °C, the error in the Y direction reached about 15 μm.
Temperature field measurement of a heavy-duty machine tool based on the FBG [ 99 ]
Locations of the FBG temperature sensors [ 99 ]
Diurnal variation of the temperature [ 99 ]
Changes of temperature and the thermal drift [ 99 ]
The fiber Bragg grating has the characteristic of multi-point temperature measurement. It allows temperature measurement points to be distributed over the large surface area of a heavy-duty CNC machine tool, which enables a more accurate reconstruction of the machine tool's temperature field.
5.3 Heavy-Duty CNC Machine Tool Thermal Deformation Monitoring Based on Fiber Bragg Grating Sensors
There have been a number of achievements in applying fiber Bragg grating strain sensors to large structural deformation measurements. By applying classical beam theory, Kim and Cho [ 100 ] rearranged the formula to estimate the continuous deflection profile using strains measured directly at several points equipped with fiber Bragg sensors. Their method can be used to measure the deflection curve of bridges, which represents the global behavior of civil structures [ 101 ]. Kang, et al. [ 102 ], investigated dynamic structural displacement estimation using the displacement–strain relationship with strain data measured by fiber Bragg gratings, and confirmed that structural displacements can be estimated from strain data without displacement measurement. Kang, et al. [ 103 ], presented an integrated monitoring scheme for maglev guideway deflection using wavelength-division-multiplexing (WDM) based fiber Bragg grating sensors, which can effectively avoid EMI in the maglev guideway. Yi, et al. [ 104 ], proposed a spatial shape reconstruction method using an orthogonal fiber Bragg grating sensor array.
Fiber Bragg grating sensing technology opens up a new area of study for the real-time thermal deformation monitoring of heavy-duty CNC machine tool structures. The earliest work was done by Bosetti, et al. [ 105 – 107 ], who put forward a kind of reticular displacement measurement system (RDMS) based on a reticular array of fiber Bragg strain sensors to realize the real-time monitoring of deformations in the structural components of the machine tools (shown in Figure 25).
Scheme of a Cartesian milling machine equipped with three RDMSs (typical column height of about 4m) [ 105 ]
For a planar and isostatic reticular structure (using the numbering conventions shown in Figure 26), the position of the ith node n i = ( x i , y i ) can be expressed as a function of the coordinates of the nodes n i−1 and n i−2 and of the lengths of the two connecting beams L 2i−3 and L 2i−4 :
$$\left\{ \begin{aligned} (x_{i} - x_{i - 1} )^{2} + (y_{i} - y_{i - 1} )^{2} &= L_{2i - 3}^{2} , \\ (x_{i} - x_{i - 2} )^{2} + (y_{i} - y_{i - 2} )^{2} &= L_{2i - 4}^{2} . \end{aligned} \right.$$
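The pair of equations above locates node n i at the intersection of two circles centred at n i−1 and n i−2 with radii equal to the measured beam lengths. A minimal sketch of that geometric step (an illustration, not the authors' actual RDMS code) is:

```python
import math

def next_node(n1, n2, L1, L2, sign=1.0):
    """Position of node n_i given the already-known nodes n1 = n_{i-1} and
    n2 = n_{i-2} and the measured beam lengths L1 = |n_i - n1| and
    L2 = |n_i - n2|; `sign` selects one of the two intersection points."""
    x1, y1 = n1
    x2, y2 = n2
    d = math.hypot(x2 - x1, y2 - y1)             # distance between known nodes
    a = (L1 * L1 - L2 * L2 + d * d) / (2.0 * d)  # along-axis offset from n1
    h = math.sqrt(max(L1 * L1 - a * a, 0.0))     # perpendicular offset
    xm = x1 + a * (x2 - x1) / d                  # foot of the perpendicular
    ym = y1 + a * (y2 - y1) / d
    return (xm + sign * h * (y2 - y1) / d,
            ym - sign * h * (x2 - x1) / d)
```

Starting from two reference nodes, applying this step node by node reconstructs the whole deformed lattice from the FBG-measured beam lengths.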
Numbering of the beams and nodes of the lattice [ 105 ]
Figure 27 shows the bending deformation of the RDMS prototype and the deformed shape reconstructed by the measurement system, which are in good agreement. In order to allow for more general and three-dimensional structures, a new algorithm was proposed [ 105 ]: the problem of calculating the nodal positions on the basis of their distances, as measured by the FBG sensors, can be reformulated as a minimization problem.
3-point bending of the RDMS prototype and deformed shape as reconstructed by the measurement system [ 105 ]
Liu, et al. [ 108 ], detected the thermal deformation of the column of a heavy-duty CNC machine tool with an integral method based on a FBG sensor array (shown in Figure 28). The strain data was gauged by multiple FBG sensors glued at specified locations on the machine tool, and then transformed into the deformation. The displacement of the machine tool spindle was also gauged for evaluation. The calculated results are consistent with the test results obtained from the laser displacement sensor (shown in Figure 29). Refs. [ 109 , 110 ] studied the deformation measurement of a heavy-duty CNC machine tool base using a fiber Bragg grating array and designed a FBG-based force transducer for measuring the anchor supporting force of the heavy-duty machine tool base. This research can be extended to the analysis of the thermal deformation mechanism and the thermal deformation measurement of heavy-duty CNC machine tools.
3D models and FBG sensor locations
Results calculated by FBG data and results measured by the displacement sensor of the column extension [ 108 ]
6 Conclusions and Outlook
The thermal error compensation technology of CNC machine tools has been developed over decades, but its successful application to commercial machine tools is limited; to some extent, it is still in the laboratory stage. Heavy-duty CNC machine tools play an important role in national economic development and national defense modernization. However, due to their more complex thermal deformation mechanism and the monitoring difficulties caused by their huge volume, overcoming their thermal error problems is extremely difficult.
Fiber Bragg grating sensing technology opens up a new area of research for thermal error monitoring of heavy-duty CNC machine tools. We need to exploit its advantages in measuring the global temperature field and the thermal deformation field of heavy-duty CNC machine tools in order to study their thermal error mechanism. This can provide technological support for the thermally optimized structural design of heavy-duty CNC machine tools. We also need to improve the thermal error prediction model, especially with regard to robustness.
Intelligent manufacturing is an important trend in manufacturing technology, and Industry 4.0 promises to create smart factories [ 111 , 112 ]. Intelligent sensing technology is one of the indispensable foundations for the realization of intelligent manufacturing. The fusion of optical fiber sensing technology and high-end manufacturing technology is an important research direction that will play an important role in Industry 4.0.
References
L Uriarte, M Zatarain, D Axinte, et al. Machine tools for large parts. CIRP Annals - Manufacturing Technology, 2013, 62(2): 731–750.
J Bryan. International status of thermal error research. CIRP Annals - Manufacturing Technology, 1990, 39(2): 645–656.
J G Yang. Present situation and prospect of error compensation technology for NC machine tool. Aeronautical Manufacturing Technology, 2012, 48(5): 40–45. (in Chinese)
C H Wu, Y T Kung. Thermal analysis for the feed drive system of a CNC machine center. International Journal of Machine Tools and Manufacture, 2003, 43(15): 1521–1528.
J H Lee, S H Yang. Statistical optimization and assessment of a thermal error model for CNC machine tools. International Journal of Machine Tools and Manufacture, 2002, 42(1): 147–155.
J S Chen, W Y Hsu. Characterizations and models for the thermal growth of a motorized high speed spindle. International Journal of Machine Tools and Manufacture, 2003, 43(11): 1163–1170.
S Yang, J Yuan, J Ni. The improvement of thermal error modeling and compensation on machine tools by CMAC neural network. International Journal of Machine Tools and Manufacture, 1996, 36(4): 527–537.
C D Mize, J C Ziegert. Neural network thermal error compensation of a machining center. Precision Engineering, 2000, 24(4): 338–346.
D S Lee, J Y Choi, D H Choi. ICA based thermal source extraction and thermal distortion compensation method for a machine tool. International Journal of Machine Tools and Manufacture, 2003, 43(6): 589–597.
H Yang, J Ni. Dynamic neural network modeling for nonlinear, nonstationary machine tool thermally induced error. International Journal of Machine Tools and Manufacture, 2005, 45(4-5): 455–465.
Y Kang, C W Chang, Y Huang, et al. Modification of a neural network utilizing hybrid filters for the compensation of thermal deformation in machine tools. International Journal of Machine Tools and Manufacture, 2007, 47(2): 376–387.
H Wu, H T Zhang, Q J Guo, et al. Thermal error optimization modeling and real-time compensation on a CNC turning center. Journal of Materials Processing Technology, 2008, 207(1-3): 172–179.
Q J Guo, J G Yang, H Wu. Application of ACO-BPN to thermal error modeling of NC machine tool. The International Journal of Advanced Manufacturing Technology, 2010, 50(5): 667–675.
Y Zhang, J G Yang, H Jiang. Machine tool thermal error modeling and prediction by grey neural network. The International Journal of Advanced Manufacturing Technology, 2012, 59(9): 1065–1072.
Zurück zum Zitat International Organization for Standardization Technical Committees. ISO 230-3-2007 Test code for machine tools–Part 3: Determination of thermal effects. Geneva: International Organization for Standardization, 2007. International Organization for Standardization Technical Committees. ISO 230-3-2007 Test code for machine tools–Part 3: Determination of thermal effects. Geneva: International Organization for Standardization, 2007.
Zurück zum Zitat International Organization for Standardization Technical Committees. ISO 10791-10-2007 Test conditions for machining centres–Part 10: Evaluation of thermal distortion. Geneva: International Organization for Standardization, 2007. International Organization for Standardization Technical Committees. ISO 10791-10-2007 Test conditions for machining centres–Part 10: Evaluation of thermal distortion. Geneva: International Organization for Standardization, 2007.
Zurück zum Zitat International Organization for Standardization Technical Committees. ISO 13041-8-2004 Test conditions for numerically controlled turning machines and turning centres - Part 8: Evaluation of thermal distortions. Geneva: International Organization for Standardization, 2004. International Organization for Standardization Technical Committees. ISO 13041-8-2004 Test conditions for numerically controlled turning machines and turning centres - Part 8: Evaluation of thermal distortions. Geneva: International Organization for Standardization, 2004.
Zurück zum Zitat M Weck, P Mckeown, R Bonse, et al. Reduction and Compensation of Thermal Errors in Machine Tools. CIRP Annals - Manufacturing Technology, 1995, 44(2): 589–598. M Weck, P Mckeown, R Bonse, et al. Reduction and Compensation of Thermal Errors in Machine Tools. CIRP Annals - Manufacturing Technology, 1995, 44(2): 589–598.
Zurück zum Zitat R Ramesh, M A Mannan, A N Poo. Error compensation in machine tools - a review: Part II: thermal errors. International Journal of Machine Tools and Manufacture, 2000, 40(9): 1257–1284. R Ramesh, M A Mannan, A N Poo. Error compensation in machine tools - a review: Part II: thermal errors. International Journal of Machine Tools and Manufacture, 2000, 40(9): 1257–1284.
Zurück zum Zitat R Ramesh, M A Mannan, A N Poo. Thermal error measurement and modelling in machine tools.: Part I. Influence of varying operating conditions. International Journal of Machine Tools and Manufacture, 2003, 43(4): 391–404. R Ramesh, M A Mannan, A N Poo. Thermal error measurement and modelling in machine tools.: Part I. Influence of varying operating conditions. International Journal of Machine Tools and Manufacture, 2003, 43(4): 391–404.
Zurück zum Zitat R Ramesh, M A Mannan, A N Poo, et al. Thermal error measurement and modelling in machine tools. Part II. Hybrid bayesian network-support vector machine model. International Journal of Machine Tools and Manufacture, 2003, 43(4): 405–419. R Ramesh, M A Mannan, A N Poo, et al. Thermal error measurement and modelling in machine tools. Part II. Hybrid bayesian network-support vector machine model. International Journal of Machine Tools and Manufacture, 2003, 43(4): 405–419.
Zurück zum Zitat J W Li, W J Zhang, G S Yang, et al. Thermal-error modeling for complex physical systems: the-state-of-arts review. The international Journal of Advanced Manufacturing Technology, 2009, 42(1): 168–179. J W Li, W J Zhang, G S Yang, et al. Thermal-error modeling for complex physical systems: the-state-of-arts review. The international Journal of Advanced Manufacturing Technology, 2009, 42(1): 168–179.
Zurück zum Zitat J Z Fu, X Y Yao, Y He, et al. Development of thermal error compensation technology for NC machine tool. Aeronautical Manufacturing Technology, 2010 (4): 64–66. (in Chinese) J Z Fu, X Y Yao, Y He, et al. Development of thermal error compensation technology for NC machine tool. Aeronautical Manufacturing Technology, 2010 (4): 64–66. (in Chinese)
Zurück zum Zitat J Mayr, J Jedrzejewski, E Uhlmann, et al. Thermal issues in machine tools. CIRP Annals - Manufacturing Technology, 2012, 61(2): 771–791. J Mayr, J Jedrzejewski, E Uhlmann, et al. Thermal issues in machine tools. CIRP Annals - Manufacturing Technology, 2012, 61(2): 771–791.
Zurück zum Zitat Y Li, W H Zhao, S H Lan, et al. A review on spindle thermal error compensation in machine Tools. International Journal of Machine Tools and Manufacture, 2015, 95: 20–38. Y Li, W H Zhao, S H Lan, et al. A review on spindle thermal error compensation in machine Tools. International Journal of Machine Tools and Manufacture, 2015, 95: 20–38.
Zurück zum Zitat H T Wang, T M Li, L P Wang, et al. Review on thermal error modeling of machine tools. Journal of Mechanical Engineering, 2015, 51(9): 119–128. (in Chinese) H T Wang, T M Li, L P Wang, et al. Review on thermal error modeling of machine tools. Journal of Mechanical Engineering, 2015, 51(9): 119–128. (in Chinese)
Zurück zum Zitat A Palmgren, B Ruley. Ball and roller bearing engineering. Philadelphia: SKF Industries, Inc.,1945. A Palmgren, B Ruley. Ball and roller bearing engineering. Philadelphia: SKF Industries, Inc.,1945.
Zurück zum Zitat T A Harris. Rolling bearing analysis. 4th edition. New York: Wiley, 2001. T A Harris. Rolling bearing analysis. 4th edition. New York: Wiley, 2001.
Zurück zum Zitat Z Q Liu, Y H Zhang, H Su. Thermal analysis of high speed rolling bearing. Lubrication and Sealing, 1998, 4: 66–68. (in Chineses) Z Q Liu, Y H Zhang, H Su. Thermal analysis of high speed rolling bearing. Lubrication and Sealing, 1998, 4: 66–68. (in Chineses)
Zurück zum Zitat J L Stein, J F Tu. A State-space model for monitoring thermally induced preload in anti-friction spindle bearings of high-speed machine tools. Journal of Dynamic Systems Measurement and Control, 1994, 116(3): 372–386. J L Stein, J F Tu. A State-space model for monitoring thermally induced preload in anti-friction spindle bearings of high-speed machine tools. Journal of Dynamic Systems Measurement and Control, 1994, 116(3): 372–386.
Zurück zum Zitat J H Rumbarger, E G Filetti, D Gubernick, et al. Gas turbine engine main shaft roller bearing system analysis. Journal of Lubrication Technology, 1973, 95(4): 401–416. J H Rumbarger, E G Filetti, D Gubernick, et al. Gas turbine engine main shaft roller bearing system analysis. Journal of Lubrication Technology, 1973, 95(4): 401–416.
Zurück zum Zitat G C Chen, L Q Wang, L Gu, et al. Heating analysis of the high speed ball bearing, Journal of Aerospace Power, 2007, 22(1): 163–168. (in Chinese) G C Chen, L Q Wang, L Gu, et al. Heating analysis of the high speed ball bearing, Journal of Aerospace Power, 2007, 22(1): 163–168. (in Chinese)
Zurück zum Zitat R S Moorthy, V P Raja. An improved analytical model for prediction of heat generation in angular contact ball bearing. Arabian Journal for Science and Engineering, 2014, 39(11): 8111–8119. R S Moorthy, V P Raja. An improved analytical model for prediction of heat generation in angular contact ball bearing. Arabian Journal for Science and Engineering, 2014, 39(11): 8111–8119.
Zurück zum Zitat W M Hannon. Rolling-element bearing heat transfer - part I.: Analytic model. Journal of Tribology, 2015, 137(3): 031102. W M Hannon. Rolling-element bearing heat transfer - part I.: Analytic model. Journal of Tribology, 2015, 137(3): 031102.
Zurück zum Zitat F P Incroper, D P Dewitt, T L Bergman, et al. Fundamentals of heat and mass transfer. 6th ed. Beijing: Chemical Industry Press, 2011. (in Chinese) F P Incroper, D P Dewitt, T L Bergman, et al. Fundamentals of heat and mass transfer. 6th ed. Beijing: Chemical Industry Press, 2011. (in Chinese)
Zurück zum Zitat B Bossmanns, J F Tu. A thermal model for high speed motorized spindles. International Journal of Machine Tools and Manufacture, 1999, 39(9): 1345–1366. B Bossmanns, J F Tu. A thermal model for high speed motorized spindles. International Journal of Machine Tools and Manufacture, 1999, 39(9): 1345–1366.
Zurück zum Zitat B Bossmanns, J F Tu. A power flow model for high speed motorized spindles - heat generation characterization. Journal of Manufacturing Science and Engineering, 2001,123(3): 494–505. B Bossmanns, J F Tu. A power flow model for high speed motorized spindles - heat generation characterization. Journal of Manufacturing Science and Engineering, 2001,123(3): 494–505.
Zurück zum Zitat T Holkup, H Cao, P Kolář, et al. Thermo-mechanical model of spindles. CIRP Annals - Manufacturing Technology, 2010, 59(1): 365–368. T Holkup, H Cao, P Kolář, et al. Thermo-mechanical model of spindles. CIRP Annals - Manufacturing Technology, 2010, 59(1): 365–368.
Zurück zum Zitat J Takabi, M M Khonsari. Experimental testing and thermal analysis of ball bearings. Tribology International, 2013, 60(7): 93–103. J Takabi, M M Khonsari. Experimental testing and thermal analysis of ball bearings. Tribology International, 2013, 60(7): 93–103.
Zurück zum Zitat J Jędrzejewski, Z Kowal, W Kwaśny, et al. High-speed precise machine tools spindle units improving. Journal of Materials Processing Technology, 2005, 162-163: 615–621. J Jędrzejewski, Z Kowal, W Kwaśny, et al. High-speed precise machine tools spindle units improving. Journal of Materials Processing Technology, 2005, 162-163: 615–621.
Zurück zum Zitat K S Kim, D W Lee, S M Lee, et al. A numerical approach to determine the frictional torque and temperature of an angular contact ball bearing in a spindle system. International Journal of Precision Engineering and Manufacturing, 2015, 16(1): 135–142. K S Kim, D W Lee, S M Lee, et al. A numerical approach to determine the frictional torque and temperature of an angular contact ball bearing in a spindle system. International Journal of Precision Engineering and Manufacturing, 2015, 16(1): 135–142.
Zurück zum Zitat Z C Du, S Y Yao, J G Yang. Thermal behavior analysis and thermal error compensation for motorized spindle of machine tools. International Journal of Precision Engineering and Manufacturing, 2015, 16(7): 1571–1581. Z C Du, S Y Yao, J G Yang. Thermal behavior analysis and thermal error compensation for motorized spindle of machine tools. International Journal of Precision Engineering and Manufacturing, 2015, 16(7): 1571–1581.
Zurück zum Zitat J Y Xia, B Wu, Y M Hu, et al. Experimental research on factors influencing thermal dynamics characteristics of feed system. Precision Engineering, 2010, 34(2): 357–368. J Y Xia, B Wu, Y M Hu, et al. Experimental research on factors influencing thermal dynamics characteristics of feed system. Precision Engineering, 2010, 34(2): 357–368.
Zurück zum Zitat Z Z Xu, X J Liu, C H Choi, et al. A study on improvement of ball screw system positioning error with liquid-cooling. International Journal of Precision Engineering and Manufacturing, 2012, 13(12): 2173–2181. Z Z Xu, X J Liu, C H Choi, et al. A study on improvement of ball screw system positioning error with liquid-cooling. International Journal of Precision Engineering and Manufacturing, 2012, 13(12): 2173–2181.
Zurück zum Zitat W S Yun, S K Kim, D W Cho. Thermal error analysis for a CNC lathe feed drive system. International Journal of Machine Tools and Manufacture, 1999, 39(7): 1087–1101 W S Yun, S K Kim, D W Cho. Thermal error analysis for a CNC lathe feed drive system. International Journal of Machine Tools and Manufacture, 1999, 39(7): 1087–1101
Zurück zum Zitat J Mayr, M Ess, S Weikert, et al. Thermal behaviour improvement of linear axis . Proceedings of 11th euspen International Conference, Como, Italy, May 23-26, 2011: 291–294. J Mayr, M Ess, S Weikert, et al. Thermal behaviour improvement of linear axis . Proceedings of 11th euspen International Conference, Como, Italy, May 23-26, 2011: 291–294.
Zurück zum Zitat Z Z Xu, X J Liu, S K Lyu. Study on positioning accuracy of nut/shaft air cooling ball screw for high-precision feed drive. International Journal of Precision Engineering and Manufacturing, 2014, 15(1): 123–128. Z Z Xu, X J Liu, S K Lyu. Study on positioning accuracy of nut/shaft air cooling ball screw for high-precision feed drive. International Journal of Precision Engineering and Manufacturing, 2014, 15(1): 123–128.
Zurück zum Zitat S K Kim, D W Cho. Real-time estimation of temperature distribution in a ball-screw system. International Journal of Machine Tools and Manufacture, 1997, 37(4): 451–464. S K Kim, D W Cho. Real-time estimation of temperature distribution in a ball-screw system. International Journal of Machine Tools and Manufacture, 1997, 37(4): 451–464.
Zurück zum Zitat M F Zaeh, T Oertli, J Milberg. Finite element modelling of ball screw feed drive systems. CIRP Annals - Manufacturing Technology, 2004, 53(2): 289–292. M F Zaeh, T Oertli, J Milberg. Finite element modelling of ball screw feed drive systems. CIRP Annals - Manufacturing Technology, 2004, 53(2): 289–292.
Zurück zum Zitat C Jin, B Wu, Y M Hu. Heat generation modeling of ball bearing based on internal load distribution. Tribology International, 2012, 45(1): 8–15. C Jin, B Wu, Y M Hu. Heat generation modeling of ball bearing based on internal load distribution. Tribology International, 2012, 45(1): 8–15.
Zurück zum Zitat C Jin, B Wu, Y M Hu, et al. Temperature distribution and thermal error prediction of a CNC feed system under varying operating conditions. Precision Engineering, 2015, 77(9–12): 1979–1992. C Jin, B Wu, Y M Hu, et al. Temperature distribution and thermal error prediction of a CNC feed system under varying operating conditions. Precision Engineering, 2015, 77(9–12): 1979–1992.
Zurück zum Zitat C Jin, B Wu, Y M Hu, et al. Thermal characteristics of a CNC feed system under varying operating conditions. Precision Engineering, 2015, 42(9-12): 151–164. C Jin, B Wu, Y M Hu, et al. Thermal characteristics of a CNC feed system under varying operating conditions. Precision Engineering, 2015, 42(9-12): 151–164.
Zurück zum Zitat B Tan, X Y Mao, H Q Liu, et al. A thermal error model for large machine tools that considers environmental thermal hysteresis effects. International Journal of Machine Tools and Manufacture, 2014. 82-83(7): 11–20. B Tan, X Y Mao, H Q Liu, et al. A thermal error model for large machine tools that considers environmental thermal hysteresis effects. International Journal of Machine Tools and Manufacture, 2014. 82-83(7): 11–20.
Zurück zum Zitat C X Zhang, F Gao, Y Li. Thermal error characteristic analysis and modeling for machine tools due to time-varying environmental temperature. Precision Engineering, 2017, 47: 231–238. C X Zhang, F Gao, Y Li. Thermal error characteristic analysis and modeling for machine tools due to time-varying environmental temperature. Precision Engineering, 2017, 47: 231–238.
Zurück zum Zitat N S Mian, S Fletcher, A P Longstaff, et al. Efficient thermal error prediction in a machine tool using finite element analysis. Measurement Science and Technology, 2011, 22(8): 085107. N S Mian, S Fletcher, A P Longstaff, et al. Efficient thermal error prediction in a machine tool using finite element analysis. Measurement Science and Technology, 2011, 22(8): 085107.
Zurück zum Zitat N S Mian, S Fletcher, A P Longstaff, et al. Efficient estimation by FEA of machine tool distortion due to environmental temperature perturbations. Precision Engineering, 2013, 37(2): 372–379. N S Mian, S Fletcher, A P Longstaff, et al. Efficient estimation by FEA of machine tool distortion due to environmental temperature perturbations. Precision Engineering, 2013, 37(2): 372–379.
Zurück zum Zitat J F Zhang, P F Feng, CHEN C, et al. A method for thermal performance modeling and simulation of machine tools. The International Journal of Advanced Manufacturing Technology, 2013, 68(5): 1517–1527. J F Zhang, P F Feng, CHEN C, et al. A method for thermal performance modeling and simulation of machine tools. The International Journal of Advanced Manufacturing Technology, 2013, 68(5): 1517–1527.
Zurück zum Zitat J Mayr, S Weikert, Wegener K, et al. Comparing the thermo-mechanical-behaviour of machine tool frame designs using a FDM-FEA simulation approach . Proceedings of the 22nd Annual ASPE Meeting, Dallas, TX, United states, October 14-19, 2007: 17–20. J Mayr, S Weikert, Wegener K, et al. Comparing the thermo-mechanical-behaviour of machine tool frame designs using a FDM-FEA simulation approach . Proceedings of the 22nd Annual ASPE Meeting, Dallas, TX, United states, October 14-19, 2007: 17–20.
Zurück zum Zitat J Mayr, M Ess, S Weikert, et al. Calculating thermal location and component errors on machine tools . Proceedings of the 24nd Annual ASPE Meeting, Monterey, CA, United states, October 4-9, 2009. J Mayr, M Ess, S Weikert, et al. Calculating thermal location and component errors on machine tools . Proceedings of the 24nd Annual ASPE Meeting, Monterey, CA, United states, October 4-9, 2009.
Zurück zum Zitat J Mayr, M Ess, S Weikert, et al. Compensation of thermal effects on machine tools using a FDEM simulation approach // 9th International Conference and Exhibition on Laser Metrology, Machine Tool, CMM and Robotic Performance, Uxbridge, United kingdom, June 30-July 2, 2009: 38–47. J Mayr, M Ess, S Weikert, et al. Compensation of thermal effects on machine tools using a FDEM simulation approach // 9th International Conference and Exhibition on Laser Metrology, Machine Tool, CMM and Robotic Performance, Uxbridge, United kingdom, June 30-July 2, 2009: 38–47.
Zurück zum Zitat F L M Delbressine, G H J Florussen, L A Schijvenaars, et al. Modelling thermomechanical behaviour of multi-axis machine tools. Precision Engineering, 2006, 30(1): 47–53. F L M Delbressine, G H J Florussen, L A Schijvenaars, et al. Modelling thermomechanical behaviour of multi-axis machine tools. Precision Engineering, 2006, 30(1): 47–53.
Zurück zum Zitat J Yang, X S Mei, B Feng, et al. Experiments and simulation of thermal behaviors of the dual-drive servo feed system. Chinese Journal of Mechanical Engineering, 2015, 28(1): 76–87. J Yang, X S Mei, B Feng, et al. Experiments and simulation of thermal behaviors of the dual-drive servo feed system. Chinese Journal of Mechanical Engineering, 2015, 28(1): 76–87.
Zurück zum Zitat C Jin, B Wu, Y M Hu. Wavelet neural network based on NARMA-L2 model for prediction of thermal characteristics in a feed system. Chinese Journal of Mechanical Engineering, 2011, 24(1): 33–41. C Jin, B Wu, Y M Hu. Wavelet neural network based on NARMA-L2 model for prediction of thermal characteristics in a feed system. Chinese Journal of Mechanical Engineering, 2011, 24(1): 33–41.
Zurück zum Zitat J Zhu, J Ni, A J Shih. Robust machine tool thermal error modeling through thermal mode concept. Journal of Manufacturing Science and Engineering, 2008, 130(6): 061006. J Zhu, J Ni, A J Shih. Robust machine tool thermal error modeling through thermal mode concept. Journal of Manufacturing Science and Engineering, 2008, 130(6): 061006.
Zurück zum Zitat F C Li, H T Wang, T M Li. Research on thermal error modeling and prediction of heavy CNC machine tools. Journal of Mechanical Engineering, 2016, 52(11): 154–160. (in Chinese) F C Li, H T Wang, T M Li. Research on thermal error modeling and prediction of heavy CNC machine tools. Journal of Mechanical Engineering, 2016, 52(11): 154–160. (in Chinese)
Zurück zum Zitat C Chen, J F Zhang, Z J Wu, et al. A real-time measurement method of temperature fields and thermal errors in machine tools // Proceeding of the 2010 International Conference on Digital Manufacturing and Automation, Changsha, China. 2010, 1: 100–103. C Chen, J F Zhang, Z J Wu, et al. A real-time measurement method of temperature fields and thermal errors in machine tools // Proceeding of the 2010 International Conference on Digital Manufacturing and Automation, Changsha, China. 2010, 1: 100–103.
Zurück zum Zitat O Horejš, M Mareš, L Novotný, et al. Advanced modeling of thermally induced displacements and its implementation into standard CNC controller of horizontal milling center. Procedia CIRP, 2012, 4: 67–72. O Horejš, M Mareš, L Novotný, et al. Advanced modeling of thermally induced displacements and its implementation into standard CNC controller of horizontal milling center. Procedia CIRP, 2012, 4: 67–72.
Zurück zum Zitat J Vyroubal. Compensation of machine tool thermal deformation in spindle axis direction based on decomposition method. Precision Engineering, 2012, 36 (1): 121–127. J Vyroubal. Compensation of machine tool thermal deformation in spindle axis direction based on decomposition method. Precision Engineering, 2012, 36 (1): 121–127.
Zurück zum Zitat H J Pahk, S W Lee. Thermal error measurement and real time compensation system for the CNC machine tools incorporating the spindle thermal error and the feed axis thermal error. The International Journal of Advanced Manufacturing Technology, 2002, 20(7): 487–494. H J Pahk, S W Lee. Thermal error measurement and real time compensation system for the CNC machine tools incorporating the spindle thermal error and the feed axis thermal error. The International Journal of Advanced Manufacturing Technology, 2002, 20(7): 487–494.
Zurück zum Zitat H Yang, J Ni. Dynamic modeling for machine tool thermal error compensation. Journal of Manufacturing Science and Engineering, 2003, 125(2): 245–254. H Yang, J Ni. Dynamic modeling for machine tool thermal error compensation. Journal of Manufacturing Science and Engineering, 2003, 125(2): 245–254.
Zurück zum Zitat D Werschmoeller, X C Li. Measurement of tool internal temperatures in the tool - chip contact region by embedded micro thin film thermocouples. Journal of Manufacturing Processes, 2011, 13(2): 147–152. D Werschmoeller, X C Li. Measurement of tool internal temperatures in the tool - chip contact region by embedded micro thin film thermocouples. Journal of Manufacturing Processes, 2011, 13(2): 147–152.
Zurück zum Zitat J Liu, G Chen, C H Ji, et al. An investigation of workpiece temperature variation of helical milling for carbon fiber reinforced plastics (CFRP). International Journal of Machine Tools and Manufacture, 2014, 86(11):89–103. J Liu, G Chen, C H Ji, et al. An investigation of workpiece temperature variation of helical milling for carbon fiber reinforced plastics (CFRP). International Journal of Machine Tools and Manufacture, 2014, 86(11):89–103.
Zurück zum Zitat J Qiu, C S Liu, Q W Liu, et al. Thermal errors of planer type NC machine tools and its improvement measures. Journal of Mechanical Engineering, 2012,48(21): 149–157. (in Chinese) J Qiu, C S Liu, Q W Liu, et al. Thermal errors of planer type NC machine tools and its improvement measures. Journal of Mechanical Engineering, 2012,48(21): 149–157. (in Chinese)
Zurück zum Zitat C W Wu, C H Tang, C F Chang, et al. Thermal error compensation method for machine center. International Journal of Advanced Manufacturing Technology, 2012, 59(5): 681–689. C W Wu, C H Tang, C F Chang, et al. Thermal error compensation method for machine center. International Journal of Advanced Manufacturing Technology, 2012, 59(5): 681–689.
Zurück zum Zitat E Uhlmann, J Hu. Thermal modelling of a high speed motor spindle. Procedia Cirp, 2012, 1: 313–318. E Uhlmann, J Hu. Thermal modelling of a high speed motor spindle. Procedia Cirp, 2012, 1: 313–318.
Zurück zum Zitat T Zhang, W H Ye, R J Liang, et al. Study on thermal behavior analysis of nut/shaft air cooling ball screw for high-precision feed drive. Chinese Journal of Mechanical Engineering, 2013, 26(1): 158–165. T Zhang, W H Ye, R J Liang, et al. Study on thermal behavior analysis of nut/shaft air cooling ball screw for high-precision feed drive. Chinese Journal of Mechanical Engineering, 2013, 26(1): 158–165.
Zurück zum Zitat American National Standards Institute. ANSI/ASME B5.54-2005 Methods for Performance Evaluation of Computer Numerically Controlled Machining Centers. Washington: American National Standards Institute, 2005. American National Standards Institute. ANSI/ASME B5.54-2005 Methods for Performance Evaluation of Computer Numerically Controlled Machining Centers. Washington: American National Standards Institute, 2005.
Zurück zum Zitat H Schwenke, W Knapp, H Haitjema, et al. Geometric error measurement and compensation of machines: an update. CIRP Annals - Manufacturing Technology, 2008, 57(2): 660–675. H Schwenke, W Knapp, H Haitjema, et al. Geometric error measurement and compensation of machines: an update. CIRP Annals - Manufacturing Technology, 2008, 57(2): 660–675.
Zurück zum Zitat A R J Ruiz, J G Rosas, F S Granja, et al. A real-time tool positioning sensor for machine-tools. Sensors, 2009, 9(10): 7622–7647. A R J Ruiz, J G Rosas, F S Granja, et al. A real-time tool positioning sensor for machine-tools. Sensors, 2009, 9(10): 7622–7647.
Zurück zum Zitat E Gomez-Acedo, A Olarra, L N L D L Calle. A method for thermal characterization and modeling of large gantry-type machine tools. The International Journal of Advanced Manufacturing Technology, 2012, 62(9): 875–886. E Gomez-Acedo, A Olarra, L N L D L Calle. A method for thermal characterization and modeling of large gantry-type machine tools. The International Journal of Advanced Manufacturing Technology, 2012, 62(9): 875–886.
Zurück zum Zitat S K Lee, J H Yoo, M S Yang. Effect of thermal deformation on machine tool slide guide motion. Tribology International, 2003, 36(1): 41–47. S K Lee, J H Yoo, M S Yang. Effect of thermal deformation on machine tool slide guide motion. Tribology International, 2003, 36(1): 41–47.
Zurück zum Zitat Z D Zhou, Y G Tan, M Y Liu, et al. Actualities and development on dynamic monitoring and diagnosis with distributed fiber Bragg Grating in mechanical systems. Journal of Mechanical Engineering, 2013, 49(19): 55–69. (in Chinese) Z D Zhou, Y G Tan, M Y Liu, et al. Actualities and development on dynamic monitoring and diagnosis with distributed fiber Bragg Grating in mechanical systems. Journal of Mechanical Engineering, 2013, 49(19): 55–69. (in Chinese)
Zurück zum Zitat H N Li, L Ren. Structural health monitoring based on fiber grating sensing technology. Beijing: China Building Industry Press, 2008. (in Chinese) H N Li, L Ren. Structural health monitoring based on fiber grating sensing technology. Beijing: China Building Industry Press, 2008. (in Chinese)
Zurück zum Zitat N Hirayama, Y Sano. Fiber Bragg grating temperature sensor for practical use. ISA Trans, 2000, 39(2): 169–173. N Hirayama, Y Sano. Fiber Bragg grating temperature sensor for practical use. ISA Trans, 2000, 39(2): 169–173.
Zurück zum Zitat D G Kim, H C Kang, J K Pan, et al. Sensitivity enhancement of a fiber Bragg grating temperature sensor combined with a bimetallic strip. Microwave and Optical Technology Letters, 2014, 56(8): 1926–1929. D G Kim, H C Kang, J K Pan, et al. Sensitivity enhancement of a fiber Bragg grating temperature sensor combined with a bimetallic strip. Microwave and Optical Technology Letters, 2014, 56(8): 1926–1929.
Zurück zum Zitat Y G Zhan. Study on high resolution optical fiber grating temperature sensor research. Chinese Journal of Lasers, 2005, 32(1): 83–86. (in Chinese) Y G Zhan. Study on high resolution optical fiber grating temperature sensor research. Chinese Journal of Lasers, 2005, 32(1): 83–86. (in Chinese)
Zurück zum Zitat W He, X D Xu, D S Jiang. High-sensitivity fiber Bragg grating temperature sensor with polymer jacket and its low-temperature characteristic. Acta Optica Sinica, 2004, 24(10): 1316–1319. (in Chinese) W He, X D Xu, D S Jiang. High-sensitivity fiber Bragg grating temperature sensor with polymer jacket and its low-temperature characteristic. Acta Optica Sinica, 2004, 24(10): 1316–1319. (in Chinese)
Zurück zum Zitat C H Lee, M K Kim, K T Kim, et al. Enhanced temperature sensitivity of fiber Bragg grating temperature sensor using thermal expansion of copper tube. Microwave and Optical Technology Letters, 2011, 53(7): 1669–1671. C H Lee, M K Kim, K T Kim, et al. Enhanced temperature sensitivity of fiber Bragg grating temperature sensor using thermal expansion of copper tube. Microwave and Optical Technology Letters, 2011, 53(7): 1669–1671.
Hypercomplex number
In mathematics, hypercomplex number is a traditional term for an element of a finite-dimensional unital algebra over the field of real numbers. The study of hypercomplex numbers in the late 19th century forms the basis of modern group representation theory.
History
In the nineteenth century number systems called quaternions, tessarines, coquaternions, biquaternions, and octonions became established concepts in mathematical literature, added to the real and complex numbers. The concept of a hypercomplex number covered them all, and called for a discipline to explain and classify them.
The cataloguing project began in 1872 when Benjamin Peirce first published his Linear Associative Algebra, and was carried forward by his son Charles Sanders Peirce.[1] Most significantly, they identified the nilpotent and the idempotent elements as useful hypercomplex numbers for classifications. The Cayley–Dickson construction used involutions to generate complex numbers, quaternions, and octonions out of the real number system. Hurwitz and Frobenius proved theorems that put limits on hypercomplexity: Hurwitz's theorem says finite-dimensional real composition algebras are the reals $\mathbb {R} $, the complexes $\mathbb {C} $, the quaternions $\mathbb {H} $, and the octonions $\mathbb {O} $, and the Frobenius theorem says the only real associative division algebras are $\mathbb {R} $, $\mathbb {C} $, and $\mathbb {H} $. In 1958 J. Frank Adams published a further generalization in terms of Hopf invariants on H-spaces which still limits the dimension to 1, 2, 4, or 8.[2]
It was matrix algebra that harnessed the hypercomplex systems. First, matrices contributed new hypercomplex numbers like 2 × 2 real matrices (see Split-quaternion). Soon the matrix paradigm began to explain the others as they became represented by matrices and their operations. In 1907 Joseph Wedderburn showed that associative hypercomplex systems could be represented by square matrices, or direct product of algebras of square matrices.[3][4] From that date the preferred term for a hypercomplex system became associative algebra as seen in the title of Wedderburn's thesis at University of Edinburgh. Note however, that non-associative systems like octonions and hyperbolic quaternions represent another type of hypercomplex number.
As Hawkins[5] explains, the hypercomplex numbers are stepping stones to learning about Lie groups and group representation theory. For instance, in 1929 Emmy Noether wrote on "hypercomplex quantities and representation theory".[6] In 1973 Kantor and Solodovnikov published a textbook on hypercomplex numbers which was translated in 1989.[7][8]
Karen Parshall has written a detailed exposition of the heyday of hypercomplex numbers,[9] including the role of mathematicians including Theodor Molien[10] and Eduard Study.[11] For the transition to modern algebra, Bartel van der Waerden devotes thirty pages to hypercomplex numbers in his History of Algebra.[12]
Definition
A definition of a hypercomplex number is given by Kantor & Solodovnikov (1989) as an element of a unital, but not necessarily associative or commutative, finite-dimensional algebra over the real numbers. Elements are generated with real number coefficients $(a_{0},\dots ,a_{n})$ for a basis $\{1,i_{1},\dots ,i_{n}\}$. Where possible, it is conventional to choose the basis so that $i_{k}^{2}\in \{-1,0,+1\}$. A technical approach to hypercomplex numbers directs attention first to those of dimension two.
Two-dimensional real algebras
Theorem:[7]: 14, 15 [13][14] Up to isomorphism, there are exactly three 2-dimensional unital algebras over the reals: the ordinary complex numbers, the split-complex numbers, and the dual numbers. In particular, every 2-dimensional unital algebra over the reals is associative and commutative.
Proof: Since the algebra is 2-dimensional, we can pick a basis {1, u}. Since the algebra is closed under squaring, the non-real basis element u squares to a linear combination of 1 and u:
$u^{2}=a_{0}+a_{1}u$
for some real numbers a0 and a1.
Using the common method of completing the square by subtracting $a_{1}u$ and adding the quadratic complement $a_{1}^{2}/4$ to both sides yields
$u^{2}-a_{1}u+{\frac {1}{4}}a_{1}^{2}=a_{0}+{\frac {1}{4}}a_{1}^{2}.$
Thus $ \left(u-{\frac {1}{2}}a_{1}\right)^{2}={\tilde {u}}^{2}$ where $ {\tilde {u}}^{2}~=a_{0}+{\frac {1}{4}}a_{1}^{2}.$ The three cases depend on this real value:
• If 4a0 = −a12, the above formula yields ũ2 = 0. Hence, ũ can directly be identified with the nilpotent element $\epsilon $ of the basis $\{1,~\epsilon \}$ of the dual numbers.
• If 4a0 > −a12, the above formula yields ũ2 > 0. This leads to the split-complex numbers which have normalized basis $\{1,~j\}$ with $j^{2}=+1$. To obtain j from ũ, the latter must be divided by the positive real number $ a\mathrel {:=} {\sqrt {a_{0}+{\frac {1}{4}}a_{1}^{2}}}$ which has the same square as ũ has.
• If 4a0 < −a12, the above formula yields ũ2 < 0. This leads to the complex numbers which have normalized basis $\{1,~i\}$ with $i^{2}=-1$. To yield i from ũ, the latter has to be divided by a positive real number $ a\mathrel {:=} {\sqrt {{\frac {1}{4}}a_{1}^{2}-a_{0}}}$ which squares to the negative of ũ2.
The complex numbers are the only 2-dimensional hypercomplex algebra that is a field. Algebras such as the split-complex numbers that include non-real roots of 1 also contain idempotents $ {\frac {1}{2}}(1\pm j)$ and zero divisors $(1+j)(1-j)=0$, so such algebras cannot be division algebras. However, these properties can turn out to be very meaningful, for instance in describing the Lorentz transformations of special relativity.
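The three two-dimensional cases can be checked numerically. The following minimal sketch (the helper name `mul2` is ours, not from the text) multiplies elements $x_0 + x_1 u$ in the algebra with $u^2 = \varepsilon$, where $\varepsilon = -1, 0, +1$ selects the complex, dual, and split-complex numbers respectively:

```python
def mul2(x, y, eps):
    """Multiply x = (x0, x1) and y = (y0, y1), read as x0 + x1*u with
    u*u = eps: eps = -1 gives the complex numbers, eps = 0 the dual
    numbers, eps = +1 the split-complex numbers."""
    (x0, x1), (y0, y1) = x, y
    return (x0 * y0 + eps * x1 * y1, x0 * y1 + x1 * y0)

# zero divisors in the split-complex numbers: (1 + j)(1 - j) = 0
print(mul2((1, 1), (1, -1), +1))           # -> (0, 0)
# idempotent (1 + j)/2 squares to itself
print(mul2((0.5, 0.5), (0.5, 0.5), +1))    # -> (0.5, 0.5)
# nilpotent dual unit squares to zero
print(mul2((0, 1), (0, 1), 0))             # -> (0, 0)
```

The same function with `eps = -1` reproduces ordinary complex multiplication, illustrating why only that case yields a field.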
In a 2004 edition of Mathematics Magazine the 2-dimensional real algebras have been styled the "generalized complex numbers".[15] The idea of cross-ratio of four complex numbers can be extended to the 2-dimensional real algebras.[16]
Higher-dimensional examples (more than one non-real axis)
Clifford algebras
A Clifford algebra is the unital associative algebra generated over an underlying vector space equipped with a quadratic form. Over the real numbers this is equivalent to being able to define a symmetric scalar product, u ⋅ v = 1/2(uv + vu) that can be used to orthogonalise the quadratic form, to give a basis {e1, ..., ek} such that:
${\frac {1}{2}}\left(e_{i}e_{j}+e_{j}e_{i}\right)={\begin{cases}-1,0,+1&i=j,\\0&i\not =j.\end{cases}}$
Imposing closure under multiplication generates a multivector space spanned by a basis of 2k elements, {1, e1, e2, e3, ..., e1e2, ..., e1e2e3, ...}. These can be interpreted as the basis of a hypercomplex number system. Unlike the basis {e1, ..., ek}, the remaining basis elements need not anti-commute, depending on how many simple exchanges must be carried out to swap the two factors. So e1e2 = −e2e1, but e1(e2e3) = +(e2e3)e1.
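The sign rules just described can be made concrete with a standard bitmask encoding of basis blades (this sketch is ours, not part of the article): generator $e_i$ corresponds to bit $i$, the product blade is the XOR of the two bitmasks, and the sign counts the transpositions needed to reorder generators plus the metric signs of the contracted ones.

```python
def blade_mul(a, b, metric):
    """Product of basis blades a, b given as bitmasks over generators
    e1..ek. metric[i] is the square of e(i+1), +1 or -1.
    Returns (sign, blade_bitmask)."""
    # sign from moving the generators of b past those of a
    s, x = 0, a >> 1
    while x:
        s += bin(x & b).count("1")
        x >>= 1
    sign = -1 if s % 2 else 1
    # contract repeated generators using the metric
    common, i = a & b, 0
    while common:
        if common & 1:
            sign *= metric[i]
        common >>= 1
        i += 1
    return sign, a ^ b

print(blade_mul(0b10, 0b01, [1, 1, 1]))   # e2 e1 = -e1 e2 -> (-1, 0b11)
```

In particular `blade_mul(0b001, 0b110, m)` and `blade_mul(0b110, 0b001, m)` agree, matching the claim that $e_1(e_2e_3) = +(e_2e_3)e_1$ even though $e_1e_2 = -e_2e_1$.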
Putting aside the bases which contain an element ei such that ei2 = 0 (i.e. directions in the original space over which the quadratic form was degenerate), the remaining Clifford algebras can be identified by the label Clp,q($\mathbb {R} $), indicating that the algebra is constructed from p simple basis elements with ei2 = +1, q with ei2 = −1, and where $\mathbb {R} $ indicates that this is to be a Clifford algebra over the reals—i.e. coefficients of elements of the algebra are to be real numbers.
These algebras, called geometric algebras, form a systematic set, which turn out to be very useful in physics problems which involve rotations, phases, or spins, notably in classical and quantum mechanics, electromagnetic theory and relativity.
Examples include: the complex numbers Cl0,1($\mathbb {R} $), split-complex numbers Cl1,0($\mathbb {R} $), quaternions Cl0,2($\mathbb {R} $), split-biquaternions Cl0,3($\mathbb {R} $), split-quaternions Cl1,1($\mathbb {R} $) ≈ Cl2,0($\mathbb {R} $) (the natural algebra of two-dimensional space); Cl3,0($\mathbb {R} $) (the natural algebra of three-dimensional space, and the algebra of the Pauli matrices); and the spacetime algebra Cl1,3($\mathbb {R} $).
The elements of the algebra Clp,q($\mathbb {R} $) form an even subalgebra $\mathrm{Cl}^{[0]}_{q+1,p}(\mathbb {R} )$ of the algebra Clq+1,p($\mathbb {R} $), which can be used to parametrise rotations in the larger algebra. There is thus a close connection between complex numbers and rotations in two-dimensional space; between quaternions and rotations in three-dimensional space; between split-complex numbers and (hyperbolic) rotations (Lorentz transformations) in 1+1-dimensional space, and so on.
Whereas Cayley–Dickson and split-complex constructs with eight or more dimensions are not associative with respect to multiplication, Clifford algebras retain associativity at any number of dimensions.
In 1995 Ian R. Porteous wrote on "The recognition of subalgebras" in his book on Clifford algebras. His Proposition 11.4 summarizes the hypercomplex cases:[17]
Let A be a real associative algebra with unit element 1. Then
• 1 generates $\mathbb {R} $ (algebra of real numbers),
• any two-dimensional subalgebra generated by an element e0 of A such that e02 = −1 is isomorphic to $\mathbb {C} $ (algebra of complex numbers),
• any two-dimensional subalgebra generated by an element e0 of A such that e02 = 1 is isomorphic to $\mathbb {R} $2 (pairs of real numbers with component-wise product, isomorphic to the algebra of split-complex numbers),
• any four-dimensional subalgebra generated by a set {e0, e1} of mutually anti-commuting elements of A such that $e_{0}^{2}=e_{1}^{2}=-1$ is isomorphic to $\mathbb {H} $ (algebra of quaternions),
• any four-dimensional subalgebra generated by a set {e0, e1} of mutually anti-commuting elements of A such that $e_{0}^{2}=e_{1}^{2}=1$ is isomorphic to M2($\mathbb {R} $) (2 × 2 real matrices, coquaternions),
• any eight-dimensional subalgebra generated by a set {e0, e1, e2} of mutually anti-commuting elements of A such that $e_{0}^{2}=e_{1}^{2}=e_{2}^{2}=-1$ is isomorphic to 2$\mathbb {H} $ (split-biquaternions),
• any eight-dimensional subalgebra generated by a set {e0, e1, e2} of mutually anti-commuting elements of A such that $e_{0}^{2}=e_{1}^{2}=e_{2}^{2}=1$ is isomorphic to M2($\mathbb {C} $) (2 × 2 complex matrices, biquaternions, Pauli algebra).
For extension beyond the classical algebras, see Classification of Clifford algebras.
Cayley–Dickson construction
Further information: Cayley–Dickson construction
All of the Clifford algebras Clp,q($\mathbb {R} $) apart from the real numbers, complex numbers and the quaternions contain non-real elements that square to +1; and so cannot be division algebras. A different approach to extending the complex numbers is taken by the Cayley–Dickson construction. This generates number systems of dimension 2n, n = 2, 3, 4, ..., with bases $\left\{1,i_{1},\dots ,i_{2^{n}-1}\right\}$, where all the non-real basis elements anti-commute and satisfy $i_{m}^{2}=-1$. In 8 or more dimensions (n ≥ 3) these algebras are non-associative. In 16 or more dimensions (n ≥ 4) these algebras also have zero-divisors.
The first algebras in this sequence are the four-dimensional quaternions, eight-dimensional octonions, and 16-dimensional sedenions. An algebraic symmetry is lost with each increase in dimensionality: quaternion multiplication is not commutative, octonion multiplication is non-associative, and the norm of sedenions is not multiplicative.
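The doubling can be sketched directly from its defining formula $(a,b)(c,d) = (ac - \bar d b,\; da + b\bar c)$ on nested pairs of reals (the helper names below are ours). A brute-force search over the octonion basis then exhibits the loss of associativity described above:

```python
def conj(x):
    return (conj(x[0]), neg(x[1])) if isinstance(x, tuple) else x

def neg(x):
    return (neg(x[0]), neg(x[1])) if isinstance(x, tuple) else -x

def add(x, y):
    return (add(x[0], y[0]), add(x[1], y[1])) if isinstance(x, tuple) else x + y

def mul(x, y):
    if not isinstance(x, tuple):
        return x * y
    a, b = x
    c, d = y
    # Cayley-Dickson doubling: (a,b)(c,d) = (ac - conj(d) b, d a + b conj(c))
    return (add(mul(a, c), neg(mul(conj(d), b))),
            add(mul(d, a), mul(b, conj(c))))

def nest(v):
    # flat coefficient list -> balanced nested pairs
    return v[0] if len(v) == 1 else (nest(v[:len(v) // 2]), nest(v[len(v) // 2:]))

def basis(n, k):
    """k-th basis element of the dimension-2**n Cayley-Dickson algebra."""
    return nest([1 if i == k else 0 for i in range(2 ** n)])
```

With `n = 2` this reproduces the quaternions (`mul(i, j) == neg(mul(j, i))`), which remain associative, while scanning all basis triples at `n = 3` finds products with $(pq)r \neq p(qr)$.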
The Cayley–Dickson construction can be modified by inserting an extra sign at some stages. It then generates the "split algebras" in the collection of composition algebras instead of the division algebras:
• split-complex numbers with basis $\{1,\,i_{1}\}$ satisfying $\ i_{1}^{2}=+1$,
• split-quaternions with basis $\{1,\,i_{1},\,i_{2},\,i_{3}\}$ satisfying $\ i_{1}^{2}=-1,\,i_{2}^{2}=i_{3}^{2}=+1$, and
• split-octonions with basis $\{1,\,i_{1},\,\dots ,\,i_{7}\}$ satisfying $\ i_{1}^{2}=i_{2}^{2}=i_{3}^{2}=-1$, $\ i_{4}^{2}=i_{5}^{2}=i_{6}^{2}=i_{7}^{2}=+1.$
Unlike the complex numbers, the split-complex numbers are not algebraically closed, and further contain nontrivial zero divisors and non-trivial idempotents. As with the quaternions, split-quaternions are not commutative, but further contain nilpotents; they are isomorphic to the square matrices of dimension two. Split-octonions are non-associative and contain nilpotents.
Tensor products
The tensor product of any two algebras is another algebra, which can be used to produce many more examples of hypercomplex number systems.
In particular taking tensor products with the complex numbers (considered as algebras over the reals) leads to four-dimensional tessarines $\mathbb {C} \otimes _{\mathbb {R} }\mathbb {C} $, eight-dimensional biquaternions $\mathbb {C} \otimes _{\mathbb {R} }\mathbb {H} $, and 16-dimensional complex octonions $\mathbb {C} \otimes _{\mathbb {R} }\mathbb {O} $.
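As a concrete instance of such a tensor product, $\mathbb {C} \otimes _{\mathbb {R} }\mathbb {C} $ can be modelled as pairs of Python complex numbers $z_1 + z_2 j$ with a commuting unit $j$, $j^2 = +1$ (a sketch under our own naming, not from the article):

```python
def tess_mul(x, y):
    """Multiply tessarines x = (z1, z2), y = (w1, w2), read as
    z1 + z2*j with j*j = +1 and j commuting with the complex unit i."""
    (z1, z2), (w1, w2) = x, y
    return (z1 * w1 + z2 * w2, z1 * w2 + z2 * w1)

# j squares to +1 ...
print(tess_mul((0j, 1 + 0j), (0j, 1 + 0j)))
# ... and (1 + j)(1 - j) = 0 exhibits zero divisors
print(tess_mul((1 + 0j, 1 + 0j), (1 + 0j, -1 + 0j)))
```

Commutativity of `tess_mul` reflects the fact that the tensor product of two commutative algebras is commutative, in contrast to the Cayley–Dickson quaternions of the same dimension.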
Further examples
• bicomplex numbers: a 4-dimensional vector space over the reals, 2-dimensional over the complex numbers, isomorphic to tessarines.
• multicomplex numbers: 2n-dimensional vector spaces over the reals, 2n−1-dimensional over the complex numbers
• composition algebra: algebra with a quadratic form that composes with the product
See also
• Sedenions
• Thomas Kirkman
• Georg Scheffers
• Richard Brauer
• Hypercomplex analysis
References
1. Peirce, Benjamin (1881), "Linear Associative Algebra", American Journal of Mathematics, 4 (1): 221–6, doi:10.2307/2369153, JSTOR 2369153
2. Adams, J. F. (July 1960), "On the Non-Existence of Elements of Hopf Invariant One" (PDF), Annals of Mathematics, 72 (1): 20–104, CiteSeerX 10.1.1.299.4490, doi:10.2307/1970147, JSTOR 1970147
3. J.H.M. Wedderburn (1908), "On Hypercomplex Numbers", Proceedings of the London Mathematical Society, 6: 77–118, doi:10.1112/plms/s2-6.1.77
4. Emil Artin later generalized Wedderburn's result so it is known as the Artin–Wedderburn theorem
5. Hawkins, Thomas (1972), "Hypercomplex numbers, Lie groups, and the creation of group representation theory", Archive for History of Exact Sciences, 8 (4): 243–287, doi:10.1007/BF00328434, S2CID 120562272
6. Noether, Emmy (1929), "Hyperkomplexe Größen und Darstellungstheorie" [Hypercomplex Quantities and the Theory of Representations], Mathematische Annalen (in German), 30: 641–92, doi:10.1007/BF01187794, S2CID 120464373, archived from the original on 2016-03-29, retrieved 2016-01-14
7. Kantor, I.L., Solodownikow (1978), Hyperkomplexe Zahlen, BSB B.G. Teubner Verlagsgesellschaft, Leipzig
8. Kantor, I. L.; Solodovnikov, A. S. (1989), Hypercomplex numbers, Berlin, New York: Springer-Verlag, ISBN 978-0-387-96980-0, MR 0996029
9. Parshall, Karen (1985), "Joseph H. M. Wedderburn and the structure theory of algebras", Archive for History of Exact Sciences, 32 (3–4): 223–349, doi:10.1007/BF00348450, S2CID 119888377
10. Molien, Theodor (1893), "Ueber Systeme höherer complexer Zahlen", Mathematische Annalen, 41 (1): 83–156, doi:10.1007/BF01443450, S2CID 122333076
11. Study, Eduard (1898), "Theorie der gemeinen und höhern komplexen Grössen", Encyclopädie der mathematischen Wissenschaften, vol. I A, pp. 147–183
12. van der Waerden, B.L. (1985), "10. The discovery of algebras, 11. Structure of algebras", A History of Algebra, Springer, ISBN 3-540-13610X
13. Yaglom, Isaak (1968), Complex Numbers in Geometry, pp. 10–14
14. Ewing, John H., ed. (1991), Numbers, Springer, p. 237, ISBN 3-540-97497-0
15. Harkin, Anthony A.; Harkin, Joseph B. (2004), "Geometry of Generalized Complex Numbers" (PDF), Mathematics Magazine, 77 (2): 118–129, doi:10.1080/0025570X.2004.11953236, S2CID 7837108
16. Brewer, Sky (2013), "Projective Cross-ratio on Hypercomplex Numbers", Advances in Applied Clifford Algebras, 23 (1): 1–14, arXiv:1203.2554, doi:10.1007/s00006-012-0335-7, S2CID 119623082
17. Porteous, Ian R. (1995), Clifford Algebras and the Classical Groups, Cambridge University Press, pp. 88–89, ISBN 0-521-55177-3
Weakly-Ionized Air Plasma Theory
For the plasma technologies considered here (aerodynamic flow control, MHD power generation, energy bypass in pulse detonation engines), the airflow is expected to be ionized artificially through electron beams, microwaves, or strong applied electric fields. Since the cost of ionizing the air molecules is rather large, only a small fraction of the gas can be ionized in order to keep the power requirements at a reasonable level; this is why the plasma can be considered weakly ionized. As with strongly-ionized plasmas, weakly-ionized plasmas require the simultaneous solution of the mass, momentum, and energy equations for the neutrals and the charged species, as well as of the Maxwell equations for the electric and magnetic fields. However, because of the low ionization fraction of weakly-ionized plasmas, the electrical conductivity is expected to be quite small and the plasma becomes collision-dominated. Under such conditions, the governing equations take on a very different formulation from those describing strongly-ionized plasmas. A brief outline of the chemical model, the charged species transport equations, the neutrals transport equations, and the electric field potential equation applicable to weakly-ionized air is given here.
The degree of ionization of the air plasma as well as its chemical composition can be predicted using a finite-rate nonequilibrium 8-species, 28-reaction model as outlined below in Table 1. The model [2] is especially suited to air plasmas ionized by electron beams. In addition to chemical reactions related to electron-beam ionization (see reactions 7a and 7b), the model also includes chemical reactions related to Townsend ionization (specifically reactions 1a and 1b).
Townsend ionization consists of an electron, accelerated by an electric field, impacting a nitrogen or oxygen molecule and releasing in the process a new electron and a positive ion. This chemical reaction is the physical phenomenon at the origin of sparks and lightning bolts, and it occurs in a weakly-ionized plasma whenever the electric field reaches very high values. It needs to be included in the chemical model when solving plasma aerodynamics in order to predict correctly the voltage drop within the cathode sheaths. Cathode sheaths are thin regions near the cathodes where the electric field is particularly high due to the current being mostly ionic.
Charged Species Transport Equations
The mass-conservation transport equations for the charged species must contain chemical source terms to account for ion and electron creation and destruction as well as other chemical reactions taking place in air: $$ \frac{\partial}{\partial t} \rho_k + \sum_{j} \frac{\partial }{\partial x_j} \rho_k \boldsymbol{V}_j^k = W_k $$ with $\rho_k$ the density of the $k$th species, $\boldsymbol{V}^k$ the velocity of the species under consideration including both drift and diffusion, and $W_k$ the chemical source terms. The chemical source terms are determined from the chemical reactions taking place in weakly-ionized air (see Table 1 above). The charged species velocity can be obtained from the momentum equation assuming negligible ion and electron inertia compared to the collision forces as follows (see Ref. [1] for details): $$ \boldsymbol{V}^{k}_i = \boldsymbol{V}^{\rm n}_i + \sum_{j=1}^3 s_k \tilde{\mu}^k_{ij} \left( \boldsymbol{E} + \boldsymbol{V}^{\rm n} \times \boldsymbol{B} \right)_j - \sum_{j=1}^3 \frac{\tilde{\mu}^{k}_{ij}}{|C_k| N_k} \frac{\partial P_k}{\partial x_j} $$ where $\boldsymbol{E}$ is the electric field, $\boldsymbol{V}^{\rm n}$ is the neutrals velocity including drift and diffusion, $N_k$ is the number density, $P_k$ is the partial pressure of species $k$, $s_k$ the sign of species $k$ (equal to +1 for the positive ions and to -1 for the electrons and negative ions), $C_k$ the charge of species $k$ (equal to $-e$ for the electrons, $+e$ for the positive ions, $-e$ for the negative ions, etc), and where $\tilde{\mu}$ is the tensor mobility equal to: $$ \tilde{\mu}^k \equiv \frac{\mu_k}{1+\mu_k^2|\boldsymbol{B}|^2} \left[\begin{array}{ccc} 1+\mu_k^2 \boldsymbol{B}_1^2 & \mu_k^2\boldsymbol{B}_1\boldsymbol{B}_2+s_k \mu_k \boldsymbol{B}_3 & \mu_k^2\boldsymbol{B}_1\boldsymbol{B}_3-s_k \mu_k \boldsymbol{B}_2 \\ \mu_k^2\boldsymbol{B}_1\boldsymbol{B}_2-s_k\mu_k\boldsymbol{B}_3 & 1+\mu_k^2\boldsymbol{B}_2^2 & \mu_k^2\boldsymbol{B}_2\boldsymbol{B}_3+s_k\mu_k\boldsymbol{B}_1 \\ \mu_k^2\boldsymbol{B}_1\boldsymbol{B}_3+s_k\mu_k\boldsymbol{B}_2 & \mu_k^2 \boldsymbol{B}_2\boldsymbol{B}_3-s_k\mu_k\boldsymbol{B}_1 & 1+\mu_k^2\boldsymbol{B}_3^2 \end{array}\right] $$ where $\mu_k$ is the mobility of species $k$ and $\boldsymbol{B}$ the magnetic field vector.
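The tensor mobility is straightforward to evaluate numerically; the sketch below (our own helper, written for this outline) fills in the nine entries and makes the unmagnetized limit easy to check, where $\tilde{\mu}^k$ must reduce to the scalar mobility $\mu_k$ times the identity:

```python
def mobility_tensor(mu, s, B):
    """3x3 tensor mobility for a species of sign s (+1 for positive ions,
    -1 for electrons and negative ions) with scalar mobility mu in a
    magnetic field B = (B1, B2, B3)."""
    B1, B2, B3 = B
    f = mu / (1.0 + mu**2 * (B1**2 + B2**2 + B3**2))
    m2 = mu**2
    return [
        [f * (1 + m2 * B1 * B1), f * (m2 * B1 * B2 + s * mu * B3), f * (m2 * B1 * B3 - s * mu * B2)],
        [f * (m2 * B1 * B2 - s * mu * B3), f * (1 + m2 * B2 * B2), f * (m2 * B2 * B3 + s * mu * B1)],
        [f * (m2 * B1 * B3 + s * mu * B2), f * (m2 * B2 * B3 - s * mu * B1), f * (1 + m2 * B3 * B3)],
    ]
```

With $\boldsymbol{B} = 0$ the off-diagonal Hall terms vanish and the tensor is diagonal; with a nonzero field the Hall part is antisymmetric, its sign set by $s_k$.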
Neutrals Mass Conservation Equation
The mass-conservation transport equations for the neutral molecules must contain chemical source terms to account for ion creation and destruction as well as other chemical reactions taking place in air: $$ \frac{\partial}{\partial t} \rho_k + \sum_{j} \frac{\partial }{\partial x_j} \rho_k \boldsymbol{V}^{\rm n}_j - \underbrace{\sum_j \frac{\partial}{\partial x_j}\left(\nu_k \frac{\partial w_k}{\partial x_j} \right)}_{\textrm{diffusion terms}} ={W_k} $$ with $\boldsymbol{V}^{\rm n}_j$ the bulk velocity of the neutrals, $w_k$ the mass fraction, and $\nu_k$ the diffusion coefficient. The diffusion terms are here limited to the diffusion of the neutral species within each other and neglect the diffusion of the neutrals within the charged species. This is an excellent approximation as long as the plasma remains weakly-ionized (i.e., the ionization fraction should remain less than $10^{-4}$ or so).
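To illustrate how the diffusion term behaves numerically, here is a minimal 1D finite-difference sketch (constant diffusion coefficient, unit density, and zero-flux walls are simplifying assumptions for illustration only); writing the term in flux form makes the discrete scheme conserve the total mass fraction exactly:

```python
import numpy as np

def diffuse_step(w, nu, dx, dt):
    """One explicit finite-difference step of the diffusion term
    d/dx (nu dw/dx) with zero-flux boundaries (illustrative sketch;
    constant nu and unit density assumed)."""
    flux = nu * np.diff(w) / dx          # fluxes at the cell faces
    dwdt = np.zeros_like(w)
    dwdt[1:-1] = np.diff(flux) / dx      # divergence of the flux in the interior
    dwdt[0] = flux[0] / dx               # zero-flux left wall
    dwdt[-1] = -flux[-1] / dx            # zero-flux right wall
    return w + dt * dwdt

# Spread an initial spike; the total mass fraction stays constant.
w = np.zeros(50)
w[25] = 1.0
for _ in range(100):
    w = diffuse_step(w, nu=1.0, dx=1.0, dt=0.2)
```

The step size obeys the explicit stability limit $\nu\,\Delta t/\Delta x^2 \le 1/2$, which also keeps the solution non-negative.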
Total Momentum Conservation Equation
The total momentum equation for the plasma is obtained by adding the momentum equations for the neutrals and the charged species: $$ \frac{\partial}{\partial t} \rho \boldsymbol{V}^{\rm n}_i + \sum_j \frac{\partial }{\partial x_j} \rho \boldsymbol{V}^{\rm n}_j \boldsymbol{V}^{\rm n}_i + \frac{\partial P}{\partial x_i} = \underbrace{\sum_j \frac{\partial \tau_{ji}}{\partial x_j}}_\textrm{viscous force} + \underbrace{\rho_{\rm c} \boldsymbol{E}_i }_{\rm EHD~force} + \underbrace{ \left(\boldsymbol{J} \times \boldsymbol{B}\right)_i}_{\rm MHD~force} $$ with $P$ the total pressure of the gas including the electron and ion partial pressures, $\tau_{ji}$ the shear stress tensor, and $\rho_{\rm c}$ and $\boldsymbol{J}$ the net charge density and current density defined as: $$ \rho_{\rm c} \equiv \sum_k N_k C_k $$ $$ \boldsymbol{J}_i\equiv \sum_k C_k N_k \boldsymbol{V}^k_i $$ Also known as the Lorentz force, the MHD force occurs as a result of the magnetic field acting on the charges in motion, and can hence only take place when a current is flowing within the gas. On the other hand, the EHD force occurs as a result of the electric field acting on a non-neutral region of the plasma. The momentum imparted to the charged particles by the MHD and EHD forces is then transferred to the bulk of the gas through collisions between the charged particles and the neutrals.
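The EHD and MHD force densities are straightforward to evaluate once the species states are known. The following Python sketch (the species data are made up purely for illustration) computes $\rho_{\rm c}$, $\boldsymbol{J}$, $\rho_{\rm c}\boldsymbol{E}$, and $\boldsymbol{J}\times\boldsymbol{B}$ for a two-species electron/ion mixture:

```python
import numpy as np

e = 1.602176634e-19  # elementary charge, C

def body_forces(charges, number_densities, velocities, E, B):
    """Net charge density, current density, and the EHD / MHD force
    densities appearing in the total momentum equation (sketch)."""
    rho_c = sum(C * N for C, N in zip(charges, number_densities))
    J = sum(C * N * np.asarray(V)
            for C, N, V in zip(charges, number_densities, velocities))
    F_EHD = rho_c * np.asarray(E)   # electric field acting on net charge
    F_MHD = np.cross(J, B)          # Lorentz force on the current
    return rho_c, J, F_EHD, F_MHD

# Two-species example: electrons and singly charged positive ions.
rho_c, J, F_EHD, F_MHD = body_forces(
    charges=[-e, +e],
    number_densities=[1.0e18, 1.2e18],        # m^-3
    velocities=[[-1e4, 0, 0], [1e2, 0, 0]],   # m/s
    E=[1e5, 0, 0],                            # V/m
    B=[0, 0, 0.5])                            # T
```

Here the ion excess makes $\rho_{\rm c}>0$, so the EHD force is aligned with $\boldsymbol{E}$, while the MHD force is perpendicular to both $\boldsymbol{J}$ and $\boldsymbol{B}$.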
Vibrational Energy Conservation Equation
A particularity of the relatively-low-temperature weakly-ionized plasmas is the nonequilibrium of the electron temperature and vibrational temperature with respect to the translational temperature of the neutrals. An example of the large degree of thermal nonequilibrium near an electrode can be seen below in Fig. 1.
Figure 1. The large degree of thermal nonequilibrium typical of cold plasmas can here be seen through the temperature, vibrational temperature, and electron temperature near an electrode as computed by Parent et al. [2].
Because the nitrogen vibrational energy relaxation distance can reach several centimeters or even meters at the low translational temperatures typical of weakly-ionized air plasmas, it is necessary to solve a separate equation accounting for the transport of the nitrogen vibrational energy [38,39]: $$ \frac{\partial}{\partial t} \rho_{\rm N_2} e_{\rm v} + \sum_j \frac{\partial }{\partial x_j} \left( \rho_{\rm N_2} \boldsymbol{V}^{\rm n}_j e_{\rm v} -e_{\rm v} \nu_{\rm N_2} \frac{\partial w_{\rm N_2}}{\partial x_j} +q^{\rm v}_j \right) = \eta_{\rm v} \underbrace{ Q_{\rm J}^{\rm e} }_{\begin{array}{l} \rm Joule\\ \rm Heating \end{array} } + {\frac{\rho_{\rm N_2}}{\tau_{\rm vt}}}\left( e_{\rm v}^0 -e_{\rm v} \right) + W_{\rm N_2} e_{\rm v} $$ where $e_{\rm v}$ is the nitrogen vibrational energy, $e_{\rm v}^0$ is the nitrogen vibrational energy that would be obtained should $T_{\rm v}=T$, $q^{\rm v}$ the vibrational energy heat flux, and $Q_{\rm J}^{\rm e}$ the Joule heating due to the electron velocity being different from the velocity of the bulk of the plasma. The fraction of the Joule heating consumed in the excitation of the vibration levels of the nitrogen molecule, $\eta_{\rm v}$, is obtained from the electron temperature as shown below in Fig. 2.
Figure 2. Fraction of energy consumed in the excitation of the vibration levels of the nitrogen molecule as a function of the electron temperature. From Ref. [36] and Ch. 21 of Ref. [37].
The nitrogen vibrational energy is hence seen to be highly dependent on the Joule heating, especially when the electron temperature is in the range 7000-30,000 K. Because weakly-ionized air plasmas often exhibit an electron temperature in that range, a large amount of the Joule heating typically gets deposited in the form of nitrogen vibrational energy. Because the relaxation time $\tau_{\rm vt}$ of the nitrogen vibrational energy is quite long in air in typical flight conditions [38,39], there is not enough time for most of the Joule heating to be transferred from the vibrational energy modes to the translational energy modes. The heating then does not result in a significant decrease of the gas density. This is a desirable feature when solving MHD generator flowfields since it can limit the negative effects of large density gradients on the generator performance. However, this may not be a desirable feature when trying to perform aerodynamic flow control through heat deposition, because the latter performs satisfactorily only if a density gradient is created by the heating process (which would occur only if the Joule heating is converted to translational energy of the neutrals).
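The vibrational-translational exchange discussed above is driven by the relaxation term $(\rho_{\rm N_2}/\tau_{\rm vt})(e_{\rm v}^0-e_{\rm v})$. The following Python sketch integrates just this term with explicit Euler (transport, Joule, and source terms are dropped, and all values are arbitrary) and recovers the expected exponential approach to equilibrium:

```python
import math

def relax(ev, ev_eq, tau, dt, steps):
    """Integrate the relaxation term d(ev)/dt = (ev_eq - ev)/tau
    with explicit Euler (illustrative: the transport, Joule, and
    chemical source terms of the full equation are dropped)."""
    for _ in range(steps):
        ev += dt * (ev_eq - ev) / tau
    return ev

# Relax from ev = 1 toward equilibrium ev_eq = 0 over t = 2*tau.
ev_num = relax(ev=1.0, ev_eq=0.0, tau=2.0, dt=1e-3, steps=4000)
ev_exact = math.exp(-4.0 / 2.0)   # analytic solution exp(-t/tau)
```

A long $\tau_{\rm vt}$ simply stretches this decay, which is exactly why most of the deposited vibrational energy has no time to thermalize over the flow residence time.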
Electron Energy Conservation Equation
Because the electron temperature is in significant non-equilibrium with the neutrals temperature ($T_{\rm e}$ is typically 10-100 times higher than $T$), it is necessary to solve an additional transport equation for the electron energy. The electron energy transport equation (as outlined in Ref. [62]) can be derived from the first law of thermodynamics applied to the electron fluid, substituting the pressure gradient from the momentum equation for the electron species shown above. We thus obtain: $$ \frac{\partial }{\partial t} \left( \rho_{\rm e} e_{\rm e} \right) + \sum_{i} \frac{\partial }{\partial x_i} \left(\rho_{\rm e} h_{\rm e} \boldsymbol{V}_i^{\rm e} \right) + \sum_{i} \frac{\partial q_i^{\rm e}}{\partial x_i} = W_{\rm e} e_{\rm e}+C_{\rm e} N_{\rm e}\boldsymbol{V}^{\rm e} \cdot \boldsymbol{E} - \frac{3 e P_{\rm e} \zeta_{\rm e}}{2 m_{\rm e} \mu_{\rm e}} - Q_{\rm ei} $$ where $\rho_{\rm e}$ is the electron density, $e_{\rm e}$ the electron translational energy, $h_{\rm e}$ the electron enthalpy, $q_i^{\rm e}$ the electron heat flux, $\zeta_{\rm e}$ the electron energy loss function, $P_{\rm e}$ the electron partial pressure, $m_{\rm e}$ the mass of one electron, $\mu_{\rm e}$ the electron mobility, $e$ the elementary charge, and $Q_{\rm ei}$ the energy the electrons lose per unit time per unit volume in creating new electrons through Townsend ionization. In the latter, the kinetic energy of the electrons does not appear, which is a consequence of the electron momentum equation not including the inertia terms, a valid assumption as long as the plasma remains weakly-ionized. It is noted that this does not necessarily entail that the change in kinetic energy of the electrons is negligible compared to the change in internal energy. In fact, the kinetic energy of the electrons is not negligible within cathode sheaths even when the plasma is weakly-ionized. But including the kinetic energy terms would not improve the accuracy of the simulation in this case.
In fact, including them would result in increased physical error: when combined with the momentum equation in which the inertia terms are neglected, the electron energy equation would not satisfy the first law of thermodynamics. Because the inertia terms in the momentum equation are neglected, the kinetic energy terms should also be neglected for the energy transport equation to satisfy the first law.
Total Energy Conservation Equation
The neutrals and ions translational temperature can be determined through the total energy transport equation, which can be derived by summing the energy equations for each species as obtained from the first law of thermodynamics and then making some simplifications applicable to a weakly-ionized plasma. The following is thus obtained: $$ \begin{array}{l}\displaystyle \frac{\partial }{\partial t}\left(\rho_{\rm N_2} e_{\rm v}+\sum_k \rho_k (e_k+h_k^\circ)+\frac{1}{2}\rho|\boldsymbol{V}^{\rm n}|^2 \right) \\ \displaystyle + \sum_{j} \frac{\partial }{\partial x_j} \left(\rho_{\rm N_2} \boldsymbol{V}_j^{\rm N_2} e_{\rm v} + \sum_{k} \rho_k \boldsymbol{V}^k_j (h_k+h_k^\circ)+\frac{1}{2}\rho \boldsymbol{V}^{\rm n}_j|\boldsymbol{V}^{\rm n}|^2 \right)\\ \displaystyle = -\sum_{i} \frac{\partial q_i}{\partial x_i} +\sum_{i} \sum_{j} \frac{\partial }{\partial x_j} \tau_{ji} \boldsymbol{V}_i^{\rm n} + \boldsymbol{E}\cdot\boldsymbol{J} + Q_{\rm b} \end{array} $$ where $Q_{\rm b}$ corresponds to the energy deposited to the gas by an external ionizer (such as electron beams, microwaves, laser beams, etc.), where $q_i$ is the total heat flux from the charged species and the neutrals, where $h_k$ is the species enthalpy and where $h_k^\circ$ is the species heat of formation. The species energy and enthalpy contain the translational, rotational, vibrational, and electronic energies at equilibrium. For all heavy species (the heavy species here refer to all ions and neutrals but do not include electrons) except nitrogen, the translational, rotational, vibrational, and electronic energies are assumed to be at equilibrium at the temperature $T$; for nitrogen, the vibrational energy is determined from a separate transport equation, as outlined previously; for the electrons, the translational energy is determined from the electron energy transport equation.
Electric Field Potential Equation
In the momentum and energy equations outlined in the previous section, the electric and magnetic fields appeared as part of the MHD force, the EHD force, or the Joule heating. The electric and magnetic fields must hence be determined simultaneously with the fluid flow equations to close the system of equations. This can be done by solving the Maxwell equations. The Maxwell equations are particularly complex to solve as they involve the solution of 3 transport equations for the magnetic field and 3 other transport equations for the electric field. However, they can be reduced to a simpler form by making some assumptions applicable to a weakly-ionized plasma. Indeed, because of the low ionization fraction of weakly-ionized plasmas, the electrical conductivity is expected to be quite small, leading in turn to a very small magnetic Reynolds number. At a small magnetic Reynolds number, the induced magnetic field can be assumed to be negligible whether or not an external magnetic field (originating from a permanent magnet or an electromagnet) is applied. Then, the partial differential equations solving for the induced magnetic fluxes do not need to be solved. We can further simplify the physical model by assuming steady-state of the electromagnetic fields with respect to the fluid flow (the so-called "electrostatic" assumption). The assumption of a steady-state for the electromagnetic fields is an excellent one for a weakly-ionized plasma even when solving unsteady fluid flows, since the flow speed and sound speed are considerably less than the electromagnetic wave speed (the speed of light). At steady-state, the curl of the electric field would be zero, and an electric field potential would exist. Then, the 3 transport equations for the electric field can be dropped in favour of one equation: the electric field potential equation.
For a quasi-neutral weakly-ionized plasma, the electric field potential equation can thus be obtained from Gauss's law as follows: $$ \sum_{j=1}^3 \frac{\partial^2 \phi}{\partial x_j^2} = -\frac{1}{\epsilon_0} \sum_k C_k N_k $$ from which the electric field can be obtained as $ \boldsymbol{E}_j = - {\partial \phi}/{\partial x_j}$.
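As a minimal illustration of the potential equation, the following Python sketch solves the 1D analogue with Jacobi iteration (grid size, units, and boundary conditions are chosen arbitrarily): with $\rho_{\rm c}/\epsilon_0=2$ and grounded ends, the exact solution is $\phi=x(1-x)$, and the field follows from $\boldsymbol{E}=-\partial\phi/\partial x$:

```python
import numpy as np

def solve_poisson_1d(rhs, dx, iters=20000):
    """Jacobi iteration for phi'' = rhs with phi = 0 at both ends
    (a minimal 1D sketch of the electric field potential equation)."""
    phi = np.zeros_like(rhs)
    for _ in range(iters):
        # Jacobi update of the interior nodes; boundaries stay grounded.
        phi[1:-1] = 0.5 * (phi[:-2] + phi[2:] - dx**2 * rhs[1:-1])
    return phi

n = 41
x = np.linspace(0.0, 1.0, n)
rhs = -2.0 * np.ones(n)            # -rho_c / eps0 = -2 (arbitrary units)
phi = solve_poisson_1d(rhs, dx=x[1] - x[0])
E = -np.gradient(phi, x)           # E = -dphi/dx
```

Because the second-order central difference is exact for quadratics, the iteration converges to the analytic parabola to machine precision.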
Limitations of the Physical Model
The physical model outlined above is hence valid both in the non-neutral sheaths and the quasi-neutral regions of weakly-ionized plasmas, and can accurately predict physical phenomena such as ambipolar diffusion, ambipolar drift, cathode sheaths, dielectric sheaths, unsteady effects in which the displacement current is significant, etc. Nonetheless, it is noted that the physical model considered herein makes several assumptions: (i) the induced magnetic field is assumed negligible, (ii) the drag force due to collisions between charged species is negligible compared to the one originating from collisions between charged species and neutrals, and (iii) the forces due to inertia change are assumed small compared to the forces due to collisions. The mathematical expressions for the latter forces as well as the justification for neglecting them when simulating weakly-ionized plasmas can be found in Ref. [1]. Finally, it is cautioned that, because the electric field is obtained from Gauss's law, the physical model outlined in this section cannot be used to tackle problems where the electric field is a significant function of a time-varying magnetic field, such as in inductively coupled plasmas or microwave-induced plasmas. In those cases, the electric field would cease to be a potential field and would need to be determined through the full or simplified Maxwell equations. More details on when Gauss's law can and cannot be used to determine the electric field can be found in Refs. [62,63].
[1] B Parent, MN Shneider, and SO Macheret, "Generalized Ohm's Law and Potential Equation in Computational Weakly-Ionized Plasmadynamics," Journal of Computational Physics, Vol. 230, No. 4, 2011, pp. 1439–1453.
[2] B Parent, SO Macheret, MN Shneider, and N Harada, "Numerical Study of an Electron-Beam-Confined Faraday Accelerator," Journal of Propulsion and Power, Vol. 23, No. 5, 2007, pp. 1023–1032.
[30] B Parent, SO Macheret, and MN Shneider, "Electron and Ion Transport Equations in Computational Weakly-Ionized Plasmadynamics," Journal of Computational Physics, Vol. 259, 2014, pp. 51–69.
[31] NL Aleksandrov, EM Bazelyan, IV Kochetov, and NA Dyatko, "The Ionization Kinetics and Electric Field in the Leader Channel in Long Air Gaps," Journal of Physics D Applied Physics, Vol. 30, 1997, pp. 1616–1624.
[32] A Kossyi, AY Kostinsky, AA Matveyev, and VP Silakov, "Kinetic Scheme of the Non-Equilibrium Discharge in Nitrogen-Oxygen Mixtures," Plasma Sources Science and Technology, Vol. 1, 1992, pp. 207–220.
[33] EM Bazelyan and YP Raizer, Spark Discharge, CRC, Boca Raton, Florida, 1997.
[34] YI Bychkov, YD Korolev, and GA Mesyats, Inzhektsionnaia Gazovaia Elektronika, Nauka, Novosibirsk, Russia, 1982, (Injection Gaseous Electronics, in Russian).
[35] OE Krivonosova, SA Losev, VP Nalivayko, YK Mukoseev, and OP Shatalov, Khimiia Plazmy [Plasma Chemistry], edited by B. M. Smirnov, Vol. 14, Energoatomizdat, Moscow, Russia, 1987, p. 3.
[36] NL Aleksandrov, FI Vysikailo, RS Islamov, IV Kochetov, AP Napartovich, and VG Pevgov, "Electron Distribution Function in 4:1 N2-O2 Mixture," High Temperature, Vol. 19, No. 1, 1981, pp. 17–21.
[37] IS Grigoriev and EZ Meilikhov, Handbook of Physical Quantities, CRC, Boca Raton, Florida, 1997.
[38] SO Macheret, MN Shneider, and RB Miles, "Electron-Beam-Generated Plasmas in Hypersonic Magnetohydrodynamic Channels," AIAA Journal, Vol. 39, No. 6, 2001, pp. 1127–1138.
[39] SO Macheret, L Martinelli, and RB Miles, "Shock Wave Propagation and Structure in Non-Uniform Gases and Plasmas," 1999, AIAA Paper 99-0598.
[62] YP Raizer, Gas Discharge Physics, Springer-Verlag, Berlin, Germany, 1991.
[63] YP Raizer, MN Shneider, and NA Yatsenko, Radio-Frequency Capacitive Discharges, CRC Press, U.S.A., 1995.
\begin{document}
\title{ Double variational principle for mean dimensions with sub-additive potentials
\footnotetext {*Corresponding author}
\footnotetext {2010 Mathematics Subject Classification: 37B40, 37C45}}
\author{ Yunping Wang, Ercai Chen*\\ \small School of Mathematical Sciences and Institute of Mathematics, Nanjing Normal University,\\ \small Nanjing 210046, Jiangsu, P.R. China\\
\small [email protected],
[email protected] }
\date{}
\maketitle{} \begin{abstract} In this paper, we introduce mean dimension quantities with sub-additive potentials. We define the mean dimension with sub-additive potentials and the metric mean dimension with sub-additive potentials, and establish a double variational principle for sub-additive potentials. \end{abstract} \noindent \textbf{ Keywords:} mean dimension, rate distortion dimension, sub-additive potentials, variational principle
\section{Introduction} \subsection{Backgrounds} A pair $(\mathcal{X}, T)$ is called a dynamical system if $\mathcal{X}$ is a compact metrizable space with metric $d$ and $T: \mathcal{X}\rightarrow \mathcal{X}$ is a homeomorphism. In classical ergodic theory, measure-theoretic entropy and topological entropy are important measures of complexity in dynamical systems. The fundamental relationship between these two quantities is the well-known variational principle.
Topological pressure is a generalization of topological entropy for a dynamical system. The concept was first introduced by Ruelle \cite{Rue} in 1973 for expansive maps acting on compact metric spaces, and he established a variational principle for the topological pressure in the same paper. In \cite{Wal}, Walters generalized these results to general continuous maps on compact metric spaces. Given a continuous map $T: \mathcal{X} \rightarrow \mathcal{X}$ on a compact metric space, the topological pressure of a continuous function $\varphi: \mathcal{X}\rightarrow \mathbb{R}$ is defined by \begin{align*} P(\varphi, T)=\lim\limits_{\epsilon\rightarrow 0} \limsup\limits_{n\rightarrow \infty} \dfrac{1}{n} \log \sup\limits_{E} \sum\limits_{x\in E}\exp \sum\limits_{i=0}^{n-1}\varphi(T^{i}x), \end{align*} with the supremum taken over all $(n, \epsilon)$-separated sets $E\subset \mathcal{X}$. We recall that a set $E\subset \mathcal{X}$ is said to be $(n, \epsilon)$-separated if for any $x, y\in E$ with $x\neq y$ there exists $k\in \left\lbrace 0, \cdots, n-1 \right\rbrace $ such that $d(T^{k}x, T^{k}y)>\epsilon$. Taking $\varphi=0$, we recover the notion of the topological entropy $h(T)$ of the map $T$ given by \begin{align*} h(T)=\lim\limits_{\epsilon \rightarrow 0}\limsup\limits_{n\rightarrow \infty} \dfrac{1}{n} \log N(n, \epsilon) \end{align*} where $N(n, \epsilon)$ denotes the maximal cardinality of an $(n,\epsilon)$-separated set. The variational principle formulated by Walters can be stated precisely as follows: \begin{align*} P(\varphi, T)=\sup\limits_{\mu}\left( h_{\mu}(T)+\int_{\mathcal{X}} \varphi \,d \mu \right), \end{align*} with the supremum taken over all $T$-invariant probability measures $\mu$ on $\mathcal{X}$, where $h_{\mu}(T)$ denotes the measure-theoretic entropy of $\mu$.
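For instance, consider the full shift $\sigma$ on $\left\lbrace 0,1\right\rbrace^{\mathbb{N}}$ with the metric $d(x,y)=2^{-\min\left\lbrace k\geq 0 :\, x_{k}\neq y_{k}\right\rbrace }$ (we include this standard example for illustration). For $\epsilon=2^{-m}$ with $m\geq 1$, two points are $(n,\epsilon)$-separated if and only if their prefixes of length $n+m-1$ differ, so $N(n, \epsilon)=2^{n+m-1}$ and \begin{align*} h(\sigma)=\lim\limits_{\epsilon \rightarrow 0}\limsup\limits_{n\rightarrow \infty} \dfrac{1}{n}\log N(n,\epsilon)=\lim\limits_{m\rightarrow \infty}\limsup\limits_{n\rightarrow \infty}\dfrac{(n+m-1)\log 2}{n}=\log 2. \end{align*}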
The theories of topological pressure, variational principles and equilibrium states play a fundamental role in statistical mechanics, ergodic theory and dynamical systems (see \cite{Bow}, \cite{Kel}, \cite{Rue1}, \cite{WP}). Since the works of Bowen \cite{Bow79} and Ruelle \cite{Rue2}, the topological pressure has become a basic tool for studying dimension in conformal dynamical systems. In 1984, Pesin and Pitskel \cite{PS32} defined the topological pressure of additive potentials for non-compact subsets of compact metric spaces and proved the variational principle under some supplementary conditions. In 1988, the sub-additive thermodynamic formalism was introduced by Falconer \cite{Fal}, who proved the variational principle for the topological pressure under certain Lipschitz conditions and a bounded distortion assumption on the sub-additive potentials. In 1996, Barreira \cite{Bar} defined the topological pressure for an arbitrary sequence of continuous functions on an arbitrary subset of a compact metric space and proved the variational principle under a strong convergence assumption on the potentials, which extended the work of Pesin and Pitskel. Cao, Feng and Huang \cite{Cao} introduced the sub-additive topological pressure via separated sets on general compact metric spaces and obtained the variational principle for sub-additive potentials without any additional assumptions on the potentials. For more research on sub-additive topological pressure, we refer to \cite{ Zhang, LB, Zhao, Yun}.
Mean dimension is a conjugacy invariant of dynamical systems which was first introduced by Gromov \cite{GRO}. In 2000, Lindenstrauss and Weiss \cite{LWE} used it to answer an open question raised by Auslander \cite{AU} of whether every minimal system $(\mathcal{X},T)$ can be embedded in $[0, 1]^{\mathbb{Z}}$. It turns out that mean dimension is the right invariant to study for the problem of the existence of an embedding into $(([0,1]^{D})^{\mathbb Z}, \sigma)$. Mean dimension can be applied to solve embedding problems in dynamical systems (see \cite{GT}, \cite{LT14}, \cite{GLT16}). The metric mean dimension was introduced in \cite{LWE}, where it was proved to be an upper bound for the mean dimension. This allowed the authors to establish the relationship between the mean dimension and the topological entropy of dynamical systems, which shows that every system with finite topological entropy has zero mean dimension. Mean dimension thus enables one to distinguish systems with infinite topological entropy. In \cite{LT18}, Lindenstrauss and Tsukamoto established new variational principles connecting the rate distortion function to the metric mean dimension, which reveals a close relation between mean dimension and rate distortion theory. This was further developed in \cite{LT}: they injected ergodic-theoretic concepts into mean dimension theory and developed a double variational principle between mean dimension and rate distortion dimension, proving that the mean dimension equals a minimax of the rate distortion dimension over two variables (metrics and measures). Recently, Tsukamoto \cite{MT} introduced a mean dimension analogue of topological pressure and proved the pressure version of the double variational principle, which extends the results of \cite{LT}. The variational principle formulated by Tsukamoto can be stated precisely as follows: \begin{theorem} \label{main2}
Let $(\mathcal{X}, T)$ be a dynamical system with the marker property and let $\varphi: \mathcal{X} \rightarrow \mathbb{R}$ be a continuous function. Then
\begin{align*}
{\rm mdim}(\mathcal{X}, T, \varphi)&=\min\limits_{d \in \mathcal{D}(\mathcal{X})} \sup\limits_{\mu \in \mathcal{M}(\mathcal{X},T)}\left( \overline{{\rm rdim}}(\mathcal{X}, T, d, \mu)+ \int_{\mathcal{X}} \varphi \,d \mu\right) \\&=\min\limits_{d \in \mathcal{D}(\mathcal{X})} \sup\limits_{\mu \in \mathcal{M}(\mathcal{X},T)}\left( \underline{{\rm rdim}}(\mathcal{X}, T, d, \mu)+ \int_{\mathcal{X}} \varphi \,d \mu\right)
\end{align*} \end{theorem} \noindent The proof of Theorem \ref{main2} is along the following steps: \begin{itemize}
\item [1.] Define the metric mean dimension with potential and prove that it bounds the rate distortion dimension plus the integral of the potential.
\item [2.] Define the mean Hausdorff dimension with potential and construct an invariant measure by Frostman's lemma \cite{How95}.
\item[3.] Prove the dynamical version of the Pontrjagin--Schnirelmann theorem \cite{PS32}: construct a metric $d$ on $\mathcal{X}$ for which the upper metric mean dimension with potential is equal to the mean dimension with potential.
\end{itemize}
In this paper, we introduce mean dimension quantities with sub-additive potentials (the mean dimension with sub-additive potentials, the metric mean dimension with sub-additive potentials, and the mean Hausdorff dimension with sub-additive potentials) and follow Tsukamoto's steps to prove a double variational principle with sub-additive potentials. We emphasize that technical difficulties arising from the sub-additive potentials need to be overcome. The paper is organized as follows. In Section \ref{mutal}, we introduce mean dimension quantities for sub-additive potentials and recall some basic properties of mutual information. In Section \ref{a}, we prove Theorem \ref{bound} and Proposition \ref{pro}. In Section \ref{b}, we give a proof of Theorem \ref{thmc}. In Section \ref{d}, we give the proof of Theorem \ref{aa}. \subsection{Statement of the main result }
\begin{defn}
A dynamical system $(\mathcal{X}, T)$ is said to have the marker property if for any $N>0$, there exists an open set $U\subset \mathcal{X}$ satisfying
$$\mathcal{X}=\bigcup\limits_{n\in \mathbb{Z}} T^{-n} U, ~~U\cap T^{-n}U=\emptyset~(\forall 1\leq n \leq N).$$ \end{defn} \begin{defn}
A sequence $\mathcal{F}=\left\lbrace \varphi_{n} \right\rbrace_{n=1}^{\infty}$ of functions on $\mathcal{X}$ is called sub-additive if each $\varphi_{n}$ is a continuous real-valued function on $\mathcal{X}$ such that
$$\varphi_{n+m}(x)\leq \varphi_{n}(x)+\varphi_{m}(T^{n}x),~\forall x\in \mathcal{X}, m,n\in \mathbb{N}.$$
\end{defn} For a $T$-invariant Borel probability measure $\mu$, define $$ \mathcal{F}_{*}(\mu)=\lim\limits_{n\rightarrow \infty} \dfrac{1}{n} \int \varphi_{n} d \mu.$$ The existence of the above limit follows from a sub-additivity argument. We call $\mathcal{F}_{*}(\mu)$ the Lyapunov exponent of $\mathcal{F}$ with respect to $\mu$; it takes values in $[-\infty, \infty)$.
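In more detail, write $a_{n}=\int \varphi_{n}\, d\mu$. The sub-additivity of $\mathcal{F}$ and the $T$-invariance of $\mu$ give \begin{align*} a_{n+m}=\int \varphi_{n+m}\, d\mu \leq \int \varphi_{n}\, d\mu + \int \varphi_{m}\circ T^{n}\, d\mu = a_{n}+a_{m}, \end{align*} so the sequence $(a_{n})_{n\geq 1}$ is sub-additive and Fekete's lemma yields \begin{align*} \lim\limits_{n\rightarrow \infty}\dfrac{a_{n}}{n}=\inf\limits_{n\geq 1}\dfrac{a_{n}}{n}\in [-\infty, \infty). \end{align*}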
Let ${\rm var}_{\epsilon}(\varphi, d)=\sup\{ |\varphi(x)-\varphi(y)| :~ d(x,y)<\epsilon\}.$ We say that $\mathcal{F}=\left\lbrace \varphi_{n} \right\rbrace_{n=1}^{\infty} $ has bounded distortion if \begin{align*} \lim\limits_{ \epsilon \rightarrow 0} \limsup\limits_{n\rightarrow \infty}\dfrac{ {\rm var}_{\epsilon}(\varphi_{n}, d_{n})}{n}=0. \end{align*}
We denote by $\mathcal{D}(\mathcal{X})$ the set of metrics on $\mathcal{X}$ and by $\mathcal{M}(\mathcal{X}, T)$ the set of $T$-invariant Borel probability measures on $\mathcal{X}$. As a main result, we obtain the following variational principle. \begin{theorem}\label{main1}
Assume that $\overline{{\rm mdim_{M}}}(\mathcal{X}, T, d)<\infty$ for all $d\in \mathcal{D}(\mathcal{X})$.
Let $\mathcal{F}=\left\lbrace \varphi_{n} \right\rbrace_{n=1}^{\infty} $ be a sub-additive potential with bounded distortion and let $(\mathcal{X}, T)$ be a dynamical system with the marker property. Suppose that there exists $K>0$ such that
$|\varphi_{n+1}(x)-\varphi_{n}(x)|\leq K$ for all $x\in \mathcal{X}$ and $n\in \mathbb{N}$. Then
\begin{align*}
{\rm{mdim}}(\mathcal{X},T, \mathcal{F})&=\min_{d \in \mathcal{D}(\mathcal{X})}\sup_{\mu \in \mathcal{M}(\mathcal{X}, T)}\left( \overline{\rm{rdim}}(\mathcal{X}, T, d, \mu)+ \mathcal{F}_{*}(\mu)\right)
\\&= \min_{d \in \mathcal{D}(\mathcal{X})}\sup_{\mu \in \mathcal{M}(\mathcal{X}, T)}\left( \underline{\rm{rdim}}(\mathcal{X}, T, d, \mu)+ \mathcal{F}_{*}(\mu)\right).
\end{align*}
\end{theorem} Theorem \ref{main1} can be obtained from the following theorems.
{\bf Step 1}: Prove that the mean Hausdorff dimension with sub-additive potentials bounds the mean dimension with sub-additive potentials, and show that the rate distortion dimension is no more than the metric mean dimension plus the Lyapunov exponent of $\mathcal{F}$. \begin{theorem}(= Theorem \ref{bound})
Let $(\mathcal{X},T)$ be a dynamical system with a metric $d$. Then
$${\rm mdim}_{H}(\mathcal{X},T, d, \mathcal{F})\leq \underline{\rm{mdim}}_{M}(\mathcal{X},T, d, \mathcal{F}).$$ If $\mathcal{F}$ has bounded distortion and there exists $K>0$ such that $|\varphi_{n+1}-\varphi_{n}|<K$ for every $n$, then
$$ {\rm mdim}(\mathcal{X},T, \mathcal{F})\leq {\rm mdim}_{H}(\mathcal{X},T, d, \mathcal{F}).$$ \end{theorem}
\begin{prop}(= Proposition \ref{pro})
Let $(\mathcal{X}, T)$ be a dynamical system with a metric $d$ and an invariant probability measure $\mu$. Let $\mathcal{F}=\left\lbrace \varphi_{n} \right\rbrace_{n=1}^{\infty} $ be a sub-additive potential such that $\mathcal{F}_{*}(\mu)\neq -\infty$. Then
\begin{align*}
&\overline{\rm{rdim}}(\mathcal{X},T, d, \mu)+\mathcal{F}_{*}(\mu)\leq \overline{\rm{mdim}}_{M}(\mathcal{X},T, d, \mathcal{F}),\\&
\underline{\rm{rdim}}(\mathcal{X},T, d, \mu)+\mathcal{F}_{*}(\mu)\leq \underline{\rm{mdim}}_{M}(\mathcal{X},T, d, \mathcal{F}).
\end{align*} \end{prop} {\bf Step 2:} Show the following results by constructing a measure through a dynamical version of Frostman's lemma. \begin{theorem}(= Theorem \ref{thmc})
Assume that $\overline{{\rm mdim_{M}}}(\mathcal{X}, T, d)<\infty$ for all $d\in \mathcal{D}(\mathcal{X})$ and that there exists $K>0$ such that $|\varphi_{n+1}-\varphi_{n}|<K$ for every $n$. Under a mild condition on $d$ (called the tame growth of covering numbers), we have
$$ {\rm mdim}_{H}(\mathcal{X}, T, d, \mathcal{F}) \leq \sup\limits_{\mu \in \mathcal{M}(\mathcal{X}, T)}(\underline{{\rm rdim}}(\mathcal{X}, T, d, \mu)+ \mathcal{F}_{*}(\mu)).$$
\end{theorem} \begin{corollary}
\begin{align*}
{\rm{mdim}}(\mathcal{X},T, \mathcal{F})&\leq \sup\limits_{\mu \in \mathcal{M}(\mathcal{X}, T)}(\underline{{\rm rdim}}(\mathcal{X}, T, d, \mu)+ \mathcal{F}_{*}(\mu)) \\&\leq \sup\limits_{\mu \in \mathcal{M}(\mathcal{X}, T)}(\overline{{\rm rdim}}(\mathcal{X}, T, d, \mu)+ \mathcal{F}_{*}(\mu))\leq\overline{\rm{mdim}}_{M}(\mathcal{X},T, d, \mathcal{F})
\end{align*}
\end{corollary} {\bf Step 3}: Construct a metric so that the metric mean dimension is equal to the mean dimension. \begin{theorem}($\subset$ Theorem \ref{le})
Let $(\mathcal{X}, T)$ be a dynamical system with a sub-additive potential $\mathcal{F}=\left\lbrace \varphi_{n}\right\rbrace_{n=1}^{\infty}$. Suppose $(\mathcal{X}, T)$ has the marker property and there exists $K>0$ such that $|\varphi_{n+1}-\varphi_{n}|<K$ for every $n$. Then there exists $d\in \mathcal{D}(\mathcal{X})$ such that
$${ \overline{{\rm mdim_{M}}}}(\mathcal{X}, T, d, \mathcal{F} )={\rm mdim}(\mathcal{X}, T, \mathcal{F}).$$ \end{theorem}
\section{Preliminaries}\label{mutal} \subsection{ Mean dimension quantities for sub-additive potentials} In this subsection, we define the mean dimension quantities for sub-additive potentials. First, we recall the notion of \emph{local dimension} \cite{MT}. Throughout the paper we assume that simplicial complexes are finite (namely, they have only finitely many simplexes).
Let $P$ be a simplicial complex. For $a\in P$ we define the \emph{local dimension} $ {\rm dim}_{a} P$ as the maximum of ${\rm dim} ~\Delta $ where $ \Delta \subset P$ is a simplex of $P$ containing $a$. Let $(\mathcal{X}, d)$ be a compact metric space and $f: \mathcal{X} \rightarrow \mathcal{Y} $ a continuous map into some topological space $\mathcal{Y}$. For $\epsilon >0$ we call the map $f$ an $\epsilon$-embedding if ${\rm diam} f^{-1} y < \epsilon$ for all $y\in \mathcal{Y}$. Let $\varphi: \mathcal{X} \rightarrow \mathbb{R}$ be a continuous function. We define the \emph{$\epsilon$-width dimension with potential} by \begin{align*} {\rm Widim}_{\epsilon}(\mathcal{X}, d, \varphi)=\inf\{
\max\limits_{x\in \mathcal{X}}({\rm dim}_{f(x)} P + \varphi(x))| ~&P ~\text {is a simplicial complex and }\\& f: \mathcal{X} \rightarrow P ~\text{is an } \epsilon\text{-embedding} \}. \end{align*} Let $T: \mathcal{X} \rightarrow \mathcal{X}$ be a homeomorphism. For $N>0$ we define a metric $d_{N}$ by $$ d_{N}(x,y)= \max\limits_{0\leq n < N } d(T^{n}x, T^{n}y)~~(x, y \in \mathcal{X}).$$ We define the \emph{mean topological dimension for sub-additive potentials} by \begin{align}\label{mdim} {\rm mdim} (\mathcal{X}, T, \mathcal{F})= \lim\limits_{\epsilon \rightarrow 0} \left( \lim\limits_{N\rightarrow \infty} \dfrac{{\rm Widim}_{\epsilon}(\mathcal{X}, d_{N}, \varphi_{N})}{N}\right) . \end{align} The limits exist because the quantity ${\rm Widim}_{\epsilon}(\mathcal{X},d_{N}, \varphi_{N}) $ is subadditive in $N$ and monotone in $\epsilon$. The value of ${\rm mdim} (\mathcal{X}, T, \mathcal{F})$ is independent of the choice of $d$; namely, it is a topological invariant of $(\mathcal{X},T)$, so we drop $d$ from the notation. When every $\varphi_{N}\equiv 0$, the definition (\ref{mdim}) specializes to the standard mean topological dimension: ${\rm mdim}(\mathcal{X}, T, 0)= {\rm mdim} (\mathcal{X}, T).$
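For example (a standard fact from \cite{LWE}, recalled here for illustration), for the cube $[0,1]^{n}$ with the $\ell^{\infty}$-metric and $\varphi=0$ one has \begin{align*} {\rm Widim}_{\epsilon}\left( [0,1]^{n}, \ell^{\infty}, 0\right) = n \quad \text{for every } 0<\epsilon<1: \end{align*} the identity map is an $\epsilon$-embedding into an $n$-dimensional simplicial complex (after triangulating the cube), while no $\epsilon$-embedding into a complex of dimension smaller than $n$ exists by a Lebesgue covering argument.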
The metric mean dimension for sub-additive potentials is defined as follows. Let $(\mathcal{X}, d)$ be a compact metric space with a continuous function $\varphi: \mathcal{X} \rightarrow \mathbb{R}$. For $\epsilon> 0$, we set \begin{align*} \#(\mathcal{X}, d, \varphi, \epsilon)=\inf \{ \sum_{i=1}^{n} (1/\epsilon)^{\sup_{U_{i}}\varphi}\mid ~ \mathcal{X}= U_{1} \cup \cdots \cup U_{n} ~&\text{ is an open cover with } \\&{\rm diam}~U_{i} < \epsilon ~\text{for all} ~1\leq i \leq n \}. \end{align*} Given a homeomorphism $T: \mathcal{X} \rightarrow \mathcal{X}$ and a sub-additive potential $\mathcal{F}=\left\lbrace \varphi_{n} \right\rbrace_{n=1}^{\infty}$, we set $$ P(\mathcal{X}, T, d, \mathcal{F}, \epsilon)= \lim\limits_{N\rightarrow \infty} \dfrac{\log \# (\mathcal{X}, d_{N}, \varphi_{N}, \epsilon)}{N}.$$ This limit exists because $\log \#(\mathcal{X} , d_{N}, \varphi_{N}, \epsilon)$ is subadditive in $N$.
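For intuition, the covering quantity $\#(\mathcal{X}, d, \varphi, \epsilon)$ can be evaluated by brute force on a small finite metric space. The following Python sketch is our own illustration (the function name \texttt{sharp} and its parameters are not from the paper); it enumerates all covers by sets of diameter less than $\epsilon$:

```python
import itertools

def sharp(points, dist, phi, eps):
    """Brute-force #(X, d, phi, eps): the infimum over covers of X by sets of
    diameter < eps of sum_i (1/eps)^(sup over U_i of phi)."""
    n = len(points)
    # all nonempty index sets whose diameter (w.r.t. dist) is < eps
    small = [sub for r in range(1, n + 1)
             for sub in itertools.combinations(range(n), r)
             if all(dist(points[i], points[j]) < eps for i in sub for j in sub)]
    best = float('inf')
    # enumerate all families of admissible sets and keep those covering X
    for k in range(1, len(small) + 1):
        for cover in itertools.combinations(small, k):
            if set().union(*map(set, cover)) == set(range(n)):
                cost = sum((1 / eps) ** max(phi(points[i]) for i in sub)
                           for sub in cover)
                best = min(best, cost)
    return best

# three points on a line, eps = 1.5: only the pairs {0,1}, {1,2} and singletons qualify
pts = [0.0, 1.0, 2.0]
d = lambda x, y: abs(x - y)
print(sharp(pts, d, lambda x: 0.0, 1.5))  # with phi = 0 this is the covering number: 2.0
```

With $\varphi=0$ every admissible set costs $(1/\epsilon)^{0}=1$, so the quantity reduces to the usual covering number, as the example shows.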
We define the \emph{upper and lower metric mean dimension with sub-additive potentials} by $$\overline{{\rm mdim_{M}}}(\mathcal{X}, T, d, \mathcal{F})= \limsup\limits_{\epsilon \rightarrow 0} \dfrac{P(\mathcal{X},T,d, \mathcal{F}, \epsilon)}{\log (1/ \epsilon)},$$ $$\underline{{\rm mdim_{M}}}(\mathcal{X}, T, d, \mathcal{F})= \liminf\limits_{\epsilon \rightarrow 0} \dfrac{P(\mathcal{X},T,d, \mathcal{F}, \epsilon)}{\log (1/ \epsilon)}.$$ When the upper and lower limits coincide, we denote the common value by ${\rm{mdim}}_{M}(\mathcal{X},T,d, \mathcal{F}).$
Next, we define the mean Hausdorff dimension with sub-additive potentials. Let $(\mathcal{X}, d)$ be a compact metric space with a continuous function $\varphi: \mathcal{X} \rightarrow \mathbb{R}$. For $\epsilon>0$ and $s\geq \max\limits_{\mathcal{X}}\varphi$, we set \begin{align*}
H_{\epsilon}^{s}(\mathcal{X},d,\varphi)=\inf\left\lbrace \sum_{i=1}^{\infty}({\rm diam} E_{i})^{s-\sup_{E_i}\varphi}| \mathcal{X}=\bigcup\limits_{ i=1}^{\infty}E_{i}~ \text{with} ~{\rm diam} E_{i}<\epsilon ~\text{for all} ~ i \geq 1 \right\rbrace. \end{align*} Here we have used the convention that $0^{0}=1$ and $({\rm diam}\, \emptyset)^{s}=0$ for all $s\geq 0$. Note that this convention implies $H_{\epsilon}^{\max_{ \mathcal{X}}\varphi}(\mathcal{X}, d , \varphi)\geq 1$. We define ${\rm dim}_{H}(\mathcal{X}, d,\varphi,\epsilon)$ as the supremum of $s\geq \max_{ \mathcal{X}}\varphi$ satisfying $H_{\epsilon}^{s}(\mathcal{X}, d, \varphi)\geq 1.$ Given a homeomorphism $T:\mathcal{X}\rightarrow \mathcal{X}$, we define the \emph{mean Hausdorff dimension for sub-additive potentials} by \begin{align*} {\rm mdim_{H}}(\mathcal{X}, T, d, \mathcal{F})=\lim\limits_{ \epsilon \rightarrow 0}\left( \limsup\limits_{N\rightarrow \infty} \dfrac{{\rm dim}_{H}(\mathcal{X}, d_{N},\varphi_{N},\epsilon)}{N}\right) . \end{align*}
We can also define the \emph{lower mean Hausdorff dimension for sub-additive potentials} $\underline{{\rm mdim}}_{H}(\mathcal{X}, T, d, \mathcal{F})$ by replacing $\limsup_{N}$ with $\liminf_{N}$ in this definition. But we do not need this concept in the paper. \subsection{Mutual information} In this subsection, we recall some basic properties of mutual information. We omit most of the proofs, which can be found in \cite{LT18, LT}. Throughout this subsection we fix a probability space $(\Omega, \mathbb{P})$ and assume that all random variables are defined on it. Let $\mathcal{X}$ and $\mathcal{Y}$ be measurable spaces, and let $X$ and $Y$ be random variables taking values in $\mathcal{X}$ and $\mathcal{Y}$ respectively. We define their {\bf mutual information} $I(X;Y)$, which estimates the amount of information shared by $X$ and $Y$.
{\bf Case 1:} Suppose $\mathcal{X}$ and $\mathcal{Y}$ are finite sets. Then we define \begin{align*}
I(X; Y)= H(X)+ H(Y)-H(X,Y)=H(X)- H(X| Y). \end{align*} More explicitly $$ I(X; Y)=\sum\limits_{x\in \mathcal{X}, y\in \mathcal{Y}} \mathbb{P}(X=x, Y=y)\log\dfrac{\mathbb{P}(X=x, Y=y)}{\mathbb{P}(X=x)\mathbb{P}(Y=y)}.$$ Here we use the convention that $0\log (0/a)=0$ for all $a\geq 0$.
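For finite $\mathcal{X}$ and $\mathcal{Y}$ the formula above is directly computable. The following Python sketch is our own illustration (using natural logarithms, consistent with the paper's unspecified base); it evaluates $I(X;Y)$ from a joint distribution table:

```python
import math

def mutual_information(joint):
    """I(X;Y) in nats for a joint distribution given as {(x, y): P(X=x, Y=y)}."""
    px, py = {}, {}
    for (x, y), p in joint.items():
        px[x] = px.get(x, 0.0) + p
        py[y] = py.get(y, 0.0) + p
    # convention: terms with P(X=x, Y=y) = 0 contribute 0
    return sum(p * math.log(p / (px[x] * py[y]))
               for (x, y), p in joint.items() if p > 0)

# perfectly correlated fair bits: I(X;Y) = H(X) = log 2
print(mutual_information({(0, 0): 0.5, (1, 1): 0.5}))        # log 2 ~ 0.693
# independent fair bits: I(X;Y) = 0
print(mutual_information({(0, 0): 0.25, (0, 1): 0.25,
                          (1, 0): 0.25, (1, 1): 0.25}))      # 0.0
```

The two extreme cases match the identity $I(X;Y)=H(X)+H(Y)-H(X,Y)$: full correlation gives $H(X)$, independence gives $0$.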
{\bf Case 2:} In general, take measurable maps $f: \mathcal{X} \rightarrow A$ and $g: \mathcal{Y} \rightarrow B$ into finite sets $A$ and $B$. Then we can consider $I(f\circ X; g\circ Y)$ defined by Case 1. We define $ I(X; Y)$ as the supremum of $I(f\circ X; g\circ Y)$ over all finite-range measurable maps $f$ and $g$ defined on $\mathcal{X}$ and $\mathcal{Y}$. This definition is compatible with Case 1 when $\mathcal{X}$ and $\mathcal{Y}$ are finite sets. \begin{lem}[Data-processing inequality]\label{dp}
Let $X$ and $Y$ be random variables taking values in measurable spaces $ \mathcal{X}$ and $\mathcal{Y}$ respectively. If $f:\mathcal{Y} \rightarrow \mathcal{Z} $ is a measurable map then $I(X; f(Y)) \leq I(X; Y)$. \end{lem}
\begin{rem}\label{key2}
Lemma \ref{dp} implies that, in the definition of the rate distortion function $R_{\mu}(\epsilon)$, we can assume that the random variable $Y$ there takes only finitely many values, namely that its distribution is supported on a finite set. \end{rem}
\begin{lem}\label{le1}
Let $\mathcal{X}$ and $\mathcal{Y}$ be finite sets and let $(X_{n}, Y_{n})$ be a sequence of random variables taking values in $\mathcal{X} \times \mathcal{Y}$. If $(X_{n}, Y_{n})$ converges to some $(X, Y)$ in law, then $I(X_{n}; Y_{n})$ converges to $I(X; Y)$. \end{lem}
\begin{lem}[Subadditivity of mutual information]\label{lems} Let $X, Y, Z$ be random variables taking values in finite sets $\mathcal{X}, \mathcal{Y}, \mathcal{Z}$ respectively. Suppose $X$ and $Y$ are conditionally independent given $Z$. Namely for every $z \in \mathcal{Z}$ with $\mathbb{P}(Z=z)\neq 0$
$$ \mathbb{P}(X=x, Y=y| Z=z)= \mathbb{P}(X=x| Z=z) \mathbb{P}(Y=y| Z=z) \quad \text{for all } x\in \mathcal{X},\, y\in \mathcal{Y}.$$
Then $I(X, Y ; Z)\leq I(X; Z)+ I(Y; Z).$ \end{lem}
Let $X$ and $Y$ be random variables taking values in finite sets $\mathcal{X}$ and $\mathcal{Y}$. We set $\mu(x)= \mathbb{P}(X=x)$ and $\nu(y | x)= \mathbb{P}(Y=y | X=x)$, where the latter is defined only for $x\in \mathcal{X}$ with $\mathbb{P}(X=x)\neq 0$. The mutual information $I(X ; Y)$ is determined by the distribution of $(X, Y)$, namely $\mu(x)\nu(y| x)$. So we sometimes write $I(X; Y)= I(\mu, \nu).$
\begin{lem}[Concavity and convexity of mutual information]\label{lemc} In this notation, $I(\mu, \nu)$ is a concave function of $\mu(x)$ and a convex function of $\nu(y| x)$. Namely for $ 0\leq t \leq 1$
$$I((1-t)\mu_{1}+ t \mu_{2}, \nu)\geq (1-t)I(\mu_{1}, \nu)+ t I(\mu_{2}, \nu),$$
$$I(\mu, (1-t)\nu_{1}+ t \nu_{2}) \leq (1-t)I(\mu, \nu_{1}) + t I(\mu, \nu_{2}).$$ \end{lem} \begin{lem}[Superadditivity of mutual information]
Let $X, Y, Z$ be measurable maps from $\Omega$ to $\mathcal{X}, \mathcal{Y}, \mathcal{Z}$ respectively. Suppose $X$ and $Z$ are independent. Then
$$ I(Y; X,Z)\geq I(Y; X)+ I(Y; Z).$$ \end{lem} The following lemma is a key to connecting geometric measure theory with rate distortion theory \cite{KD94, LT}. \begin{lem}\label{ml2}
Let $\epsilon$ and $\delta$ be positive numbers with $2\epsilon \log (1/\epsilon)\leq \delta$. Let $0\leq \tau \leq \min(\epsilon/3, \delta/ 2)$ and $s\geq 0$. Let $(\mathcal{X}, d)$ be a compact metric space with a Borel probability measure $\mu$ satisfying
\begin{align}
\mu(E)\leq (\tau + {\rm diam}E)^{s}, ~~~\forall E\subset \mathcal{X} ~\text{with}~{
\rm diam}E<\delta.
\end{align}
Let $X$ and $Y$ be random variables taking values in $\mathcal{X}$ with ${\rm Law}(X)=\mu$ and $\mathbb{E}\, d(X,Y)<\epsilon.$ Then $$ I(X; Y)\geq s \log (1/\epsilon)- T(s+1).$$
Here $T$ is a universal positive constant independent of $\epsilon$, $\delta$, $\tau$, $s$, $(\mathcal{X}, d)$ and $\mu$. \end{lem} \subsection{Rate distortion function} In this subsection, we briefly review rate distortion theory. Its primary object is the data compression of continuous random variables and their processes. Continuous random variables always have infinite entropy, so it is impossible to describe them perfectly with only finitely many bits. Instead, rate distortion theory studies lossy data compression methods achieving certain distortion constraints. For a couple $(X, Y)$ of random variables we denote its mutual information by $I(X;Y)$. Let $(\mathcal{X}, T)$ be a dynamical system with a distance $d$ on $\mathcal{X}$. Take an invariant probability measure $\mu\in \mathcal{M}(\mathcal{X}, T)$. For a positive number $\epsilon$ we define the rate distortion function $R_{\mu}(\epsilon)$ as the infimum of \begin{align}\label{mu} \dfrac{I(X;Y)}{n}, \end{align} where $n$ runs over all natural numbers, and $X$ and $Y=(Y_{0}, \cdots, Y_{n-1})$ are random variables defined on some probability space $(\Omega, \mathbb{P})$ such that \begin{itemize}
\item $X$ takes values in $\mathcal{X}$ and its law is given by $\mu$.
\item Each $Y_{k}$ takes values in $\mathcal{X}$ and $Y$ approximates the process $(X, TX, \cdots, T^{n-1}X)$ in the sense that
\begin{align}\label{co1}
\mathbb{E}\left( \dfrac{1}{n}\sum\limits_{k=0}^{n-1}d(T^{k}X, Y_{k})\right) < \epsilon.
\end{align}
\end{itemize} Here $\mathbb{E}$ is the expectation with respect to the probability measure $\mathbb{P}$. Note that $R_{\mu}(\epsilon)$ depends on the distance $d$ although it is not explicitly written in the notation.
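In the i.i.d.\ analogue of this definition (a finite-alphabet memoryless source, rather than the dynamical setting of the paper), points on the rate distortion curve can be computed with the classical Blahut--Arimoto algorithm. The Python sketch below is a standard textbook illustration, not part of the paper; the function name and parameters are ours:

```python
import math

def blahut_arimoto(p, dist, beta, iters=200):
    """Blahut-Arimoto iteration for the rate-distortion function of a finite
    i.i.d. source with distribution p, distortion matrix dist, and slope
    parameter beta > 0.  Returns one point (D, R) on the R(D) curve (in nats)."""
    n = len(p)
    q = [1.0 / n] * n  # initial output distribution
    for _ in range(iters):
        # conditional Q(y|x) proportional to q(y) * exp(-beta * d(x, y))
        Q = [[q[y] * math.exp(-beta * dist[x][y]) for y in range(n)]
             for x in range(n)]
        for x in range(n):
            z = sum(Q[x])
            Q[x] = [v / z for v in Q[x]]
        # q becomes the output marginal of Q
        q = [sum(p[x] * Q[x][y] for x in range(n)) for y in range(n)]
    D = sum(p[x] * Q[x][y] * dist[x][y] for x in range(n) for y in range(n))
    R = sum(p[x] * Q[x][y] * math.log(Q[x][y] / q[y])
            for x in range(n) for y in range(n) if Q[x][y] > 0)
    return D, R

# Bernoulli(1/2) source with Hamming distortion: R(D) = log 2 - H_b(D)
D, R = blahut_arimoto([0.5, 0.5], [[0, 1], [1, 0]], beta=math.log(9))
print(D, R)  # D = 0.1, R = log 2 - H_b(0.1) ~ 0.3681
```

For this symmetric source the algorithm reproduces the closed-form curve $R(D)=\log 2 - H_b(D)$, which is a convenient correctness check.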
We define the \emph{upper and lower rate distortion dimension} by \begin{align*} &\overline{\rm rdim}(\mathcal{X}, T, d, \mu)=\limsup\limits_{\epsilon\rightarrow 0} \dfrac{R_{\mu}(\epsilon)}{\log (1/ \epsilon)},\\ & \underline{\rm rdim}(\mathcal{X}, T, d, \mu)=\liminf\limits_{\epsilon\rightarrow 0} \dfrac{R_{\mu}(\epsilon)}{\log (1/ \epsilon)}. \end{align*} When the upper and lower limits coincide, we denote their common value by ${\rm rdim}(\mathcal{X}, T, d, \mu)$.
\section{Mean Hausdorff dimension with sub-additive potentials bounds mean dimension with sub-additive potentials}\label{a} In this section, we prove Theorem \ref{bound} and Proposition \ref{pro}. The main issue is to prove that the mean Hausdorff dimension with sub-additive potentials bounds the mean dimension with sub-additive potentials. \subsection{Proof of Proposition \ref{pro}} \begin{lem}\cite{WP}\label{wp}
Let $a_{1}, \cdots, a_{n}$ be real numbers and ${\bf p}=(p_{1}, \cdots, p_{n})$ a probability vector. For $\epsilon >0$
$$ \sum\limits_{i=1}^{n} (-p_{i} \log p_{i}+ p_{i} a_{i} \log(1/ \epsilon)) \leq \log (\sum\limits_{i=1}^{n}(1/ \epsilon)^{a_{i}})$$ and equality holds iff
$$p_{i}= \dfrac{(1/\epsilon)^{a_{i}}}{\sum_{j=1}^{n} (1/\epsilon)^{a_{j}}}. $$ \end{lem} \begin{prop}\label{pro}
Let $(\mathcal{X}, T)$ be a dynamical system with a metric ${ d}$ and an invariant probability measure $\mu$. Let $\mathcal{F}=\left\lbrace \varphi_{n} \right\rbrace_{n=1}^{\infty} $ be a sub-additive potential such that $\mathcal{F}_{*}(\mu)\neq -\infty$. Then
\begin{align*}
&\overline{\rm{rdim}}(\mathcal{X},T, d, \mu)+\mathcal{F}_{*}(\mu)\leq \overline{\rm{mdim}}_{M}(\mathcal{X},T, d, \mathcal{F}),\\&
\underline{\rm{rdim}}(\mathcal{X},T, d, \mu)+\mathcal{F}_{*}(\mu)\leq \underline{\rm{mdim}}_{M}(\mathcal{X},T, d, \mathcal{F}).
\end{align*} \end{prop} \begin{proof}
Let $X$ be a random variable taking values in $\mathcal{X}$ and obeying $\mu$. Let $N>0$ and let $\mathcal{X}=U_{1}\cup\cdots\cup U_{n}$ be an open cover with ${\rm diam}(U_{i}, d_{N})< \epsilon$ for all $i$. Pick $x_{i} \in U_{i}$. We define a random variable $Y=(Y_{0}, \cdots, Y_{N-1})$ by
$$Y=(x_{i}, Tx_{i}, \cdots, T^{N-1}x_{i}) ~~~\text{if} ~~X\in U_{i} \setminus (U_{1}\cup\cdots\cup U_{i-1}).$$
Obviously
$$ \dfrac{1}{N}\sum\limits_{ k=0}^{N-1}\mathbb{E} d(T^{k}X, Y_{k})<\epsilon.$$
Set $p_{i}=\mu(U_{i}\setminus (U_{1}\cup\cdots \cup U_{i-1})).$ Then
$$ I(X;Y)\leq H(Y) \leq -\sum\limits_{ i=1}^{n} p_{i}\log p_{i}.$$
Set $a_{i}=\sup_{U_{i}} \varphi_{N}.$ Since $Y$ satisfies the distortion condition above and $\int_{\mathcal{X}}\varphi_{N}\, d\mu\leq \sum_{i=1}^{n}p_{i}a_{i}$, it follows that
\begin{align*}
R(d, \mu, \epsilon)+\left(\dfrac{1}{N}\int_{\mathcal{X}} \varphi_{N} d \mu\right) \log (1/\epsilon)&\leq \dfrac{I(X;Y)}{N}+ \left(\dfrac{1}{N}\int_{\mathcal{X}} \varphi_{N} d \mu\right)\log (1/\epsilon)\\&\leq \dfrac{1}{N}\sum\limits_{ i=1}^{n}(-p_{i}\log p_{i}+p_{i}a_{i}\log(1/\epsilon))\\&\leq\dfrac{1}{N}\log\left(\sum\limits_{ i=1}^{n}(1/\epsilon)^{a_{i}}\right) ~~~\text{by Lemma \ref{wp}.}
\end{align*}
Hence
$$ R(d,\mu, \epsilon)+\left(\dfrac{1}{N}\int_{\mathcal{X}} \varphi_{N} d \mu\right) \log (1/\epsilon)\leq \dfrac{\log\#(\mathcal{X}, d_{N}, \varphi_{N},\epsilon)}{N}.$$
Let $N\rightarrow \infty$. Then
$$ R(d, \mu, \epsilon)+\mathcal{F}_{*}(\mu)\log(1/\epsilon)\leq P(\mathcal{X}, T, d, \mathcal{F}, \epsilon).$$
Divide this by $\log(1/\epsilon)$ and take the limit of $\epsilon\rightarrow 0$. \end{proof}
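The inequality of Lemma \ref{wp} and its equality case can be checked numerically. Below is a small Python sanity check of our own (it is an illustration, not part of the proof), using natural logarithms throughout:

```python
import math

def lhs(p, a, eps):
    """Left-hand side of the lemma: sum_i (-p_i log p_i + p_i a_i log(1/eps))."""
    return sum((-pi * math.log(pi) if pi > 0 else 0.0) + pi * ai * math.log(1 / eps)
               for pi, ai in zip(p, a))

def rhs(a, eps):
    """Right-hand side of the lemma: log sum_i (1/eps)^(a_i)."""
    return math.log(sum((1 / eps) ** ai for ai in a))

a, eps = [0.3, 1.2, -0.5, 2.0], 0.1
# an arbitrary probability vector: the inequality holds
p = [0.1, 0.2, 0.3, 0.4]
assert lhs(p, a, eps) <= rhs(a, eps) + 1e-12
# the maximizing vector p_i = (1/eps)^(a_i) / sum_j (1/eps)^(a_j): equality
z = sum((1 / eps) ** ai for ai in a)
pstar = [(1 / eps) ** ai / z for ai in a]
assert abs(lhs(pstar, a, eps) - rhs(a, eps)) < 1e-9
print("Lemma verified on this example")
```

Equality at $p^{*}$ is immediate algebraically: $-\log p^{*}_{i}=\log z - a_{i}\log(1/\epsilon)$, so the left-hand side collapses to $\log z$.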
\subsection{Proof of Theorem \ref{bound}} In order to prove Theorem \ref{bound}, we need some additional facts about the quantity ${\rm Widim}_{\epsilon}(\mathcal{X}, d, \varphi)$. Let $P$ be a simplicial complex and $a\in P$. Recall the {\bf small local dimension} (\cite{LT}): $${\rm dim}_{a}^{'}P =\min \left\lbrace {\rm dim} \Delta : \Delta\subset P ~\text{is a simplex containing}~a \right\rbrace .$$ The local dimension ${\rm dim}_{a}P$ is a topological quantity. However, the small local dimension ${\rm dim}_{a}^{'}P$ is a combinatorial quantity: it depends on the combinatorial structure of $P$. In \cite{LT}, the authors introduced another $\epsilon$-width dimension with potential, ${\rm Widim}_{\epsilon}^{'}(\mathcal{X}, d, \varphi)$, defined via the small local dimension, and showed the following result. \begin{lem}\label{ieq1}\cite{LT}
\begin{align*}
{\rm Widim}_{\epsilon}^{'}(\mathcal{X}, d, \varphi)\leq {\rm Widim}_{\epsilon}(\mathcal{X}, d, \varphi)\leq {\rm Widim}_{\epsilon}^{'}(\mathcal{X}, d, \varphi)+{\rm var}_{\epsilon}(\varphi,d)
\end{align*}
where ${\rm var}_{\epsilon}(\varphi,d)=\sup\left\lbrace \left| \varphi(x)-\varphi(y)\right| : d(x,y)\leq\epsilon \right\rbrace .$ \end{lem} If we impose a bounded distortion assumption on $\mathcal{F}$, then we can also show the equivalence of these two quantities.
\begin{prop}\label{proa}
Assume that $\mathcal{F}=\left\lbrace \varphi_{n} \right\rbrace_{n=1}^{\infty} $ satisfies bounded distortion.
Then we have
$$ {\rm{mdim}}(\mathcal{X},T, \mathcal{F})=\lim\limits_{ \epsilon \rightarrow 0}\left( \lim\limits_{N\rightarrow\infty} \dfrac{{\rm Widim}_{\epsilon}^{'}(\mathcal{X}, d_{N}, \varphi_{N})}{N}\right) .$$ Here ${{ \rm Widim}_{\epsilon}^{'}(\mathcal{X}, d_{N}, \varphi_{N})}$ is subadditive in $N$ and monotone in $\epsilon$. \end{prop} \begin{proof}
Recall that we defined
$${\rm{mdim}}(\mathcal{X},T, \mathcal{F})=\lim\limits_{ \epsilon \rightarrow 0}\left( \lim\limits_{N\rightarrow\infty} \dfrac{{\rm Widim}_{\epsilon}(\mathcal{X}, d_{N}, \varphi_{N})}{N}\right) . $$
From Lemma \ref{ieq1} we have
$${\rm Widim}_{\epsilon}^{'}(\mathcal{X}, d_{N}, \varphi_{N})\leq {\rm Widim}_{\epsilon}(\mathcal{X}, d_{N}, \varphi_{N})\leq {\rm Widim}_{\epsilon}^{'}(\mathcal{X}, d_{N}, \varphi_{N})+{\rm var}_{\epsilon}(\varphi_{N},d_{N}). $$
By Lemma \ref{ieq1} and the bounded distortion assumption, we get the result. \end{proof} \begin{theorem}\label{bound}
Let $(\mathcal{X},T)$ be a dynamical system with a metric $d$, then
$${\rm mdim}_{H}(\mathcal{X},T, d, \mathcal{F})\leq \underline{\rm{mdim}}_{M}(\mathcal{X},T, d, \mathcal{F}).$$ If $\mathcal{F}$ satisfies bounded distortion and there exists $K>0$ such that $|\varphi_{n+1}-\varphi_{n}|<K$ for every $n$, then
$$ {\rm mdim}(\mathcal{X},T, \mathcal{F})\leq {\rm mdim}_{H}(\mathcal{X},T, d, \mathcal{F}).$$ \end{theorem} \begin{proof}
We first show that ${\rm mdim}_{H}(\mathcal{X},T, d, \mathcal{F})\leq \underline{\rm{mdim}}_{M}(\mathcal{X},T, d, \mathcal{F}).$ Let $0<\epsilon<1$ and $N>0$. Let $\mathcal{X}=U_{1}\cup\cdots\cup U_{n}$ be an open cover with ${\rm diam}(U_{i}, d_{N})<\epsilon$ for all $i$. For $s\geq \max_{\mathcal{X}} \varphi_{N}$
\begin{align*}
H_{\epsilon}^{s}(\mathcal{X}, d_{N}, \varphi_{N})&\leq \sum\limits_{i=1}^{n}({\rm diam}(U_{i}, d_{N}))^{s-\sup_{U_{i}} \varphi_{N}}\\&\leq\sum_{i=1}^{n}\epsilon^{s-\sup_{U_{i}} \varphi_{N}}=\epsilon^{s}\cdot \sum\limits_{ i=1}^{n}(1/\epsilon)^{\sup_{U_{i}} \varphi_{N}}.
\end{align*}
Hence
$$ H_{\epsilon}^{s}(\mathcal{X}, d_{N}, \varphi_{N})\leq \epsilon^{s} \cdot \#(\mathcal{X}, d_{N}, \varphi_{N}, \epsilon).$$
This implies
$${\rm dim}_{H}(\mathcal{X}, d_{N}, \varphi_{N}, \epsilon)\leq \dfrac{\log \#(\mathcal{X},d_{N}, \varphi_{N}, \epsilon)}{\log(1/\epsilon)}.$$
Divide this by $N$ and take the limit superior as $N\rightarrow \infty$:
$$\limsup\limits_{N\rightarrow \infty} \dfrac{{\rm dim}_{H}(\mathcal{X}, { d_{N}},\varphi_{N},\epsilon)}{N}\leq \dfrac{P(\mathcal{X}, T, d, \mathcal{F}, \epsilon)}{\log (1/\epsilon)}.$$
Letting $\epsilon \rightarrow 0$, we get ${{\rm mdim}_{H}}(\mathcal{X}, T, d, \mathcal{F})\leq \underline{{\rm mdim_{M}}}(\mathcal{X}, T, d, \mathcal{F}).$ \end{proof} Next we show that the mean Hausdorff dimension with sub-additive potentials bounds the mean dimension with sub-additive potentials. We need some lemmas. Let $(\mathcal{X}, d)$ be a compact metric space. For $s\geq 0$, we define \begin{align*}
H_{\infty}^{s}(\mathcal{X},d)= \inf\left\lbrace \sum\limits_{ i=1}^{\infty} ({\rm diam} E_{i})^{s}| \mathcal{X}=\bigcup\limits_{ i=1}^{\infty} E_{i} \right\rbrace . \end{align*}
We denote the standard Lebesgue measure on $\mathbb{R}^{N}$ by $\nu_{N}$. We set $\left\| x\right\| =\max\limits_{ 1\leq i \leq N} \left| x_{i}\right| $ for $x\in \mathbb{R}^{N}$. For $A\subset \left\lbrace 1,2,\cdots,N \right\rbrace $ we define $\pi_{A}: [0, 1]^{N} \rightarrow [0,1]^{A}$ as the projection to the $A$-coordinates. The next lemma was given in \cite{LT}. \begin{lem}
Let $K\subset [0,1]^{N}$ be a closed subset and let $0\leq n \leq N$. Then:
\begin{itemize}
\item $\nu_{N}(K) \leq 2^{N} H_{\infty}^{N}(K, \left\|\cdot\right\| ).$
\item $\nu_{N}(\bigcup\limits_{ |A|\geq n} \pi_{A}^{-1}(\pi_{A}K)) \leq 4^{N} H_{\infty}^{n}(K, \left\|\cdot \right\| ).$
\end{itemize} \end{lem} The following lemma is the key ingredient of the proof of Theorem \ref{bound}. \begin{lem}\label{lema}\cite{MT}
Let $(\mathcal{X}, d)$ be a compact metric space with a continuous function $\varphi: \mathcal{X} \rightarrow \mathbb{R}$. Let $\epsilon>0$, $L>0$ and $s\geq \max_{ \mathcal{X}} \varphi$ be real numbers. Suppose there exists a Lipschitz map $f: \mathcal{X}\rightarrow [0,1]^{N}$ such that
\begin{itemize}
\item$ \left\| f(x)-f(y)\right\| \leq L \cdot d(x,y), $
\item $\left\| f(x)-f(y)\right\|=1$ if $d(x,y)\geq \epsilon.$
\end{itemize}
Moreover, suppose
$$ 4^{N}(L+1)^{1+s+\left\| \varphi\right\|_{\infty} } H_{1}^{s}(\mathcal{X}, d, \varphi)<1,$$
where $\left\| \varphi\right\|_{\infty}=\max_{ \mathcal{X}}\left| \varphi \right|. $ Then
$$ {\rm Widim}_{\epsilon}^{'}(\mathcal{X}, d, \varphi)\leq s+1.$$ \end{lem} \begin{proof}[Proof of Theorem \ref{bound}]
It is sufficient to show that ${\rm mdim}(\mathcal{X}, T, d, \mathcal{F})\leq {\rm mdim}_{H}(\mathcal{X}, T, d, \mathcal{F}).$ Given $\epsilon>0$, we take a Lipschitz map $f: \mathcal{X} \rightarrow [0,1]^{M}$ such that
$$ d(x,y)\geq \epsilon \Rightarrow \left\| f(x)-f(y)\right\| =1. $$
Let $L>0$ be a Lipschitz constant of $f$, i.e., $\left\| f(x)- f(y)\right\|\leq L\cdot d(x,y)$. For $N>0$ we define $f_{N}: \mathcal{X} \rightarrow [0,1]^{MN}$ by
$$f_{N}(x)=(f(x), f(Tx), \cdots, f(T^{N-1}x)).$$
Then
\begin{itemize}
\item $\left\| f_{N}(x)-f_{N}(y)\right\|\leq L\cdot d_{N}(x,y), $
\item $\left\|f_{N}(x)-f_{N}(y)\right\|=1 $ if $d_{N}(x,y)\geq \epsilon.$
\end{itemize}
Put $s > {\rm mdim}_{H}(\mathcal{X}, T, d, \mathcal{F})$. Let $\tau>0$ be arbitrary. Take $0<\delta<1$ such that
\begin{align}\label{c}
4^{M}\cdot (L+1)^{1+s+\tau+K+ \left\| \varphi_{1}\right\|_{\infty} }\cdot \delta^{\tau}<1.
\end{align}
Since ${\rm mdim}_{H}(\mathcal{X}, T, d, \mathcal{F})< s$, we can take $0< N_{1} < N_{2}< N_{3}<\cdots\rightarrow \infty$ satisfying
${\rm dim}_{H}(\mathcal{X}, d_{N_{i}}, \varphi_{N_{i}}, \delta)< sN_{i}$. Then $H_{\delta}^{sN_{i}}(\mathcal{X},d_{N_{i}}, \varphi_{N_{i}} )<1$ and hence
$$ H_{\delta}^{(s+\tau)N_{i}}(\mathcal{X}, d_{N_{i}}, \varphi_{N_{i}})\leq \delta^{\tau N_{i}}H_{\delta}^{sN_{i}}(\mathcal{X}, d_{N_{i}}, \varphi_{N_{i}})<\delta^{\tau N_{i}}.$$
By $(\ref{c})$, we obtain
\begin{align*}
4^{MN_{i}}(L+1)^{1+(s+\tau)N_{i}+\left\| \varphi_{N_{i}}\right\|_{\infty}}H_{1}^{(s+\tau)N_{i}}(\mathcal{X}, d_{N_{i}}, \varphi_{N_{i}})&< \left\lbrace 4^{M}\cdot (L+1)^{1+s+\tau+K+\left\| \varphi_{1} \right\|_{\infty}}\cdot \delta^{\tau} \right\rbrace^{N_{i}}\\&<1.
\end{align*}
According to Lemma \ref{lema}, we have
$$ {\rm Widim}^{'}_{\epsilon}(\mathcal{X}, d_{N_{i}}, \varphi_{N_{i}} )\leq (s+\tau)N_{i}+1.$$
Hence
$$\lim\limits_{N\rightarrow \infty}\dfrac{{\rm Widim}^{'}_{\epsilon}(\mathcal{X}, d_{N}, \varphi_{N} )}{N}\leq s+\tau. $$
Let $s\rightarrow {\rm mdim}_{H}(\mathcal{X}, T, d, \mathcal{F})$, $\tau \rightarrow 0$ and $ \epsilon\rightarrow 0$:
$$ \lim\limits_{\epsilon\rightarrow 0}\left( \lim\limits_{N\rightarrow \infty} \dfrac{{\rm Widim}^{'}_{\epsilon}(\mathcal{X}, d_{N}, \varphi_{N} )}{N}\right) \leq {\rm mdim}_{H}(\mathcal{X}, T, d, \mathcal{F}).$$
By Proposition \ref{proa}, this proves ${\rm mdim}(\mathcal{X}, T, d, \mathcal{F}) \leq {\rm mdim}_{H}(\mathcal{X},T, d, \mathcal{F}).$
\end{proof}
\begin{rem}
The above proof actually shows ${\rm mdim}(\mathcal{X}, T, d, \mathcal{F}) \leq \underline{{\rm mdim}}_{H}(\mathcal{X},T, d, \mathcal{F})$. It is worth pointing out that the bounded distortion assumption is used in the proof of Proposition \ref{proa}.
\end{rem}
\section{Proof of Theorem \ref{thmc}}\label{b} In this section, we give a proof of Theorem \ref{thmc}. It states that we can construct invariant probability measures capturing the dynamical complexity of $(\mathcal{X}, T, d, \mathcal{F})$. We first give some notation and lemmas which are needed in our proof of Theorem \ref{thmc}. \begin{defn}
A compact metric space $(\mathcal{X}, d)$ is said to have \emph{tame growth of covering numbers} if for every $\delta >0$ it holds that
$$ \lim\limits_{\epsilon \rightarrow 0} \epsilon^{\delta} \log \# (\mathcal{X}, d, \epsilon)=0.$$ \end{defn} The following result \cite{LT} shows that the tame growth of covering numbers is a fairly mild condition. \begin{lem}
Let $(\mathcal{X},d)$ be a compact metric space. There exists a metric $d'$ on $\mathcal{X}$ (compatible with the topology) such that $d'(x,y) \leq d(x,y)$ and that $(\mathcal{X}, d')$ has the tame growth of covering numbers. In particular every compact metrizable space admits a metric having the tame growth of covering numbers. \end{lem} Let $(\mathcal{X}, T)$ be a dynamical system with a metric $d$. For $N\geq 1$, we introduce the mean metric $\overline{d}_{N}$ on $\mathcal{X}$ as follows: $$ \overline{d}_{N}(x, y)=\dfrac{1}{N}\sum\limits_{n=0}^{N-1} d(T^{n}x, T^{n}y).$$ Let $\mathcal{F}=\left\lbrace \varphi_{n} \right\rbrace_{n=1}^{\infty}$ be a sub-additive potential. We define the \emph{$L^{1}$-mean Hausdorff dimension with sub-additive potentials} by $$ {\rm mdim}_{H,L^{1}}(\mathcal{X}, T, d, \mathcal{F})=\lim\limits_{\epsilon\rightarrow 0} \left( \limsup\limits_{N\rightarrow \infty} \dfrac{{\rm dim}_{H}(\mathcal{X}, \overline{d}_{N}, \varphi_{N}, \epsilon)}{N}\right) .$$ Since $\overline{d}_{N}\leq d_{N}$, we always have $$ {\rm mdim}_{H,L^{1}}(\mathcal{X}, T, d, \mathcal{F})\leq {\rm mdim}_{H}(\mathcal{X}, T, d, \mathcal{F}). $$ \begin{lem}\label{le2}
If $(\mathcal{X}, d)$ has the tame growth of covering numbers and there exists $K>0$ such that
$|\varphi_{n+1}(x)-\varphi_{n}(x)|\leq K$ for all $x\in \mathcal{X}$ and $n\in \mathbb{N}$, then
$$ {\rm mdim}_{H,L^{1}}(\mathcal{X}, T, d, \mathcal{F})= {\rm mdim}_{H}(\mathcal{X}, T, d, \mathcal{F}).$$ \end{lem} \begin{proof}
It is enough to prove ${\rm mdim}_{H}(\mathcal{X}, T, d, \mathcal{F})\leq {\rm mdim}_{H,L^{1}}(\mathcal{X}, T, d, \mathcal{F})$. We use the notation $[N]:=\left\lbrace 0,1,2, \cdots, N-1\right\rbrace $ and $d_{A}(x, y):=\max_{a\in A}d(T^{a}x, T^{a}y)$ for $A\subset [N]$.
Let $0<\delta<1/2$ and $s>{\rm mdim}_{H,L^{1}}(\mathcal{X}, T, d, \mathcal{F})$ be arbitrary. For each $\tau>0$ we choose an open cover $\mathcal{X}=W_{1}^{\tau}\cup\cdots\cup W_{M(\tau)}^{\tau}$ with ${\rm diam}(W_{i}^{\tau}, d)< \tau$ and $M(\tau)=\#(\mathcal{X}, d, \tau)$. From the tame growth condition, we can find $0<\epsilon_{0}<1$ such that
\begin{align}
M(\tau)^{\tau^{\delta}}<2~~(\forall ~0<\tau<\epsilon_{0}),
\end{align}
\begin{align}\label{ieq3}
2^{2+\delta+(1+2\delta)(s+K+\left\| \varphi_{1}\right\|_{\infty} )}\cdot \epsilon_{0}^{\delta(1-\delta)}<1.
\end{align}
Let $0<\epsilon<\epsilon_{0}$ be a sufficiently small number, and let $N$ be a sufficiently large natural number. Since ${\rm mdim}_{H,L^{1}}(\mathcal{X}, T, d, \mathcal{F})<s$, there exists a covering $\mathcal{X}=\bigcup\limits_{n=1}^{\infty}E_{n}$ with $\tau_{n}:={\rm diam}(E_{n}, \overline{d}_{N})<\epsilon$ satisfying \begin{align}\label{ieq2}
\sum\limits_{ n=1}^{\infty} \tau_{n}^{sN-\sup_{E_n} \varphi_{N}}<1, ~~(sN \geq \max\limits_{\mathcal{X}} \varphi_{N}).
\end{align}
Set $L_{n}=(1/\tau_{n})^{\delta}$ and pick a point $x_{n} \in E_{n}$ for each $n$. Then every $x\in E_{n}$ satisfies $\overline{d}_{N}(x, x_{n})<\tau_{n}$ and hence
$$ \left| \left\lbrace k\in[N] : d(T^{k}x, T^{k}x_{n})\geq L_{n}\tau_{n} \right\rbrace \right| \leq \dfrac{N}{L_{n}}. $$
So there exists $A\subset [N]$ (depending on $x\in E_{n}$) such that $\left| A \right| \leq N/L_{n} $ and $d_{[N]\setminus A}(x, x_{n})< L_{n}\tau_{n}.$ Thus
$$E_{n} \subset \bigcup\limits_{ A\subset [N], \left| A \right|\leq N/L_{n} } B_{L_{n}\tau_{n}}^{\circ}(x_{n}, d_{[N]\setminus A}),$$ where $B_{L_{n}\tau_{n}}^{\circ}(x_{n}, d_{[N]\setminus A})$ is the open ball of radius $L_{n} \tau_{n}$ around $x_{n}$ with respect to the metric $d_{[N]\setminus A}$.
Let $A=\left\lbrace a_{1}, \cdots, a_{r} \right\rbrace $. We consider a decomposition
$$B_{L_{n}\tau_{n}}^{\circ}(x_{n}, d_{[N]\setminus A})=\bigcup\limits_{1\leq i_{1}, \cdots, i_{r}\leq M(\tau_{n})} B_{L_{n}\tau_{n}}^{\circ}(x_{n}, d_{[N]\setminus A})\cap T^{-a_{1}} W_{i_{1}}^{\tau_{n}} \cap \cdots \cap T^{-a_{r}} W_{i_{r}}^{\tau_{n}}. $$
Then $\mathcal{X}$ is covered by the sets
\begin{align}\label{eq3}
E_{n}\cap B_{L_{n}\tau_{n}}^{\circ} (x_{n}, d_{[N] \setminus A}) \cap T^{-a_{1}} W_{i_{1}}^{\tau_{n}}\cap \cdots \cap T^{-a_{r}} W_{i_{r}}^{\tau_{n}},
\end{align}
where $n\geq 1, A=\left\lbrace a_{1}, \cdots, a_{r} \right\rbrace \subset [N] $ with $r\leq N/ L_{n}$ and $1\leq i_{1},\cdots, i_{r}\leq M(\tau_{n})$. The sets $(\ref{eq3})$ have diameter less than or equal to $2L_{n}\tau_{n}=2\tau_{n}^{1-\delta}<2\epsilon^{1-\delta}$ with respect to the metric $d_{N}$.
Set $m_{N}=\min\limits_{\mathcal{X}} \varphi_{N}$. We estimate the quantity
$$ H_{2\epsilon^{1-\delta}}^{sN+2\delta(sN- m_{N})+\delta N}(\mathcal{X}, d_{N}, \varphi_{N}).$$
This is bounded by
$$ \sum\limits_{n=1}^{\infty} 2^{N}\cdot M(\tau_{n})^{N/ L_{n}}\cdot (2\tau_{n}^{1-\delta})^{sN+2\delta(sN- m_{N})+\delta N-\sup_{E_n} \varphi_{N}}.$$
The factor $2^{N}$ comes from the choice of $A\subset [N]$. Since $\tau_{n} < \epsilon< \epsilon_{0}$,
\begin{align*}
(2 \tau_{n}^{1-\delta})^{s N+2\delta(sN- m_{N})+\delta N-\sup_{E_n} \varphi_{N}}&=(2 \tau_{n}^{1-\delta})^{sN+2\delta(sN-m_{N})-\sup_{E_n} \varphi_{N}}\cdot(2\tau_{n}^{1-\delta})^{\delta N}\\&\leq(2 \tau_{n}^{1-\delta})^{sN+2\delta(sN-m_{N})-\sup_{E_n} \varphi_{N}}\cdot (2^{\delta}\epsilon_{0}^{\delta(1-\delta)})^{N}.
\end{align*}
The term $(2 \tau_{n}^{1-\delta})^{sN+2\delta(sN-m_{N})-\sup_{E_n} \varphi_{N}}$ is equal to
$$\underbrace{2^{sN+2\delta(sN-m_{N})-\sup_{E_{n}} \varphi_{N}}}_{I}\cdot\underbrace{\tau_{n}^{2\delta(sN- m_{N})-\delta\left\lbrace sN+2\delta(sN- m_{N}) -\sup_{E_{n}}\varphi_{N}\right\rbrace}}_{II}\cdot\tau_{n}^{sN-\sup_{E_{n}}\varphi_{N}}.$$
The factor $(I)$ is bounded by
$$ 2^{sN+2\delta(sN+KN+\left\| \varphi_{1} \right\|_{\infty}N )+KN+\left\| \varphi_{1}\right\|_{\infty}N }=2^{(1+2\delta)(s+\left\| \varphi_{1} \right\|_{\infty}+K )N}.$$
The exponent of the factor $(II)$ is bounded from below (note $0<\tau_{n}<1$) by
$$ 2\delta (sN- m_{N})-\delta\left\lbrace sN+2\delta(sN- m_{N}) -m_{N}\right\rbrace=\delta(1-2\delta)(sN- m_{N})\geq 0.$$
Here we have used $sN\geq \max_{ \mathcal{X}}\varphi_{N}\geq m_{N}$. Hence the factor $(II)$ is less than or equal to 1. Summing up the above estimates, we get
$$ (2 \tau_{n}^{1-\delta})^{s N+2\delta(sN- m_{N})+\delta N-\sup_{E_n} \varphi_{N}}\leq 2^{(1+2\delta)(s+K+\left\| \varphi_{1} \right\|_{\infty} )N}\cdot(2^{\delta}\epsilon_{0}^{\delta(1-\delta)})^{N}\cdot\tau_{n}^{sN-\sup_{E_{n}} \varphi_{N}}.$$
Thus
\begin{align*}
&H_{2\epsilon^{1-\delta}}^{sN+2\delta(sN- m_{N})+\delta N}(\mathcal{X}, d_{N}, \varphi_{N}) \\&\leq \sum\limits_{n=1}^{\infty}\left\lbrace 2^{1+(1+2\delta)(s+K+\left\| \varphi_{1} \right\|_{\infty} )}\cdot M(\tau_{n})^{1/ L_{n}}\cdot (2^{\delta}\epsilon_{0}^{\delta(1-\delta)}) \right\rbrace^{N}\cdot\tau_{n}^{sN-\sup_{E_{n}} \varphi_{N}}
\\&\leq \sum\limits_{n=1}^{\infty}\left\lbrace 2^{2+\delta +(1+2\delta)(s+K+\left\| \varphi_{1} \right\|_{\infty} )}\cdot \epsilon_{0}^{\delta(1-\delta)} \right\rbrace^{N}\cdot\tau_{n}^{sN-\sup_{E_{n}} \varphi_{N}}
\\&\leq \sum\limits_{n=1}^{\infty}\tau_{n}^{sN-\sup_{E_{n}} \varphi_{N}}~~\text{(by (\ref{ieq3}))}\\&<1~~\text{(by (\ref{ieq2}))}.
\end{align*}
Therefore
\begin{align*}
{\rm dim}_{H}(\mathcal{X},d_{N}, \varphi_{N}, 2\epsilon^{1-\delta})&\leq sN+2\delta(sN- m_{N})+\delta N\\&\leq sN+2\delta(sN+KN+\left\| \varphi_{1} \right\|_{\infty}N )+\delta N.
\end{align*}
Divide this by $N$. Let $N\rightarrow \infty$ and $\epsilon\rightarrow 0$:
$${\rm mdim}_{H}(\mathcal{X}, T, d, \mathcal{F})\leq s+2\delta(s+K+\left\| \varphi_{1} \right\|_{\infty} )+\delta.$$
Let $\delta\rightarrow 0$ and $s\rightarrow {\rm mdim}_{H, L^{1}}(\mathcal{X}, T, d, \mathcal{F}):$
$$ {\rm mdim}_{H}(\mathcal{X}, T, d, \mathcal{F})\leq {\rm mdim}_{H, L^{1}}(\mathcal{X}, T, d, \mathcal{F}).$$ \end{proof} Let $(\mathcal{X}, d)$ be a compact metric space. For $\epsilon>0$ and $s\geq 0$ we set $H_{\epsilon}^{s}(\mathcal{X}, d)=H_{\epsilon}^{s}(\mathcal{X},d,0).$ Namely
$$ H_{\epsilon}^{s}(\mathcal{X},d)=\inf\left\lbrace \sum\limits_{i=1}^{\infty}({\rm diam}E_{i})^{s}| \mathcal{X}=\bigcup\limits_{i=1}^{\infty} E_{i}~ \text{with}~ {\rm diam} E_{i}< \epsilon~\forall ~i\geq 1\right\rbrace. $$ We define ${\rm dim}_{H}(\mathcal{X}, d, \epsilon)$ as the supremum of $s\geq 0$ satisfying $H_{\epsilon}^{s}(\mathcal{X}, d)\geq 1$. \begin{lem}\cite{LT}\label{lem3}
Let $0<c<1$. There exists $0<\delta_{0}<1$ depending only on $c$ and satisfying the following statement. For any compact metric space $(\mathcal{X}, d)$ and $
0<\delta<\delta_{0}(c)$ there exists a Borel probability measure $\nu$ on $\mathcal{X}$ such that
$$ \nu(E)\leq ({\rm diam}E)^{c\cdot {\rm dim}_{H}(\mathcal{X}, d, \delta)} ~~\text{for all} ~E\subset \mathcal{X}~\text{with} ~{\rm diam}E<\dfrac{\delta}{6}. $$ \end{lem} \begin{lem}\cite{Cao}\label{lem2}
Suppose $\left\lbrace \nu_{n} \right\rbrace_{n=1}^{\infty} $ is a sequence in $\mathcal{M}(\mathcal{X})$, where $\mathcal{M}(\mathcal{X})$ denotes the space of all Borel probability measures on $\mathcal{X}$ with the weak$^{*}$ topology. We form the new sequence $\left\lbrace \mu_{n}\right\rbrace_{n=1}^{\infty} $ by $\mu_{n}=\dfrac{1}{n}\sum_{i=0}^{n-1}\nu_{n}\circ T^{-i}.$ Assume that $\mu_{n_{i}}$ converges to $\mu$ in $\mathcal{M}(\mathcal{X})$ for some subsequence $\left\lbrace n_{i} \right\rbrace $ of natural numbers. Then $\mu\in \mathcal{M}(\mathcal{X}, T)$, and moreover
$$ \limsup\limits_{i\rightarrow \infty}\dfrac{1}{n_{i}}\int \varphi_{n_{i}} d \nu_{n_{i}}\leq \mathcal{F}_{*}(\mu).$$ \end{lem} \begin{lem}\label{lem}
Let $A$ be a finite set. Suppose that probability measures $\mu_{n}$ on $A$ converge to some $\mu$ in the weak$^{*}$ topology. Then there exist probability measures $\pi_{n}~ (n\geq 1)$ on $A\times A$ such that
\begin{itemize}
\item $\pi_{n}$ is a coupling between $\mu_{n}$ and $\mu$. Namely the first and second marginals of $\pi_{n}$ are given by $\mu_{n}$ and $\mu$ respectively.
\item $\pi_{n}$ converge to $({\rm id}\times {\rm id})_{*}\mu$ in the weak$^{*}$ topology. Namely
\begin{equation*}
\pi_{n}(a, b)\rightarrow
\begin{cases}0, ~~~~~\text{if}~ (a\neq b), \\[5pt] \mu(a), ~~~\text{if}~ (a=b).\\
\end{cases}
\end{equation*}
\end{itemize} \end{lem} \begin{theorem}\label{thmc}
Assume that $\overline{{\rm mdim_{M}}}(\mathcal{X}, T, d)<\infty$ for all $d\in \mathcal{D}(\mathcal{X})$ and that there exists $K>0$ such that
$|\varphi_{n+1}(x)-\varphi_{n}(x)|\leq K$ for all $x\in \mathcal{X}$ and $n\in \mathbb{N}$. If $d$ has the tame growth of covering numbers, then
$$ {\rm mdim}_{H}(\mathcal{X}, T, d, \mathcal{F}) \leq \sup\limits_{\mu \in \mathcal{M}(\mathcal{X}, T)}(\underline{{\rm rdim}}(\mathcal{X}, T, d, \mu)+ \mathcal{F}_{*}(\mu)).$$ \end{theorem} Theorem \ref{thmc} follows from Lemma \ref{le2} and Theorem \ref{ss}. \begin{theorem}\label{ss}
Assume that $\overline{{\rm mdim_{M}}}(\mathcal{X}, T, d)<\infty$ for all $d\in \mathcal{D}(\mathcal{X})$ and that there exists $K>0$ such that
$|\varphi_{n+1}(x)-\varphi_{n}(x)|\leq K$ for all $x\in \mathcal{X}$ and $n\in \mathbb{N}$. Then for every metric $d\in \mathcal{D}(\mathcal{X})$,
$${\rm mdim}_{H, L^{1}}(\mathcal{X}, T, d, \mathcal{F}) \leq \sup\limits_{\mu \in \mathcal{M}(\mathcal{X}, T)}(\underline{{\rm rdim}}(\mathcal{X}, T, d, \mu)+ \mathcal{F}_{*}(\mu)).$$ \end{theorem} \begin{proof}[Proof of Theorem \ref{ss}]
We extend the definition of $\overline{d}_{n}$. For $x=(x_{0}, x_{1}, \cdots, x_{n-1})$ and $y=(y_{0}, y_{1}, \cdots, y_{n-1})$ in $\mathcal{X}^{n}$, we set $$\overline{d}_{n}(x, y)=\dfrac{1}{n}\sum\limits_{i=0}^{n-1}d(x_{i}, y_{i}).$$
Let $0<c<1$ and $s<{\rm mdim}_{H, L^{1}}(\mathcal{X}, T, d, \mathcal{F})$ be arbitrary. We will show that there exists an invariant probability measure $\mu$ on $\mathcal{X}$ such that
\begin{align}\label{eq4}
\underline{{\rm rdim}}(\mathcal{X}, T, d, \mu)+\mathcal{F}_{*}(\mu)\geq cs-(1-c)(\left\| \varphi_{1}\right\|_{\infty}+K).
\end{align}
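A step that is used repeatedly below, and is only implicit in the argument, is that the normalized potential is uniformly bounded; we record it here for convenience. Since $|\varphi_{n+1}(x)-\varphi_{n}(x)|\leq K$, induction on $n$ gives

```latex
\|\varphi_{n}\|_{\infty}\;\leq\;\|\varphi_{1}\|_{\infty}+(n-1)K,
\qquad\text{hence}\qquad
\Bigl\|\frac{\varphi_{n}}{n}\Bigr\|_{\infty}\;\leq\;\|\varphi_{1}\|_{\infty}+K
\quad\text{for all }n\geq 1.
```

This is why the parameter $t$ appearing below can be taken in the interval $\left[-\left\| \varphi_{1} \right\|_{\infty}-K,\ \left\| \varphi_{1} \right\|_{\infty}+K \right]$.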
Take $\eta>0$ satisfying ${\rm mdim}_{H, L^{1}}(\mathcal{X}, T, d, \mathcal{F})-2\eta >s.$ Let $\delta_{0}=\delta_{0}(c)\in (0,1)$ be a constant given by Lemma \ref{lem3}. There exist $0<\delta <\delta_{0}$ and a sequence $n_{1}<n_{2}<n_{3}<\cdots\rightarrow \infty$ satisfying
$$ {\rm dim}_{H}(\mathcal{X}, \overline{d}_{n_{k}}, \varphi_{n_{k}}, \delta) >(s+2\eta)n_{k}.$$ \begin{claim}
There exists $t\in \left[-\left\| \varphi_{1} \right\|_{\infty}-K, \left\| \varphi_{1} \right\|_{\infty}+K \right] $ such that for infinitely many $n_{k}$
$$ {\rm dim}_{H}\left( (\dfrac{ \varphi_{n_{k}}}{n_{k}})^{-1}[t, t+\eta], \overline{d}_{n_{k}}, \delta\right) \geq (s-t)n_{k}.$$ \end{claim} \begin{proof}
Since ${\rm dim}_{H}(\mathcal{X}, \overline{d}_{n_{k}}, \varphi_{n_{k}}, \delta) >(s+2\eta)n_{k}$, we have $$H_{\delta}^{(s+2\eta)n_{k}}(\mathcal{X}, \overline{d}_{n_{k}}, \varphi_{n_{k}})\geq 1.$$ Set $m=\lceil\dfrac{2\left\| \varphi_{1}\right\|_{\infty}+2K }{\eta}\rceil$ and consider a decomposition of $\mathcal{X}$, namely,
$$\mathcal{X}=\bigcup\limits_{l=0}^{m-1}(\dfrac{ \varphi_{n_{k}}}{n_{k}})^{-1}[-\left\| \varphi_{1}\right\|_{\infty}-K+l\eta,\ -\left\| \varphi_{1}\right\|_{\infty}-K+(l+1)\eta].$$
Then there exists $t\in \left\lbrace -\left\| \varphi_{1} \right\|_{\infty}-K+l\eta| l=0, 1, \cdots, m-1 \right\rbrace $ such that for infinitely many $n_{k}$
$$ H_{\delta}^{(s+2\eta)n_{k}}\left( (\dfrac{ \varphi_{n_{k}}}{n_{k}} )^{-1}[t, t+\eta], \overline{d}_{n_{k}}, \varphi_{n_{k}}\right)\geq \dfrac{1}{m}.$$
Since $(s+2\eta)n_{k}- \varphi_{n_{k}}\geq (s+2\eta)n_{k}-(t+\eta)n_{k}=(s-t)n_{k}+\eta n_{k}$ on the set $( \varphi_{n_{k}}/n_{k} )^{-1}[t, t+\eta],$
\begin{align*}
H_{\delta}^{(s+2\eta)n_{k}}((\dfrac{ \varphi_{n_{k}}}{n_{k}})^{-1}[t, t+\eta], \overline{d}_{n_{k}}, \varphi_{n_{k}})&\leq H_{\delta}^{(s-t)n_{k}+\eta n_{k}}((\dfrac{ \varphi_{n_{k}}}{n_{k}})^{-1}[t, t+\eta], \overline{d}_{n_{k}})\\&\leq \delta^{\eta n_{k}}\cdot H_{\delta}^{(s-t)n_{k}}((\dfrac{ \varphi_{n_{k}}}{n_{k}})^{-1}[t, t+\eta], \overline{d}_{n_{k}}).
\end{align*}
Hence for infinitely many $n_{k}$,
$$ H_{\delta}^{(s-t)n_{k}}\left( (\dfrac{ \varphi_{n_{k}}}{n_{k}})^{-1}[t, t+\eta], \overline{d}_{n_{k}}\right) \geq \dfrac{\delta^{-\eta n_{k}}}{m}.$$
The right-hand side is larger than one for sufficiently large $n_{k}$. Then for such $n_{k}$
$${\rm dim}_{H}\left( (\dfrac{ \varphi_{n_{k}}}{n_{k}})^{-1}[t, t+\eta], \overline{d}_{n_{k}}, \delta\right) \geq (s-t)n_{k}.$$ \end{proof} By choosing a subsequence of $n_{k}$ (also denoted by $\left\lbrace n_{k} \right\rbrace $), we may assume that the condition $$ {\rm dim}_{H}\left( (\dfrac{ \varphi_{n_{k}}}{n_{k}})^{-1}[t, t+\eta], \overline{d}_{n_{k}}, \delta\right)\geq (s-t)n_{k}$$ holds for all $n_{k}$. Noting that $0<\delta<\delta_{0}(c)$, we apply Lemma \ref{lem3} to the subspace $(\dfrac{ \varphi_{n_{k}}}{n_{k}})^{-1}[t, t+\eta]\subset \mathcal{X}$. Then we can find a Borel probability measure $\nu_{k}$ supported on $(\dfrac{ \varphi_{n_{k}}}{n_{k}})^{-1}[t, t+\eta]$ such that \begin{align}\label{eq7} \nu_{k}(E)\leq ({\rm diam}(E, \overline{d}_{n_{k}}))^{c(s-t)n_{k}}~~\text{for all}~E\subset \mathcal{X}~\text{with} ~~{\rm diam}(E, \overline{d}_{n_{k}})<\dfrac{\delta}{6}. \end{align} Notice that $\nu_{k}$ is not necessarily invariant under $T$. Set $$ \mu_{k}=\dfrac{1}{n_{k}}\sum\limits_{n=0}^{n_{k}-1}T_{*}^{n}\nu_{k}.$$ By choosing a subsequence (also denoted by $\left\lbrace n_{k} \right\rbrace $ again) we can assume that $\mu_{k}$ converges to some $\mu \in \mathcal{M}(\mathcal{X}, T)$ in the $weak^{*}$ topology. By Lemma \ref{lem2} $$\limsup\limits_{k\rightarrow \infty}\dfrac{1}{n_{k}} \int_{\mathcal{X}} \varphi_{n_{k}}d\nu_{k}\leq \mathcal{F}_{*}(\mu)=\lim\limits_{k\rightarrow \infty}\dfrac{1}{n_{k}} \int_{\mathcal{X}} \varphi_{n_{k}} d \mu.$$ On the other hand $$\int_{\mathcal{X}} \dfrac{ \varphi_{n_{k}}}{n_{k}} d \nu_{k} \geq t $$ since $\nu_{k}$ is supported on the set $(\dfrac{ \varphi_{n_{k}}}{n_{k}})^{-1}[t, t+\eta]$. Hence $ \mathcal{F}_{*}(\mu)\geq t.$ Moreover, $0\leq\overline{\rm{rdim} }(\mathcal{X}, T, d, \mu)<\infty$. It remains to prove \begin{align}\label{eq5} \underline{\rm rdim}(\mathcal{X}, T, d, \mu )\geq c(s-t). \end{align}
If the above inequality holds, then we obtain (\ref{eq4}) (recall $\left| t \right|\leq \left\| \varphi_{1} \right\|_{\infty}+K $):
$$\underline{{\rm rdim}}(\mathcal{X}, T, d, \mu)+\mathcal{F}_{*}(\mu)\geq c(s-t)+t=cs+(1-c)t \geq cs-(1-c)(\left\| \varphi_{1} \right\|_{\infty}+K).$$ So the rest of the problem is to prove $(\ref{eq5})$. This part of the proof is the same as in \cite{LT}. The method is a ``rate distortion theory version'' of Misiurewicz's technique \cite{Mis76} (a famous proof of the standard variational principle), first developed in \cite{LT18}. The paper \cite{LT} explains more of the background ideas behind the proof, which we do not repeat here.
Let $\epsilon$ be an arbitrary positive number with $2 \epsilon \log (1/\epsilon)\leq \delta/ 10. $ We will show a lower bound on the rate distortion function of the form $$ R(d, \mu, \epsilon)\geq c(s-t)\log (1/\epsilon)+ \text{small error terms.}$$ Let $X$ and $Y=(Y_{0}, Y_{1}, \cdots, Y_{m-1})$ be random variables defined on a probability space $(\Omega, \mathbb{P})$ such that $X, Y_{0}, \cdots, Y_{m-1}$ take values in $\mathcal{X}$ and satisfy $$ {\rm Law}(X)=\mu, ~~\mathbb{E}\left( \dfrac{1}{m}\sum\limits_{j=0}^{m-1}d (T^{j}X, Y_{j})\right) <\epsilon.$$ We would like to establish a lower bound on the mutual information $I(X; Y)$. For this purpose, we can assume that $Y$ takes only finitely many values. Let $\mathcal{Y} \subset \mathcal{X}^{m}$ be the (finite) set of possible values of $Y$.
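For orientation, we sketch how this lower bound will be used; see \cite{LT18} for the precise definition of the rate distortion function, which we only paraphrase here. Roughly speaking, $R(d,\mu,\epsilon)$ is an infimum of normalized mutual informations $\frac{1}{m}I(X;Y)$ over all $m\geq 1$ and all admissible $Y$ as above, so it suffices to show that

```latex
\frac{1}{m} I(X;Y)\;\geq\; c(s-t)\log (1/\epsilon)+\text{small error terms}
\qquad\text{whenever}\qquad
{\rm Law}(X)=\mu,\quad
\mathbb{E}\Bigl( \frac{1}{m}\sum_{j=0}^{m-1}d (T^{j}X, Y_{j})\Bigr) <\epsilon,
```

for every $m\geq 1$; the same lower bound then passes to $R(d, \mu, \epsilon)$.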
We choose $\tau >0$ satisfying \begin{align}\label{eqa} \tau \leq \min(\dfrac{\epsilon}{3}, \dfrac{\delta}{20}),~~\dfrac{\tau}{2}+\mathbb{E}\left( \dfrac{1}{m}\sum_{j=0}^{m-1}d(T^{j}X, Y_{j})\right) <\epsilon. \end{align} We take a measurable partition $\mathcal{P}=\left\lbrace P_{1}, \cdots, P_{L} \right\rbrace $ of $\mathcal{X}$ such that for all $1\leq l \leq L$ $$ {\rm diam}(P_{l}, d)< \dfrac{\tau}{2},~~\mu(\partial P_{l})=0.$$ We choose a point $p_{l} \in P_{l}$ for each $1\leq l \leq L$. Set $A= \left\lbrace p_{1}, \cdots, p_{L} \right\rbrace $. We define a map $\mathcal{P}: \mathcal{X} \rightarrow A$ by $\mathcal{P}(x)=p_{l}$ for $x \in P_{l}$. It follows that \begin{align}\label{eq6} d(x, \mathcal{P}(x))< \epsilon. \end{align} For $n \geq 1$, we set $\mathcal{P}^{n}(x)= (\mathcal{P}(x), \mathcal{P}(T(x)), \cdots, \mathcal{P}(T^{n-1}x)).$ \begin{claim}\label{cla}
The pushforward measure $\mathcal{P}_{*}^{n_{k}}\nu_{k}$ satisfies
$$ \mathcal{P}_{*}^{n_{k}}\nu_{k}(E)\leq (\tau + {\rm diam}(E, \overline{d}_{n_{k}}))^{c(s-t)n_{k}}~~\text{for all}~E\subset A^{n_{k}}~\text{with}~{\rm diam}(E, \overline{d}_{n_{k}})<\dfrac{\delta}{10}.$$ \end{claim} \begin{proof}
From ${\rm diam}(P_{l}, d)<\tau/2$ and $\tau \leq \delta/20,$ if ${\rm diam}(E, \overline{d}_{n_{k}})<\delta /10$ then
$${\rm diam}((\mathcal{P}_{n_{k}})^{-1}E, \overline{d}_{n_{k}})<\tau+{\rm diam}(E, \overline{d}_{n_{k}})<\dfrac{\delta}{6}.$$
By $(\ref{eq7})$, the measure $\mathcal{P}_{*}^{n_{k}}\nu_{k}(E)=\nu_{k}((\mathcal{P}^{n_{k}})^{-1} E)$ is bounded by
$$ ({\rm diam}((\mathcal{P}^{n_{k}})^{-1} E, \overline{d}_{n_{k}}))^{c(s-t)n_{k}}< (\tau + {\rm diam}(E, \overline{d}_{n_{k}}))^{c(s-t)n_{k}}.$$ \end{proof} From $\mu_{k}\rightarrow \mu$ and $\mu(\partial P_{l})=0,$ we have $\mathcal{P}_{*}^{m}\mu_{k}\rightarrow \mathcal{P}_{*}^{m}\mu$. By Lemma {\ref{lem}}, there exists a coupling $\pi_{k}$ between $\mathcal{P}_{*}^{m}\mu_{k}$ and $\mathcal{P}_{*}^{m}\mu$ such that $\pi_{k}\rightarrow (id \times id)_{*}\mathcal{P}_{*}^{m}\mu.$ Let $X(k)$ be a random variable coupled to $\mathcal{P}^{m}(X)$ such that it takes values in $A^{m}$ and ${\rm Law}(X(k), \mathcal{P}^{m}(X))=\pi_{k}.$ In particular, ${\rm Law}(X(k))=\mathcal{P}_{*}^{m}\mu_{k}.$ From $\pi_{k} \rightarrow (id \times id)_{*}\mathcal{P}_{*}^{m}\mu,$ $$\mathbb{E}\overline{d}_{m}(X(k), \mathcal{P}^{m}(X))\rightarrow 0.$$ The random variables $X(k)$ and $Y$ are coupled by the probability mass function
$$ \sum \limits_{x'\in A^{m}}\pi_{k}(x, x')\mathbb{P}(Y=y|\mathcal{P}^{m}(X)=x')~~(x\in A^{m}, y \in \mathcal{Y}),$$ which converges to $\mathbb{P}(\mathcal{P}^{m}(X)=x, Y=y).$ Then by Lemma \ref{le1}, \begin{align}\label{lim} I(X(k); Y)\rightarrow I(\mathcal{P}^{m}(X); Y). \end{align} By the triangle inequality \begin{align*} \overline{d}_{m}(X(k), Y)\leq& \overline{d}_{m}(X(k), \mathcal{P}^{m}(X))+ \overline{d}_{m}(\mathcal{P}^{m}(X), (X, TX, \cdots, T^{m-1}X))\\&+\overline{d}_{m}((X, TX, \cdots, T^{m-1}X), Y) \end{align*} We have $\mathbb{E}\overline{d}_{m}(X(k), \mathcal{P}^{m}(X))\rightarrow 0$, ${\rm diam}(P_{l}, d)<\tau/ 2$ for all $1\leq l \leq L$ and $\tau /2+ \mathbb{E} \overline{d}_{m}((X, TX, \cdots, T^{m-1}X), Y)<\epsilon$ in (\ref {eqa}). Then \begin{align}\label{ieq} \mathbb{E} \overline{d}_{m}(X(k), Y)<\epsilon ~~\text{for sufficiently large}~ k \end{align}
Let $n_{k}=qm+r$ with $ m\leq r \leq 2m-1.$ Fix a point $a\in \mathcal{X}$. We denote by $\delta_{a}(\cdot)$ the delta probability measure at $a$ on $\mathcal{X}$. For $x=(x_{0}, \cdots, x_{n-1}) \in \mathcal{X}^{n}$, we let $x_{k}^{l}$ denote the $(l-k+1)$-tuple $x_{k}^{l}=(x_{k}, \cdots, x_{l})$ for $0\leq k \leq l <n$. We consider a conditional probability mass function
$$ \rho_{k}(y|x)=\mathbb{P}(Y=y| X(k)=x)$$ for $x, y \in \mathcal{X}^{m}$ with $\mathbb{P}(X(k)=x)=\mathcal{P}_{*}^{m}\mu_{k}(x)>0.$ We define probability mass functions $\sigma_{k,0}(\cdot| x), \cdots, \sigma_{k,m-1}(\cdot| x)$ on $\mathcal{X}^{n_{k}}$ by \begin{align}\label{3.9}
\sigma_{k,j}(y|x)=\prod_{i=0}^{q-1} \rho_{k}(y_{j+im}^{j+im+m-1}| x_{j+im}^{j+im+m-1})\times \prod_{n \in [0, j)\cup [mq+j, n_{k})}\delta_{a}(y_{n}). \end{align} We set \begin{align}\label{eq}
\sigma_{k}(y|x)=\dfrac{\sigma_{k,0}(y|x)+\sigma_{k,1}(y|x)+ \cdots + \sigma_{k, m-1}(y|x)}{m}. \end{align} Let $X'(k)$ be a random variable taking values in $\mathcal{X}$ with ${\rm Law}(X'(k))=\nu_{k}$. Set $Z(k)=\mathcal{P}^{n_{k}}(X'(k)).$ We define a random variable $W(k)$ taking values in $\mathcal{X}^{n_{k}}$ and coupled to $Z(k)$ by the condition
$$\mathbb{P}(W(k)=y| Z(k)=x)=\sigma_{k}(y|x).$$ For $0\leq j<m$ we also define $W(k, j)$ by
$$ \mathbb{P}(W(k, j)=y| Z(k)=x)=\sigma_{k,j}(y|x).$$ \begin{claim}\label{2}
$\dfrac{1}{m} I(X(k); Y) \geq \dfrac{1}{n_{k}} I(Z(k); W(k)).$ \end{claim}
\begin{proof}
The mutual information is a convex function of the conditional probability distribution (Lemma \ref{lemc}). Hence
$$ I(Z(k); W(k))\leq \dfrac{1}{m}\sum\limits_{j=0}^{m-1}I( Z(k); W(k, j)).$$
By the subadditivity under conditional independence (Lemma \ref{lems}),
$$ I(Z(k); W(k, j))\leq \sum\limits_{i=0}^{q-1}I(Z(k); W(k, j)_{j+im}^{j+im+m-1}).$$
The term $I(Z(k); W(k, j)_{j+im}^{j+im+m-1})$ is equal to
$$ I(\mathcal{P}^{m}(T^{j+im}X'(k)); W(k,j)_{j+im}^{j+im+m-1})=I(\mathcal{P}_{*}^{m}T_{*}^{j+im}\nu_{k}, \rho_{k}).$$
Therefore
\begin{align*}
\dfrac{m}{n_{k}}I(Z(k); W(k))&\leq \dfrac{1}{n_{k}}\sum\limits_{\substack{0\leq j<m\\ 0\leq i< q}}I(\mathcal{P}_{*}^{m}T_{*}^{j+im}\nu_{k}, \rho_{k})\\&\leq \dfrac{1}{n_{k}}\sum\limits_{n=0}^{n_{k}-1}I(\mathcal{P}_{*}^{m}T_{*}^{n}\nu_{k}, \rho_{k})\\&\leq I(\dfrac{1}{n_{k}}\sum\limits_{n=0}^{n_{k}-1}\mathcal{P}_{*}^{m}T_{*}^{n}\nu_{k}, \rho_{k}) ~\text{by the concavity in Lemma \ref{lemc}}\\&= I(\mathcal{P}_{*}^{m}\mu_{k}, \rho_{k})~\text{by}~\mu_{k}=\dfrac{1}{n_{k}}\sum\limits_{n=0}^{n_{k}-1}T_{*}^{n}\nu_{k}\\&=I(X(k); Y).
\end{align*} \end{proof}
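For the reader's convenience, we summarize the two standard properties of mutual information invoked in the computation above (Lemma \ref{lemc}); they can be found in any information theory textbook, e.g.\ Cover--Thomas.

```latex
% Notation as in the proof: I(p,\rho) denotes the mutual information I(X;Y)
% of a pair with Law(X)=p and \mathbb{P}(Y=y|X=x)=\rho(y|x), p a probability
% mass function on a finite set and \rho a channel.
\text{(convexity)}\quad \rho \mapsto I(p,\rho)\ \text{is convex for fixed } p;
\qquad
\text{(concavity)}\quad p \mapsto I(p,\rho)\ \text{is concave for fixed } \rho.
```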
\begin{claim}\label{cla1}
For sufficiently large $k$
$$\mathbb{E}(\overline{d}_{n_{k}}(Z(k), W(k)))<\epsilon. $$ \end{claim} \begin{proof}
By (\ref{eq}), we have $$\mathbb{E}(\overline{d}_{n_{k}}(Z(k), W(k)))=\dfrac{1}{m}\sum\limits_{ j=0}^{m-1}\mathbb{E}(\overline{d}_{n_{k}}(Z(k), W(k,j))). $$
From $Z(k)=\mathcal{P}^{n_{k}}(X'(k))$, the distance $\overline{d}_{n_{k}}(Z(k), W(k, j))$ is bounded by
$$ \dfrac{r\cdot {\rm diam}(\mathcal{X}, d)}{n_{k}}+\dfrac{m}{n_{k}}\sum\limits_{i=0}^{q-1}\overline{d}_{m}(\mathcal{P}^{m}(T^{j+im} X'(k)), W(k, j)_{j+im}^{j+im+m-1}).$$
The expectation $\mathbb{E}\overline{d}_{m}(\mathcal{P}^{m}(T^{j+im} X'(k)), W(k, j)_{j+im}^{j+im+m-1})$ is equal to
$$\sum\limits_{x, y \in \mathcal{X}^{m}}\overline{d}_{m}(x, y)\rho_{k}(y|x)\mathcal{P}_{*}^{m}T_{*}^{j+im} \nu_{k}(x).$$
Therefore
\begin{align*}
\mathbb{E}(\overline{d}_{n_{k}}(Z(k), W(k))) &
\leq \dfrac{r\cdot {\rm diam}(\mathcal{X}, d)}{n_{k}}+\sum\limits_{x, y \in \mathcal{X}^{m}}\overline{d}_{m}(x, y)\rho_{k}(y|x)\left( \dfrac{1}{n_{k}}\sum\limits_{\substack{0\leq j<m\\ 0\leq i< q}}\mathcal{P}_{*}^{m} T_{*}^{j+im}\nu_{k}(x) \right) \\& =\dfrac{r\cdot {\rm diam}(\mathcal{X}, d)}{n_{k}}+\sum\limits_{x, y \in \mathcal{X}^{m}}\overline{d}_{m}(x, y)\rho_{k}(y|x)\left( \dfrac{1}{n_{k}}\sum\limits_{n=0}^{n_{k}-1} \mathcal{P}_{*}^{m} T_{*}^{n}\nu_{k}(x)\right) \\& = \dfrac{r\cdot {\rm diam}(\mathcal{X}, d)}{n_{k}}+\sum\limits_{x, y \in \mathcal{X}^{m}}\overline{d}_{m}(x, y)\rho_{k}(y|x) \mathcal{P}_{*}^{m}\mu_{k}(x) \\&=\dfrac{r\cdot {\rm diam}(\mathcal{X}, d)}{n_{k}}+\mathbb{E}\overline{d}_{m}(X(k), Y).
\end{align*}
From $r\leq 2m-1$, the first term tends to $0$ as $k\rightarrow\infty$; together with (\ref{ieq}), this is less than $\epsilon$ for large $k$. \end{proof} Recall $2\epsilon \log (1/\epsilon)\leq \delta/10$ and $\tau\leq \min(\epsilon/3, \delta/ 20).$ The measure ${\rm Law}(Z(k))=\mathcal{P}_{*}^{n_{k}}\nu_{k}$ satisfies the ``scaling law'' given by Claim \ref{cla}. Then we apply Lemma \ref{ml2} to $(Z(k), W(k))$ with Claim \ref{cla1}, which provides \begin{align} I(Z(k); W(k))\geq c(s-t)n_{k}\log (1/\epsilon)-T(c(s-t)n_{k}+1) ~~\text{for large}~k. \end{align} Here $T$ is a universal positive constant. From Claim \ref{2}, $$\dfrac{1}{m}I(X(k); Y)\geq c(s-t) \log (1/\epsilon)-T(c(s-t)+\dfrac{1}{n_{k}}).$$ We know $I(X(k); Y)\rightarrow I(\mathcal{P}^{m}(X); Y)$ as $k\rightarrow \infty$ in (\ref{lim}). Hence $$ \dfrac{1}{m}I(\mathcal{P}^{m}(X); Y)\geq c(s-t) \log (1/\epsilon)-cT(s-t).$$ By the data-processing inequality (Lemma \ref{dp}) $$\dfrac{1}{m}I(X;Y)\geq \dfrac{1}{m}I(\mathcal{P}^{m}(X); Y)\geq c(s-t) \log (1/\epsilon)-cT(s-t).$$ This proves that for any $\epsilon>0$ with $2\epsilon\log (1/\epsilon)\leq \delta/10$ $$ R(d, \mu, \epsilon)\geq c(s-t) \log (1/\epsilon)-cT(s-t).$$ Thus we get $(\ref{eq5})$: $$\underline{\rm rdim}(\mathcal{X}, T, d, \mu)=\liminf\limits_{ \epsilon \rightarrow 0}\dfrac{R(d, \mu, \epsilon)}{\log (1/\epsilon)}\geq c(s-t).$$ This completes the proof of the theorem. \end{proof} \section{Proof of Theorem \ref{aa}}\label{d} In this section, we give some results on combinatorial topology and the dynamical tiling construction, and we prove the following theorem. \begin{theorem}\label{aa}
If $(\mathcal{X}, T)$ has the marker property and there exists $K>0$ such that $|\varphi_{n+1}-\varphi_{n}|<K$ for every $n$, then there exists a metric ${ d} \in \mathcal{D}(\mathcal{X})$ satisfying
$$ {\rm mdim}(\mathcal{X}, T, \mathcal{F})=\overline{\rm mdim}_{M}(\mathcal{X}, T, d, \mathcal{F}).$$ \end{theorem}
\subsection{Preparations on combinatorial topology}
In this subsection we prepare some definitions and results about simplicial complexes. Recall that we have assumed that simplicial complexes are always finite (having only finitely many vertices).
Let $P$ be a simplicial complex. We denote by ${\rm Ver}(P)$ the set of vertices of $P.$ For a vertex $v$ of $P$ we define the $\bf{open\ star}$ $O_{P}(v)$ as the union of the open simplexes of $P$ one of whose vertices is $v$. Here $\{v\}$ itself is an open simplex. So $O_{P}(v)$ is an open neighborhood of $v$, and $\{O_{P}(v)\}_{v\in {\rm Ver}(P)}$ forms an open cover of $P.$ For a simplex $\Delta\subset P$ we set $O_{P}(\Delta)=\bigcup_{v\in {\rm Ver}(\Delta)}O_{P}(v).$ \begin{defn}
Let $P$ and $Q$ be simplicial complexes. A map $f: P\rightarrow Q$ is said to be ${\bf simplicial}$ if for every simplex $\Delta\subset P$ the image $f(\Delta)$ is a simplex in $Q$ and
\begin{equation*}
f(\sum_{v\in {\rm Ver}(\Delta)}\lambda_{v}v)=\sum_{v\in {\rm Ver}(\Delta)}\lambda_{v}f(v),
\end{equation*}
where $0\leq \lambda_{v}\leq 1 $ and $\sum_{v\in {\rm Ver}(\Delta)}\lambda_{v}= 1.$ \end{defn} \begin{defn}
Let $V$ be a real vector space. A map $f:P\rightarrow V$ is said to be ${\bf linear}$ if for every simplex $\Delta\subset P$
\begin{equation*}
f(\sum_{v\in {\rm Ver}(\Delta)}\lambda_{v}v)=\sum_{v\in {\rm Ver}(\Delta)}\lambda_{v}f(v),
\end{equation*}
where $0\leq \lambda_{v}\leq 1 $ and $\sum_{v\in {\rm Ver}(\Delta)}\lambda_{v}= 1.$ \end{defn} We denote the space of linear maps $f:\ P\rightarrow V$ by ${\rm Hom}(P,V).$ When $V$ is a Banach space, the space ${\rm Hom}(P,V)$ is topologized as a product space $V^{{\rm Ver}(P)}.$ \begin{lem}\label{key}\cite{LT}
Let $(V, ||\cdot||)$ be a Banach space and $P$ a simplicial complex.\\
(1) If $f: P\rightarrow V$ is a linear map with ${\rm diam}\, f(P)\leq 2$ then for any $0< \epsilon \leq 1$
\begin{equation*}
\#(f(P),||\cdot||,\epsilon)\leq C(P)\cdot(1/\epsilon)^{{\rm dim}\, P}.
\end{equation*}
Here the left-hand side is the minimum cardinality of open covers $\mathcal{U}$ of $f(P)$ satisfying ${\rm diam}\, U< \epsilon$ for all $U\in \mathcal{U}$, and $C(P)$ is a positive constant depending only on ${\rm dim}\, P$ and the number of simplexes of $P.$\\
(2) Suppose $V$ is infinite dimensional. Then the set
\begin{equation}
\{f\in {\rm Hom}(P,V)\mid f\ \text{is injective}\}\label{xiaoming}
\end{equation}
is dense in ${\rm Hom}(P,V)$.\\
(3) Let $(\mathcal{X}, {d})$ be a compact metric space and $\epsilon,\ \delta> 0.$ Let $\pi:\ \mathcal{X}\rightarrow P$ be a continuous map satisfying ${\rm diam}\, \pi^{-1}(O_{P}(v))< \epsilon$ for all $v\in {\rm Ver}(P).$ Let $f:\ \mathcal{X}\rightarrow V$ be a continuous map such that
\[
{{d}} (x,y)<\epsilon \Longrightarrow ||f(x)-f(y)||<\delta.
\]
Then there exists a linear map $g:\ P\rightarrow V$ satisfying
\begin{equation*}
||f(x)-g(\pi(x))||< \delta
\end{equation*}
for all $x\in \mathcal{X}.$ Moreover if $f(\mathcal{X})$ is contained in the open unit ball $B_{1}^{o}(V)$ then we can assume $g(P)\subset B_{1}^{o}(V).$ \end{lem}
\begin{defn}
Let $f:\ \mathcal{X}\rightarrow P$ be a continuous map from a topological space $\mathcal{X}$ to a simplicial complex $P.$ It is said to be ${\bf essential}$ if there is no proper subcomplex of $P$ containing $f(\mathcal{X})$. This is equivalent to the condition that for any simplex $\Delta \subset P$
\begin{equation*}
\bigcap_{v\in {\rm Ver}(\Delta)}f^{-1}(O_{P}(v))\neq \emptyset.
\end{equation*} \end{defn}
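As a simple sanity check of this definition (our illustration, not taken from \cite{LT}), consider a constant map:

```latex
For example, let $f:\mathcal{X}\rightarrow P$ be the constant map $f(x)=v$ for a
vertex $v\in {\rm Ver}(P)$. The subcomplex $\{v\}$ already contains
$f(\mathcal{X})$, so $f$ is essential if and only if $P=\{v\}$. Equivalently,
$v\in O_{P}(w)$ holds only for $w=v$, so the intersection condition
$\bigcap_{w\in {\rm Ver}(\Delta)}f^{-1}(O_{P}(w))\neq \emptyset$ can hold only
for the simplex $\Delta=\{v\}$.
```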
\begin{lem}\label{lem4}\cite{LT}
Let $f:\ \mathcal{X}\rightarrow P$ be a continuous map from a topological space $\mathcal{X}$ to a simplicial complex $P.$ There exists a subcomplex $P'\subset P$ such that $f(\mathcal{X})\subset P'$ and $f:\ \mathcal{X}\rightarrow P'$ is essential. \end{lem} For two open covers $\mathcal{U}$ and $\mathcal{V}$ of $\mathcal{X}$, we say that $\mathcal{V}$ is a refinement of $\mathcal{U}$ (denoted by $\mathcal{U}\prec \mathcal{V}$) if for every $V\in \mathcal{V}$ there exists $U\in \mathcal{U}$ containing $V$. \begin{lem}\label{lem5.3}\cite{LT}
Let $\mathcal{X}$ be a topological space, $P$ and $Q$ simplicial complexes. Let $\pi:\ \mathcal{X}\rightarrow P$
and $q_{i}:\ \mathcal{X}\rightarrow Q$ $(1\leq i\leq N)$ be continuous maps. We suppose that $\pi$ is essential and satisfies for all $1\leq i\leq N$
\begin{equation*}
\{q_{i}^{-1}(O_{Q}(w))\}_{w\in {\rm Ver}(Q)}\prec \{\pi^{-1}(O_{P}(v))\}_{v\in {\rm Ver}(P)}\ (\text{as open covers of}\ \mathcal{X}).
\end{equation*}
Then there exist simplicial maps $h_{i}:\ P\rightarrow Q$ $(1\leq i\leq N)$ satisfying the following three conditions.\\
(1) For all $1\leq i\leq N$ and $x\in \mathcal{X}$ the two points $q_{i}(x)$ and $h_{i}(\pi(x))$ belong to the same simplex of $Q$.\\
(2) Let $1\leq i\leq N$ and let $Q' \subset Q$ be a subcomplex. If a simplex $\Delta \subset P$ satisfies $\pi^{-1}(O_{P}(\Delta))\subset q_{i}^{-1}(Q')$ then $h_{i}(\Delta)\subset Q'$.\\
(3) Let $\Delta \subset P$ be a simplex. If $q_{i}=q_{j}$ on $\pi^{-1}(O_{P}(\Delta))$ then $h_{i}=h_{j}$ on $\Delta$. \end{lem} \subsection{Dynamical tiling construction}\label{5.2} The purpose of this subsection is to define a ``dynamical decomposition'' of the real line, which was first introduced in \cite{GLT16}. This will be the basis of the construction in the proof of Theorem 1.8.
Let $(\mathcal{X}, T)$ be a dynamical system and $\psi:\ \mathcal{X}\rightarrow [0,1]$ a continuous function. Take $x\in \mathcal{X}$. We consider \begin{equation}\label{xiaohua}
\{(a, \frac{1}{\psi(T^{a}x)})| a\in \mathbb{Z} \ \text{with} \ \psi(T^{a}x)> 0\}. \end{equation} This is a discrete subset of the plane. We assume that (\ref{xiaohua}) is nonempty for every $x\in \mathcal{X}$. Namely for every $x\in \mathcal{X}$ there exists an $a\in \mathbb{Z}$ with $\psi(T^{a}x)> 0$. Let $\mathbb{R}^{2}=\bigcup_{a\in \mathbb{Z}}V_{\psi}(x, a)$ be the associated $\bf{Voronoi\ diagram},$ where $V_{\psi}(x,a)$ is the (convex) set of $u\in \mathbb{R}^{2}$ satisfying \begin{equation*}
|u-(a,\frac{1}{\psi(T^{a}x)})|\leq |u-(b, \frac{1}{\psi(T^{b}x)})| \end{equation*} for any $b\in \mathbb{Z}$ with $\psi(T^{b}x)> 0.$ (If $\psi(T^{a}x)=0$ then $V_{\psi}(x,a)$ is empty.) We set \begin{equation*} I_{\psi}(x,a)= V_{\psi}(x,a)\cap (\mathbb{R}\times \{0\}). \end{equation*} See the figure in \cite{LT}. We naturally identify $\mathbb{R}\times \{0\}$ with $\mathbb{R}.$ This provides a decomposition of $\mathbb{R}:$ \begin{equation*} \mathbb{R}= \bigcup_{a\in \mathbb{Z}}I_{\psi}(x,a). \end{equation*} We set \begin{equation*} \partial_{\psi}(x)=\bigcup_{a\in \mathbb{Z}}\partial I_{\psi}(x,a)\subset \mathbb{R}, \end{equation*} where $\partial I_{\psi}(x,a)$ is the boundary of $I_{\psi}(x,a)$ (e.g. $\partial[0,1]=\{0,1\}$). This construction is equivariant: \begin{equation*} I_{\psi}(T^{n}x, a)= -n+I_{\psi}(x, a+n),\ \partial_{\psi}(T^{n}x)=-n+\partial_{\psi}(x). \end{equation*} Recall that a dynamical system $(\mathcal{X}, T)$ is said to satisfy the marker property if for every $N>0$ there exists an open set $U\subset \mathcal{X}$ satisfying \begin{equation}\label{xiaohong} U\cap T^{-n}U=\emptyset\ (1\leq n\leq N), \ \mathcal{X}=\bigcup_{n\in \mathbb{Z}}T^{-n}U. \end{equation} \begin{lem}\label{tiling}\cite{LT}
Suppose $(\mathcal{X}, T)$ satisfies the marker property. Then for any $\epsilon > 0$ we can find a continuous function $\psi:\ \mathcal{X}\rightarrow [0,1]$ such that (\ref{xiaohua}) is nonempty for every $x\in \mathcal{X}$
and that it satisfies the following two conditions.\\
(1) There exists $M > 0$ such that $I_{\psi}(x,a)\subset (a-M, a+M)$ for all $x\in \mathcal{X}$ and $a\in \mathbb{Z}$. The intervals $I_{\psi}(x,a)$ depend continuously on $x\in \mathcal{X}$, namely if $I_{\psi}(x,a)$
has positive length and if $x_{k}\rightarrow x$ in $\mathcal{X}$ then $I_{\psi}(x_{k},a)$ converges to $I_{\psi}(x,a)$ in the Hausdorff topology.\\
(2) The sets $\partial_{\psi}(x)$ are sufficiently ``sparse'' in the sense that
\begin{equation}
\lim_{R\rightarrow \infty}\frac{\sup_{x\in \mathcal{X}}|\partial_{\psi}(x)\cap [0, R]|}{R}< \epsilon.
\end{equation}
Here $|\partial_{\psi}(x)\cap [0, R]|$ is the cardinality of $\partial_{\psi}(x)\cap [0, R].$ \end{lem} \subsection{Proof of Theorem \ref{le}}
Theorem \ref{aa} follows from the following theorem. For a topological space $\mathcal{X}$ and a Banach space $(V, \|\cdot\|)$ we denote by $C(\mathcal{X}, V)$ the space of continuous maps $f: \mathcal{X} \rightarrow V$ endowed with the norm topology (i.e., the topology given by the metric $\sup_{x\in \mathcal{X}}\|f(x)-g(x)\|$). For convenience, we also give a proof of Theorem \ref{le}. \begin{theorem}\label{le}
Let $(\mathcal{X}, T)$ be a dynamical system with a sub-additive potential $\mathcal{F}=\left\lbrace \varphi_{n}\right\rbrace_{n=1}^{\infty} $, and let $(V, \left\| \cdot\right\| )$ be an infinite-dimensional Banach space. Suppose $(\mathcal{X}, T)$ has the marker property and there exists $K>0$ such that $|\varphi_{n+1}-\varphi_{n}|<K$ for every $n$. Then there is a dense subset of maps $f\in C(\mathcal{X}, V)$ each of which is a topological embedding and satisfies
$${ \overline{{\rm mdim_{M}}}}(\mathcal{X}, T, f^{*}\left\| \cdot\right\|, \mathcal{F} )={\rm mdim}(\mathcal{X}, T, \mathcal{F}).$$
Here $f^{*}\left\| \cdot \right\| $ is the metric $\left\| f(x)-f(y)\right\| ~(x, y\in \mathcal{X})$. \end{theorem} \begin{proof}
First we introduce some notation. For a natural number $N$ we set $[N]=\left\lbrace 0, 1, 2, \cdots, N-1\right\rbrace $. We define a norm on $V^{N}$ (the $N$-th power of $V$) by
$$ \left\| (x_{0}, x_{1}, \cdots, x_{N-1})\right\|_{N}=\max\left\lbrace \left\|x_{0} \right\|, \left\|x_{1} \right\|, \cdots, \left\|x_{N-1} \right\| \right\rbrace. $$ For simplicial complexes $P$ and $Q$ we define their join $P*Q$ as the quotient space of $[0,1]\times P \times Q$ by the equivalence relation
$$ (0, p, q)\sim (0, p, q'),~~(1, p, q)\sim (1,p', q),~~(p, p' \in P, q,q'\in Q).$$
We denote the equivalence class of $(t, p, q)$ by $(1-t)p\oplus tq$. We identify $P$ and $Q$ with $\left\lbrace (0, p, *)| p\in P\right\rbrace $ and $\left\lbrace (1, *, q)| q\in Q \right\rbrace $ in $P*Q$ respectively. For a continuous map $f: \mathcal{X} \rightarrow V$ and $I \subset \mathbb{R}$ we define $\Phi_{f, I}: \mathcal{X} \rightarrow V^{I\cap \mathbb{Z}}$ by
$$\Phi_{f, I}(x)=(f(T^{a}x))_{a\in I\cap \mathbb{Z}}.$$
For a natural number $R$ we set $\Phi_{f, R}:=\Phi_{f , [R]}: \mathcal{X}\rightarrow V^{R}$. We denote by $\Phi_{f, R}^{*}\left\| \cdot\right\|_{R} $ the semi-metric $\left\| \Phi_{f , [R]}(x)-\Phi_{f , [R]}(y) \right\|_{R} $ on $\mathcal{X}$. For a semi-metric $d'$ on $\mathcal{X}$ and $\epsilon>0$ we define
\begin{align*}
\#(\mathcal{X}, d', \varphi, \epsilon)=\inf \{ \sum_{i=1}^{n} (1/\epsilon)^{\sup_{U_{i}}\varphi}\mid ~ \mathcal{X}=& U_{1} \cup \cdots \cup U_{n} ~\text{ is an open cover with } \\ & {\rm diam} ~U_{i} < \epsilon ~\text{for all} ~1\leq i \leq n \}.
\end{align*}
where ${\rm diam}(U_{i}, d') $ is the supremum of $d'(x, y)$ over $x, y \in U_{i}$. We fix a continuous function $\alpha: \mathbb{R} \rightarrow [0,1]$ such that $\alpha(t)=1$ for $t\leq 1/2$ and $\alpha(t)=0$ for $t\geq 3/4.$
We can assume $D={\rm mdim}(\mathcal{X}, T, \mathcal{F})<\infty.$ Fix a metric $d$ on $\mathcal{X}$. Take an arbitrary continuous map $f: \mathcal{X} \rightarrow V$ and $\eta>0$. Our purpose is to construct a topological embedding $f': \mathcal{X} \rightarrow V$ satisfying $\left\| f(x) -f'(x)\right\|< \eta $ and ${\overline{{\rm mdim_{M}}}}(\mathcal{X}, T, (f')^{*}\left\|\cdot \right\|, \mathcal{F} )\leq D.$ We may assume that $f(\mathcal{X})$ is contained in the open unit ball $B_{1}^{\circ}(V)$. We will inductively construct the following data for $n\geq 1$.
\begin{itemize}
\item [(1)] $1/2>\epsilon_{1}>\epsilon_{2}>\cdots>0$ with $\epsilon_{n+1}<\epsilon_{n}/2$ and $\eta /2>\delta_{1}>\delta_{2}>\cdots>0$ with $\delta_{n+1}<\delta_{n}/2.$
\item [(2)] A natural number $N_{n}.$
\item [(3)] A continuous function $\psi_{n}: \mathcal{X} \rightarrow [0,1]$ such that for every $x \in \mathcal{X}$ there exists $a \in \mathbb{Z}$ satisfying $\psi_{n}(T^{a}x)>0$. We apply the dynamical tiling construction of subsection \ref{5.2} to $\psi_{n}$ and get the decomposition $\mathbb{R}=\bigcup\limits_{a\in \mathbb{Z}}I_{\psi_{n}}(x,a)$ for each $x \in \mathcal{X}$.
\item [(4)] $(1/ n)$-embeddings $\pi_{n}: (\mathcal{X}, d_{N_{n}}) \rightarrow P_{n}$ and $\pi'_{n}: (\mathcal{X}, d) \rightarrow Q_{n}$ with simplicial complexes $P_{n}$ and $Q_{n}$.
\item [(5)] For each $\lambda \in [N_{n}]$, a linear map $g_{n, \lambda}: P_{n} \rightarrow B_{1}^{\circ}(V).$
\item [(6)] A linear map $g_{n}^{'}: Q_{n} \rightarrow B_{1}^{\circ}(V).$
\end{itemize}
We assume the following six conditions. \begin{con}\label{con}
\begin{enumerate}
\item [(1)] For each $\lambda\in [N_{n}],$ the map $g_{n, \lambda}* g_{n}^{'}: P_{n}* Q_{n}\rightarrow B_{1}^{\circ}(V)$ is injective. For $\lambda_{1}\neq \lambda_{2}$,
$$g_{n, \lambda_{1}}* g_{n}^{'}(P_{n}*Q_{n})\cap g_{n, \lambda_{2}}* g_{n}^{'}(P_{n}*Q_{n})=g_{n}'(Q_{n}).$$
\item [(2)] Set $g_{n}=(g_{n,0}, g_{n,1}, \cdots, g_{n, N_{n}-1}): P_{n}\rightarrow V^{N_{n}}$. We assume that $\pi_{n}$ is essential and
$$\sum \limits_{\Delta \subset P_{n}}\left( \dfrac{1}{\epsilon}\right) ^{\sup_{\pi_{n}^{-1}(O_{P_{n}}(\Delta))}\varphi_{N_{n}}} \#(g_{n}(\Delta), \|\cdot\|_{N_{n}}, \epsilon)<\left( \dfrac{1}{\epsilon}\right) ^{(D+\frac{3}{n})N_{n}},~~~(0<\epsilon\leq \epsilon_{n}).$$
Here $\Delta$ runs over the simplexes of $P_{n}$. Since $\pi_{n}$ is essential, $\pi_{n}^{-1}(O_{P_{n}}(\Delta))$ is non-empty for every $\Delta \subset P_{n}$.
\item[(3)] For $0<\epsilon\leq \epsilon_{n-1}$ $(n\geq 2)$,
$$\#(\mathcal{X}, (g_{n}\circ \pi_{n})^{*}\|\cdot\|_{N_{n}}, \varphi_{N_{n}}, \epsilon)<2^{N_{n}}\left( \dfrac{1}{\epsilon}\right)^{(D+\frac{4}{n-1})N_{n}}.$$
Here $(g_{n}\circ\pi_{n})^{*}\|\cdot\|_{N_{n}}$ is the semi-metric $\| g_{n}(\pi_{n}(x))-g_{n}(\pi_{n}(y))\|$ on $\mathcal{X}$.
\item[(4)] There exists $M_{n}>0$ such that $I_{\psi_{n}}(x, a)\subset (a-M_{n}, a+M_{n})$ for all $x\in \mathcal{X}$ and $a\in \mathbb{Z}$. We take $C_{n}\geq 1$ satisfying
\begin{align}\label{cos}
\#\left( \bigcup\limits_{\lambda\in [N_{n}]} g_{n,\lambda}*g_{n}^{'}(P_{n}*Q_{n}),\|\cdot\|,\epsilon\right) <\left( \dfrac{1}{\epsilon}\right)^{C_{n}}~~(0<\epsilon\leq \dfrac{1}{2}).
\end{align}
Then we assume
$$\lim\limits_{R\rightarrow\infty}\dfrac{\sup_{x\in \mathcal{X}}|\partial_{\psi_{n}}(x)\cap [0,R]|}{R}<\dfrac{1}{2nN_{n}(C_{n}+\| \varphi_{1}\|_{\infty}+K)},$$
where $\| \varphi_{1}\|_{\infty}=\max_{x\in\mathcal{X}}| \varphi_{1}(x)|.$
\item[(5)] We define a continuous map $f_{n}: \mathcal{X}\rightarrow B_{1}^{\circ}(V)$ as follows. Let $x\in \mathcal{X}$. Take $a\in \mathbb{Z}$ with $0\in I_{\psi_{n}}(x, a)$, and take $b\in \mathbb{Z}$ satisfying $b\equiv a~({\rm mod}~ N_{n})$ and $0\in [b, b+N_{n})$. We set
\begin{align}\label{5.6}
f_{n}(x)=\left\lbrace 1-\alpha({\rm dist}(0, \partial_{\psi_{n}}(x)))\right\rbrace g_{n, -b}(\pi_{n}(T^{b}x))+\alpha({\rm dist}(0, \partial_{\psi_{n}}(x))) g_{n}'(\pi_{n}'(x)),
\end{align}
where ${\rm dist}(0, \partial_{\psi_{n}}(x))=\min_{t\in \partial_{\psi_{n}}(x)}|t|.$ Then we assume that if a continuous map $f': \mathcal{X}\rightarrow V$ satisfies $\|f_{n}(x)- f'(x)\|<\delta_{n}$ for all $x\in \mathcal{X}$ then it is a $(1/n)$-embedding with respect to $d$.
\end{enumerate} \end{con} Suppose that we have constructed the above data. We define a continuous map $f':\mathcal{X} \rightarrow V$ by $f'(x)=\lim\limits_{n\rightarrow \infty}f_{n}(x)$. It satisfies
$\|f'(x)-f(x)\|<\eta$ and $\|f'(x)-f_{n}(x)\|<\min(\epsilon_{n}/4, \delta_{n})$ for all $n\geq 1$. Then the condition $(5)$ implies that $f'$ is a $(1/n)$-embedding with respect to $d$ for all $n\geq 1$, which means that $f'$ is a topological embedding. We estimate \begin{align*}
\overline{{\rm mdim}}_{M}(\mathcal{X}, T, (f')^{*}\|\cdot\|, \mathcal{F})=\limsup\limits_{ \epsilon \rightarrow 0}\left\lbrace \left(\limsup\limits_{R\rightarrow \infty}\dfrac{\log \# (\mathcal{X}, \Phi_{f',R}^{*}\|\cdot\|_{R}, \varphi_{R}, \epsilon)}{R} \right)/\log (1/\epsilon) \right\rbrace.
Let $0<\epsilon<\epsilon_{1}$. Take $n\geq 2$ with $\epsilon_{n}<\epsilon<\epsilon_{n-1}$. From $\|f'(x)-f_{n}(x)\|<\epsilon_{n}/4$,
$$\#(\mathcal{X}, \Phi_{f',R}^{*}\|\cdot\|_{R}, \varphi_{R}, \epsilon)\leq \#(\mathcal{X}, \Phi_{f_{n},R}^{*}\|\cdot\|_{R}, \varphi_{R}, \epsilon-\dfrac{\epsilon_{n}}{2})\leq \#(\mathcal{X}, \Phi_{f_{n},R}^{*}\|\cdot\|_{R}, \varphi_{R}, \dfrac{\epsilon}{2}).$$ From Claim \ref{num1} below,
$$ \lim\limits_{R\rightarrow \infty} \dfrac{\log \# (\mathcal{X}, \Phi_{f',R}^{*}\|\cdot\|_{R}, \varphi_{R}, \epsilon)}{R} \leq 2+ (D+\dfrac{4}{n-1}+\dfrac{1}{n})\log \left( \dfrac{2}{\epsilon}\right). $$
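For completeness, the arithmetic behind this bound can be spelled out (a sketch; logarithms are taken to base $2$, which matches the additive constant $2=\log 4$). Applying Claim \ref{num1} below at $\epsilon/2$ gives

```latex
\begin{align*}
\frac{\log \# (\mathcal{X}, \Phi_{f',R}^{*}\|\cdot\|_{R}, \varphi_{R}, \epsilon)}{R}
&\leq \frac{\log \# (\mathcal{X}, \Phi_{f_{n},R}^{*}\|\cdot\|_{R}, \varphi_{R}, \epsilon/2)}{R}\\
&\leq \frac{1}{R}\log\left( 4^{R}\left( \frac{2}{\epsilon}\right)^{(D+\frac{4}{n-1})R+\frac{R}{n}}\right)\\
&= 2+\left( D+\frac{4}{n-1}+\frac{1}{n}\right)\log \frac{2}{\epsilon}.
\end{align*}
```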
Since $n\rightarrow \infty$ as $\epsilon\rightarrow 0$, this proves ${\rm mdim}_{M}(\mathcal{X}, T, (f')^{*}\|\cdot\|, \mathcal{F})\leq D$. \begin{claim}\label{num1}
Let $0<\epsilon<\epsilon_{n-1} ~(n\geq 2)$. If $R$ is a sufficiently large natural number then
$$\#(\mathcal{X}, \Phi_{f_{n},R}^{*}\|\cdot\|_{R}, \varphi_{R},\epsilon)\leq 4^{R}\left( \dfrac{1}{\epsilon}\right)^{(D+\frac{4}{n-1})R+\frac{R}{n}} $$ \end{claim} \begin{proof}
Let $x\in \mathcal{X}$. A discrete interval $J=[b, b+N_{n})\cap \mathbb{Z}$ of length $N_{n}$ $(b \in \mathbb{Z})$ is said to be {\bf good for} $x$ if there exists $a\in\mathbb{Z}$ such that $b\equiv a \pmod{N_{n}}$ and $[b-1, b+N_{n}]\subset I_{\Psi_{n}}(x, a)$. If $J$ is good for $x$ then
$$\Phi_{f_{n},J}(x)=g_{n}(\pi_{n}(T^{b}x))\in g_{n}(P_{n}).$$
We denote by $\mathcal{J}_{x}$ the union of the intervals $J\subset [R]$ which are good for $x$. For a subset $\mathcal{J} \subset [R]$ we define $\mathcal{X}_{\mathcal{J}}$ as the set of $x\in \mathcal{X}$ satisfying $\mathcal{J}_{x}=\mathcal{J}.$ The set $\mathcal{X}_{\mathcal{J}}$ may be empty. If it is non-empty, then from Condition \ref{con} (3)
\begin{align}\label{num}
\#(\mathcal{X}_{\mathcal{J}}, \Phi_{f_{n},R}^{*}\|\cdot\|_{R}, \varphi_{R}, \epsilon )\leq \left\lbrace 2^{N_{n}}\left( \dfrac{1}{\epsilon}\right) ^{(D+\frac{4}{n-1})N_{n}}\right\rbrace^{|\mathcal{J}|/N_{n}} \cdot \left( \dfrac{1}{\epsilon}\right)^{(C_{n}+ \| \varphi_{1}\|_{\infty}+K)|[R]\setminus \mathcal{J}|}
\end{align}
Here $C_{n}$ is the positive constant introduced in (\ref{cos}). We have $|\mathcal{J}|\leq R$ and $$|[R]\setminus \mathcal{J}|\leq 2N_{n}\sup\limits_{x\in \mathcal{X}}|\partial_{\Psi_{n}}(x)\cap [0,R]|+2N_{n}.$$
The second term ``$+2N_{n}$'' on the right-hand side is the edge effect. From Condition \ref{con} (4), for sufficiently large $R$
$$(C_{n}+\| \varphi_{1}\|_{\infty}+K)|[R]\setminus \mathcal{J}|<\dfrac{R}{n}.$$
Then the quantity (\ref {num}) is bounded by
$$ 2^{R}\left( \dfrac{1}{\epsilon}\right)^{(D+\frac{4}{n-1})R+\frac{R}{n}}.$$
The number of the choices of $\mathcal{J} \subset [R]$ is bounded by $2^{R}$. Thus
$$\#(\mathcal{X}, \Phi_{f_{n},R}^{*}\|\cdot\|_{R}, \varphi_{R}, \epsilon ) \leq 4^{R}\left( \dfrac{1}{\epsilon}\right)^{(D+\frac{4}{n-1})R+\frac{R}{n}}.$$ \end{proof} {\bf Induction: Step 1.} Now we start to construct the data. First we construct them for $n=1$. By the continuity of $f$ and ${\rm mdim}(\mathcal{X}, T, \mathcal{F})=D$, we can take small enough $0<\tau_{1}<1$ and find $N_{1}>0$, a simplicial complex $P_{1}$ and a $\tau_{1}$-embedding $\pi_{1}: (\mathcal{X}, d_{N_{1}})\rightarrow P_{1}$ such that \begin{itemize}
\item
$ d(x, y)<\tau_{1}\Rightarrow ~\|f(x)-f(y)\|<\dfrac{\eta}{2}.$
\item ${\rm dim}_{\pi_{1}(x)}P_{1}+ \varphi_{N_{1}}(x)<N_{1}(D+1)$ for all $x\in \mathcal{X}$.
\item $\dfrac{{\rm var}_{\tau_{1}}( \varphi_{N_{1}}, d_{N_{1}})}{N_{1}}< 1,$
where ${\rm var}_{\epsilon}(\varphi, d)=\sup \left\lbrace |\varphi(x)-\varphi(y)|, ~d(x, y)<\epsilon \right\rbrace $. \end{itemize}
We also take a simplicial complex $Q_{1}$ and a $\tau_{1}$-embedding $\pi_{1}^{'}: (\mathcal{X},d) \rightarrow Q_{1}$. By subdividing $P_{1}$ and $Q_{1}$ if necessary, we can assume that all simplexes $\Delta \subset P_{1}$ and all $\omega \in {\rm Ver}(Q_{1})$ satisfy $${\rm diam}(\pi_{1}^{-1}(O_{P_{1}}(\Delta)), d_{N_{1}})<\tau_{1},~~{\rm diam}((\pi_{1}^{'})^{-1}(O_{Q_{1}}(\omega)), d)<\tau_{1}.$$ Moreover by Lemma \ref{lem4} we can assume that $\pi_{1}$ is essential. By Lemma \ref{key} (3) there exist linear maps $g_{1, \lambda}: P_{1} \rightarrow B_{1}^{\circ}(V)$ $(\lambda \in [N_{1}])$ and $g_{1}^{'}: Q_{1} \rightarrow B_{1}^{\circ}(V)$ satisfying \begin{align}\label{5.7}
\|f(T^{\lambda}x)-g_{1, \lambda}(\pi_{1}(x))\|<\dfrac{\eta}{2},~~\| f(x)-g_{1}^{'}(\pi_{1}^{'}(x))\|<\dfrac{\eta}{2}. \end{align} We slightly perturb $g_{1, \lambda}$ and $g_{1}^{'}$ (if necessary) by Lemma \ref{key} (2) so that they satisfy Condition \ref{con} (1). By Lemma \ref{key} (1), we can choose $0<\epsilon_{1}<1/2$ such that for any $0<\epsilon\leq \epsilon_{1}$ and simplex $\Delta \subset P_{1}$
$$\#(g_{1}(\Delta), \|\cdot\|_{N_{1}}, \epsilon)<\dfrac{1}{(\text{Number of simplexes of}~ P_{1})}\left( \dfrac{1}{\epsilon} \right)^{{\rm dim}\Delta+1}. $$ Let $\Delta \subset P_{1}$ be a simplex. Since $\pi_{1}$ is essential, we can find a point $x\in \pi_{1}^{-1}(O_{P_{1}}(\Delta))$ with ${\rm dim}(\Delta)\leq {\rm dim}_{\pi_{1}(x)}P_{1}.$ From the choice of $\tau_{1}$ $$ \sup\limits_{\pi_{1}^{-1}(O_{P_{1}}(\Delta))} \varphi_{N_{1}}\leq \varphi_{N_{1}}(x)+ N_{1}.$$ Hence for $0<\epsilon\leq \epsilon_{1}$ \begin{align*}
&\left( \dfrac{1}{\epsilon} \right)^{\sup_{\pi_{1}^{-1}(O_{P_{1}}(\Delta))} \varphi_{N_{1}}} \#(g_{1}(\Delta), \|\cdot\|_{N_{1}}, \epsilon)\\& <\dfrac{1}{(\text{Number of simplexes of }P_{1}) }\left( \dfrac{1}{\epsilon}\right)^{{\rm dim}(\Delta)+ \varphi_{N_{1}}(x)+ N_{1}+1}\\&\leq \dfrac{1}{(\text{Number of simplexes of } P_{1})}\left( \dfrac{1}{\epsilon}\right)^{{\rm dim}_{\pi_{1}(x)}P_{1}+ \varphi_{N_{1}}(x)+ N_{1}+1} \end{align*} From ${\rm dim}_{\pi_{1}(x)}P_{1}+ \varphi_{N_{1}}(x)<N_{1}(D+1)$, this is bounded by \begin{align*} &\dfrac{1}{(\text{Number of simplexes of } P_{1})}\left( \dfrac{1}{\epsilon}\right)^{N_{1}(D+1)+ N_{1}+1}\\&\leq \dfrac{1}{(\text{Number of simplexes of } P_{1})}\left( \dfrac{1}{\epsilon}\right)^{N_{1}(D+3)} \end{align*} This shows Condition \ref{con} (2):
$$ \sum\limits_{\Delta \subset P_{1}} \left( \dfrac{1}{\epsilon}\right)^{\sup_{\pi_{1}^{-1}(O_{P_{1}}(\Delta))} \varphi_{N_{1}}}\#(g_{1}(\Delta), \|\cdot\|_{N_{1}}, \epsilon)<\left( \dfrac{1}{\epsilon} \right)^{N_{1}(D+3)}. $$
Condition \ref{con} (3) is empty for $n=1$. By Lemma \ref{tiling} we can choose a continuous function $\Psi_{1}: \mathcal{X} \rightarrow [0,1]$ satisfying Condition \ref{con} (4). The continuous map $f_{1}: \mathcal{X} \rightarrow V$ defined in (\ref{5.6}) is a 1-embedding. Since "1-embedding" is an open condition, we can choose $0<\delta_{1}<\eta/2$ such that any continuous map $f': \mathcal{X} \rightarrow V$ with $\|f'(x)- f_{1}(x)\|<\delta_{1}$ is also a 1-embedding. This establishes Condition \ref{con} (5). From (\ref{5.7}) we get Condition \ref{con} (6): $$\|f(x)-f_{1}(x)\|<\eta/2.$$ We have completed the construction of the data for $n=1$.
{\bf Induction: Step $n \rightarrow$ Step $n+1$.}
Suppose we have constructed the data for $n$. We will construct the data for $n+1$. We subdivide the join $P_{n}*Q_{n}$ sufficiently finely (denoting the subdivision by $\overline{P_{n}*Q_{n}}$) such that for all simplexes $\Delta \subset \overline{P_{n}*Q_{n}}$ and all $\lambda\in [N_{n}]$ \begin{align}\label{5.9}
{\rm diam}(g_{n, \lambda}*g_{n}^{'}(\Delta), \|\cdot\|)<\min\left( \dfrac{\epsilon_{n}}{8}, \dfrac{\delta_{n}}{2}\right). \end{align} We define a continuous map $q_{n}:\mathcal{X} \rightarrow \overline{P_{n}*Q_{n}}$ as follows. Let $x\in \mathcal{X}$. Take $a, b \in \mathbb{Z}$ such that $0\in I_{\Psi_{n}}(x, a)$, $a\equiv b \pmod{N_{n}}$ and $0\in b+[N_{n}]$. Then we set $$q_{n}(x)=\left\lbrace 1-\alpha({\rm dist}(0, \partial_{\psi_{n}}(x)))\right\rbrace\pi_{n}(T^{b}x)\oplus\alpha({\rm dist}(0, \partial_{\psi_{n}}(x))) \pi_{n}^{'}(x). $$ We have \begin{align}\label{5.10} f_{n}(x)=g_{n, -b}*g_{n}^{'}(q_{n}(x)). \end{align}
Take $0<\tau_{n+1}<1/(n+1)$ satisfying the following four conditions. \begin{itemize}
\item [(i)] If $d(x,y)<\tau_{n+1}$ then $\| f_{n}(x)-f_{n}(y)\|<\min(\epsilon_{n}/8, \delta_{n}/2)$.
\item [(ii)] If $d(x,y)<\tau_{n+1}$ then the decompositions of the dynamical tilings of $x$ and $y$ are close in the sense that $|{\rm dist}(0, \partial_{\psi_{n}}(x))-{\rm dist}(0, \partial_{\psi_{n}}(y))|<\dfrac{1}{4}$.
\item [(iii)] If $d(x,y)<\tau_{n+1}$ and $(-1/4, 1/4)\subset I_{\Psi_{n}}(x, a)$ then $0$ is an interior point of $I_{\Psi_{n}}(y, a)$.
\item[(iv)] Consider the open cover $\left\lbrace q_{n}^{-1} (\overline{P_{n}*Q_{n}}(v))\right\rbrace_{v\in {\rm Ver}(\overline{P_{n}*Q_{n}})} $ of $\mathcal{X}$. The number $\tau_{n+1}$ is smaller than its Lebesgue number:
$$ \tau_{n+1}<LN\left( \mathcal{X}, d, \left\lbrace q_{n}^{-1} (\overline{P_{n}*Q_{n}}(v))\right\rbrace_{v\in {\rm Ver}(\overline{P_{n}*Q_{n}})}\right). $$ \end{itemize} Take a $\tau_{n+1}$-embedding $\pi_{n+1}^{'}: (\mathcal{X}, d) \rightarrow Q_{n+1}$ with a simplicial complex $Q_{n+1}$. By subdividing it, we can assume that ${\rm diam}((\pi_{n+1}^{'})^{-1}(O_{Q_{n+1}}(\omega)), d)<\tau_{n+1}$ for all $\omega \in {\rm Ver}(Q_{n+1})$. By Lemma \ref{key} (3) there exists a linear map $\tilde{g}_{n+1}^{'}: Q_{n+1} \rightarrow B_{1}^{\circ}(V)$ satisfying \begin{align}\label{per2}
\| f_{n}(x)-\tilde{g}_{n+1}^{'}(\pi_{n+1}^{'}(x))\|<\min\left( \dfrac{\epsilon_{n}}{8}, \dfrac{\delta_{n}}{2}\right). \end{align} Take $N_{n+1}\geq N_{n}$ satisfying the following three conditions. \begin{itemize}
\item [(a)]
There exists a $\tau_{n+1}$-embedding $\pi_{n+1}: (\mathcal{X}, d_{N_{n+1}}) \rightarrow P_{n+1}$ with a simplicial complex $P_{n+1}$ such that for all $x\in \mathcal{X}$
\begin{align}\label{cc}
\dfrac{{\rm dim}_{\pi_{n+1}(x)}P_{n+1}+\varphi_{N_{n+1}}(x)}{N_{n+1}}< D+\dfrac{1}{n+1}.
\end{align}
\item [(b)]
$$\dfrac{1+\sup\limits_{x\in \mathcal{X}}|\partial_{\psi_{n}}(x)\cap [0, N_{n+1}]|}{N_{n+1}}<\dfrac{1}{2nN_{n}(C_{n}+\| \varphi_{1}\|_{\infty}+K)},$$
where $C_{n}$ is the positive constant introduced in (\ref{cos}).
\item[(c)] $$\dfrac{{\rm var}_{\tau_{n+1}}( \varphi_{N_{n+1}}, d_{N_{n+1}})}{N_{n+1}}< \dfrac{1}{n+1}.$$ \end{itemize}
By subdividing $P_{n+1}$ if necessary, we can assume that for any simplexes $\Delta, \Delta'\subset P_{n+1}$ with $\Delta \cap \Delta' \neq \emptyset$ \begin{align}\label{pi} {\rm diam}(\pi_{n+1}^{-1}(O_{P_{n+1}}(\Delta))\cup\pi_{n+1}^{-1}(O_{P_{n+1}}(\Delta')), d_{N_{n+1}})<\tau_{n+1}. \end{align} Moreover by Lemma \ref{lem4} we can assume that $\pi_{n+1}$ is essential.
By the choice of $\tau_{n+1}$, we can apply Lemma \ref{lem5.3} to $\pi_{n+1}: \mathcal{X} \rightarrow P_{n+1}$ and $q_{n}\circ T^{\lambda}: \mathcal{X} \rightarrow \overline{P_{n}*Q_{n}}$ $(\lambda \in [N_{n+1}])$. Then we get simplicial maps $h_{\lambda}: P_{n+1} \rightarrow \overline{P_{n}*Q_{n}}$ $(\lambda \in [N_{n+1}])$ satisfying the following three conditions: \begin{itemize}
\item [(A)] For every $\lambda \in [N_{n+1}]$ and $x\in \mathcal{X}$, the two points $h_{\lambda}(\pi_{n+1}(x)) $ and $q_{n}(T^{\lambda}x)$ belong to the same simplex of $\overline{P_{n}*Q_{n}}.$
\item [(B)] Let $\lambda \in [N_{n+1}]$ and $\Delta \subset P_{n+1}$ be a simplex. If $\pi_{n+1}^{-1}(O_{P_{n+1}}(\Delta))\subset T^{-\lambda}q_{n}^{-1}(\overline{P_{n}}),$
then $h_{\lambda}(\Delta)\subset \overline{P_{n}}.$ Similarly, if $\pi_{n+1}^{-1}(O_{P_{n+1}}(\Delta))\subset T^{-\lambda}q_{n}^{-1}(\overline{Q_{n}})$, then $h_{\lambda}(\Delta)\subset \overline{Q_{n}}.$
\item [(C)] Let $\lambda, \lambda' \in [N_{n+1}]$ and $\Delta \subset P_{n+1}$ be a simplex. If $q_{n}\circ T^{\lambda}=q_{n}\circ T^{\lambda'}$ on $\pi_{n+1}^{-1}(O_{P_{n+1}}(\Delta))$ then $h_{\lambda}=h_{\lambda'}$ on $\Delta$. \end{itemize} Define a linear map $\tilde{g}_{n+1,\lambda}: P_{n+1} \rightarrow B_{1}^{\circ}(V)$ for each $\lambda \in [N_{n+1}]$ as follows. For each simplex $\Delta \subset P_{n+1}$, since $\pi_{n+1}$ is essential, we can find a point $x\in \pi_{n+1}^{-1}(O_{P_{n+1}}(\Delta))$. Take $a, b \in \mathbb{Z}$ such that $\lambda\in I_{\Psi_{n}}(x, a)$, $a\equiv b \pmod{N_{n}}$ and $\lambda\in b+[N_{n}]$.
Set $$\tilde{g}_{n+1,\lambda}(u)=g_{n, \lambda-b}*g_{n}^{'}(h_{\lambda}(u)) ~~(u \in \Delta).$$
From (\ref{5.9}) and (\ref{5.10}), \begin{align}\label{per1}
\|\tilde{g}_{n+1,\lambda}(\pi_{n+1}(x))-f_{n}(T^{\lambda}x)\| <\min\left( \dfrac{\epsilon_{n}}{8}, \dfrac{\delta_{n}}{2}\right). \end{align} \begin{claim}
The above construction of $\tilde{g}_{n+1,\lambda}$ is independent of the various choices. \end{claim} \begin{proof}
see \cite{LT}. \end{proof} \begin{claim}\label{num2}
Set $\tilde{g}_{n+1}=(\tilde{g}_{n+1,0},\cdots, \tilde{g}_{n+1, N_{n+1}-1}): P_{n+1} \rightarrow V^{N_{n+1}}.$ For $0< \epsilon\leq \epsilon_{n}$
$$\#(\mathcal{X}, (\tilde{g}_{n+1}\circ \pi_{n+1})^{*}\|\cdot\|_{N_{n+1}}, \varphi_{N_{n+1}}, \epsilon)< 2^{N_{n+1}}\left( \dfrac{1}{\epsilon} \right)^{(D+\frac{4}{n})N_{n+1}}. $$ \end{claim} \begin{proof}
This is close to the proof of Claim \ref{num1}, but it is a bit more involved. Let $x\in \mathcal{X}$. We say that a discrete interval $J=[b, b+N_{n})\cap \mathbb{Z}$ of length $N_{n}$ $(b \in \mathbb{Z})$ is good for $x$ if $J\subset [N_{n+1}]$ and there exists $a\in \mathbb{Z}$ satisfying $b\equiv a \pmod{N_{n}}$ and $[b-1, b+N_{n}]\subset I_{\psi_{n}}(x,a)$.
Suppose $J=[b, b+N_{n}) \cap \mathbb{Z}$ is good for $x\in \mathcal{X}$. Take a simplex $\Delta\subset P_{n+1}$ containing $\pi_{n+1}(x).$ Let $y\in \pi_{n+1}^{-1}(O_{P_{n+1}}(\Delta))$ be an arbitrary point. From $(\ref{pi})$ we have $d_{N_{n+1}}(x, y)<\tau_{n+1}$. From the condition (iii) of the choice of $\tau_{n+1}$,
$$[b-\dfrac{3}{4}, b+N_{n}-\dfrac{1}{4}]\subset I_{\psi_{n}}(y ,a).$$
Then for all $\lambda \in J$
$$q_{n}(T^{\lambda}y)=q_{n}(T^{b}y)=\pi_{n}(T^{b}y)\in \overline{P_{n}}.$$
From the condition $(B)$ and $(C)$ of the choice of $h_{\lambda}$,
$$h_{b}(\Delta)\subset \overline{P_{n}},~~h_{\lambda}=h_{b} ~\text{on}~\Delta ~\text{for }~\lambda\in J.$$
Then $$(\tilde{g}_{n+1, \lambda}(\pi_{n+1}(x)))_{\lambda\in J}=g_{n}(h_{b}(\pi_{n+1}(x))).$$
Moreover it follows from the condition $(A)$ of the choice of $h_{\lambda}$ that $h_{b}(\pi_{n+1}(x))$ and $q_{n}(T^{b}x)=\pi_{n}(T^{b}x)$ belong to the same simplex of $\overline{P_{n}}$.
For $x\in \mathcal{X}$ we denote by $\mathcal{J}_{x}$ the union of the intervals $J\subset [N_{n+1}]$ good for $x$. For a subset $\mathcal{J}\subset [N_{n+1}]$ we define $\mathcal{X}_{\mathcal{J}}$ as the set of $x\in \mathcal{X}$ with $\mathcal{J}_{x}=\mathcal{J}$. The set $\mathcal{X}_{\mathcal{J}}$ may be empty. If it is non-empty, then from Condition \ref{con} (2)
\begin{align}\label{11}
&\#(\mathcal{X}_{\mathcal{J}}, (\tilde{g}_{n+1}\circ \pi_{n+1})^{*}\|\cdot\|_{N_{n+1}}, \varphi_{N_{n+1}}, \epsilon)\\& \nonumber <\left\lbrace \left( \dfrac{1}{\epsilon}\right) ^{(D+\frac{3}{n})N_{n}} \right\rbrace^{|\mathcal{J}|/N_{n}}\cdot \left\lbrace \left( \dfrac{1}{\epsilon}\right) ^{C_{n}+\|\varphi_{1}\|_{\infty}+K}\right\rbrace^{|[N_{n+1}]\setminus \mathcal{J}|}.
\end{align}
We have $|\mathcal{J}|\leq N_{n+1}$ and
\begin{align*}
|[N_{n+1}]\setminus \mathcal{J}|&\leq 2N_{n}|\partial_{\psi_{n}}(x)\cap[0, N_{n+1}]|+2N_{n}\\&<\dfrac{N_{n+1}}{n(C_{n}+\|\varphi_{1}\|_{\infty}+K)} ~\text{by the condition (b) of the choice of } N_{n+1}.
\end{align*}
Then the above $(\ref{11})$ is bounded by
$$\left( \dfrac{1}{\epsilon}\right)^{(D+\frac{3}{n})N_{n+1}+\frac{N_{n+1}}{n}}= \left( \dfrac{1}{\epsilon}\right)^{(D+\frac{4}{n})N_{n+1}}.$$
The number of the choices of $\mathcal{J}\subset [N_{n+1}]$ is bounded by $2^{N_{n+1}}$. Thus
$$\#(\mathcal{X},(\tilde{g}_{n+1}\circ \pi_{n+1})^{*}\|\cdot\|_{N_{n+1}}, \varphi_{N_{n+1}}, \epsilon)<2^{N_{n+1}}\left( \dfrac{1}{\epsilon}\right)^{(D+\frac{4}{n})N_{n+1}}.$$ \end{proof} From Lemma \ref{key} (1), we can take $0<\epsilon_{n+1}<\epsilon_{n}/2$ such that for any $0<\epsilon\leq \epsilon_{n+1}$ and any linear map $g: P_{n+1} \rightarrow V^{N_{n+1}}$ with $g(P_{n+1}) \subset B_{1}^{\circ}(V)^{N_{n+1}}$ \begin{align*}
\#(g(\Delta), \|\cdot\|_{N_{n+1}}, \epsilon)<\dfrac{1}{\left( \text{Number of simplexes of } P_{n+1}\right) } \left( \dfrac{1}{\epsilon}\right)^{{\rm dim}(\Delta)+\frac{1}{n+1}} \end{align*} for all simplexes $\Delta \subset P_{n+1}$.
Let $g: P_{n+1}\rightarrow B_{1}^{\circ}(V)^{N_{n+1}}$ be a linear map and let $\Delta \subset P_{n+1}$ be a simplex. Since $\pi_{n+1}$ is essential, we can find a point $x\in \pi_{n+1}^{-1}(O_{P_{n+1}}(\Delta))$ with ${\rm dim}_{\pi_{n+1}(x)}P_{n+1}\geq {\rm dim}(\Delta).$ From (\ref{pi}) and the condition (c) of the choice of $N_{n+1}$ $$\sup\limits_{\pi_{n+1}^{-1}(O_{P_{n+1}}(\Delta))} \varphi_{N_{n+1}} \leq \varphi_{N_{n+1}}(x)+\dfrac{N_{n+1}}{n+1}.$$ Then for $0<\epsilon\leq \epsilon_{n+1}$ \begin{align*}
&\left( \dfrac{1}{\epsilon}\right)^{\sup\limits_{\pi_{n+1}^{-1}(O_{P_{n+1}}(\Delta))}\varphi_{N_{n+1}}}\#(g(\Delta), \|\cdot\|_{N_{n+1}}, \epsilon)\\&<\dfrac{1}{\left( \text{Number of simplexes of } P_{n+1}\right) } \left( \dfrac{1}{\epsilon}\right)^{\varphi_{N_{n+1}}(x)+{\rm dim}(\Delta)+\frac{N_{n+1}}{n+1}+\frac{1}{n+1}}\\&\leq \dfrac{1}{\left( \text{Number of simplexes of } P_{n+1}\right) } \left( \dfrac{1}{\epsilon}\right)^{\varphi_{N_{n+1}}(x)+{\rm dim}_{\pi_{n+1}(x)}P_{n+1}+\frac{N_{n+1}+1}{n+1}}\\&\leq\dfrac{1}{\left( \text{Number of simplexes of } P_{n+1}\right) } \left( \dfrac{1}{\epsilon}\right)^{(D+\frac{1}{n+1})N_{n+1}+\frac{N_{n+1}+1}{n+1}}~~~\text{by} ~(\ref{cc})\\&\leq \dfrac{1}{\left( \text{Number of simplexes of } P_{n+1}\right)}\left(\dfrac{1}{\epsilon} \right)^{(D+\frac{3}{n+1})N_{n+1}}. \end{align*} Hence for any $0<\epsilon<\epsilon_{n+1}$ and any linear map $g: P_{n+1}\rightarrow B_{1}^{\circ}(V)^{N_{n+1}}$ \begin{align}\label{w}
\sum\limits_{\Delta \subset P_{n+1}}\left( \dfrac{1}{\epsilon}\right)^{\sup\limits_{\pi_{n+1}^{-1}(O_{P_{n+1}}(\Delta))}\varphi_{N_{n+1}} }\#(g(\Delta), \|\cdot\|_{N_{n+1}}, \epsilon)<\left(\dfrac{1}{\epsilon} \right)^{(D+\frac{3}{n+1})N_{n+1}}. \end{align} We define $g'_{n+1}: Q_{n+1} \rightarrow B_{1}^{\circ}(V)$ and $g_{n+1, \lambda}: P_{n+1} \rightarrow B_{1}^{\circ}(V)~(\lambda \in [N_{n+1}])$ as small perturbations of $\tilde{g}'_{n+1}$ and $\tilde{g}_{n+1, \lambda}$ respectively. By Lemma \ref{key} (2), we can assume that they satisfy Condition \ref{con} (1). From $(\ref{per2})$ and $(\ref{per1})$ we can assume that the perturbations are so small that they satisfy \begin{align}\label{f}
&\|g_{n+1}'(\pi_{n+1}^{'}(x))- f_{n}(x)\|<\min (\dfrac{\epsilon_{n}}{8}, \dfrac{\delta_{n}}{2}), \\&
\|g_{n+1, \lambda}(\pi_{n+1}(x))-f_{n}(T^{\lambda}x)\|<\min(\dfrac{\epsilon_{n}}{8}, \dfrac{\delta_{n}}{2}). \end{align} Moreover, from Claim \ref{num2}, we can assume that $g_{n+1}:=(g_{n+1, 0}, \cdots, g_{n+1, N_{n+1}-1})$ satisfies
$$\#(\mathcal{X}, ({g}_{n+1}\circ \pi_{n+1})^{*}\|\cdot\|_{N_{n+1}}, \varphi_{N_{n+1}}, \epsilon)< 2^{N_{n+1}}\left( \dfrac{1}{\epsilon} \right)^{(D+\frac{4}{n})N_{n+1}}$$ for all $\epsilon_{n+1}\leq \epsilon \leq \epsilon_{n}.$ On the other hand, from (\ref{w}), for $0<\epsilon\leq\epsilon_{n+1}$
$$\sum\limits_{\Delta \subset P_{n+1}}\left( \dfrac{1}{\epsilon}\right)^{\sup\limits_{\pi_{n+1}^{-1}(O_{P_{n+1}}(\Delta))}\varphi_{N_{n+1}} }\#(g_{n+1}(\Delta), \|\cdot\|_{N_{n+1}}, \epsilon)<\left(\dfrac{1}{\epsilon} \right)^{(D+\frac{3}{n+1})N_{n+1}}.$$ Thus we have established Condition \ref{con} (2) and (3) for the $(n+1)$-th step. From Lemma \ref{tiling}, we can take a continuous function $\psi_{n+1}: \mathcal{X}\rightarrow [0,1]$ satisfying Condition \ref{con} (4). The map $f_{n+1}$ defined by $(\ref{5.6})$ is a $(1/(n+1))$-embedding with respect to $d$ by Condition \ref{con} (1). Since being a $(1/(n+1))$-embedding is an open condition, we can take $\delta_{n+1}>0$ satisfying Condition \ref{con} (5). From $(\ref{f})$, \begin{align}
\|f_{n+1}(x)- f_{n}(x)\|<\min\left( \dfrac{\epsilon_{n}}{8}, \dfrac{\delta_{n}}{2} \right). \end{align} This shows Condition \ref{con} (6). We have finished the construction of all data for the $(n+1)$-th step. \end{proof}
{\bf Acknowledgements.} The first and second author were supported by NNSF of China (11671208 and 11431012). We would like to express our gratitude to Tianyuan Mathematical Center in Southwest China, Sichuan University and Southwest Jiaotong University for their support and hospitality.
\end{document}
In algebraic topology we often encounter chain complexes with extra multiplicative structure. For example, the cochain complex of a topological space has what is called the $E_\infty$-algebra structure which comes from the cup product.
In this talk I present an idea for studying such chain complexes, $E_\infty$ differential graded algebras ($E_\infty$ DGAs), using stable homotopy theory. Namely, I discuss new equivalences between $E_\infty$ DGAS that are defined using commutative ring spectra.
I call these $E_\infty$ topological equivalences: two $E_\infty$ DGAs are $E_\infty$ topologically equivalent when the corresponding commutative ring spectra are equivalent. Quasi-isomorphic $E_\infty$ DGAs are $E_\infty$ topologically equivalent. However, the examples I am going to present show that the converse is not true; there are $E_\infty$ DGAs that are $E_\infty$ topologically equivalent but not quasi-isomorphic. This says that between $E_\infty$ DGAs, we have more equivalences than just the quasi-isomorphisms.
I also discuss the interaction of $E_\infty$ topological equivalences with the Dyer-Lashof operations and cases where $E_\infty$ topological equivalences and quasi-isomorphisms agree.
Sorption of Th(IV) on MX-80 bentonite: effect of pH and modeling
Authors: Songsheng Lu, Zhiqiang Guo, Caicai Zhang, and Shouwei Zhang
MX-80 bentonite was characterized by XRD and FTIR in detail. The sorption of Th(IV) on MX-80 bentonite was studied as a function of pH and ionic strength in the presence and absence of humic acid/fulvic acid. The results indicate that the sorption of Th(IV) on MX-80 bentonite increases from 0 to 95% in the pH range 0–4, and then maintains a high level with increasing pH values. The sorption of Th(IV) on bentonite decreases with increasing ionic strength. The diffusion layer model (DLM) is applied to simulate the sorption of Th(IV) with the aid of the FITEQL 3.1 code. The species of Th(IV) adsorbed on bare MX-80 bentonite consist of the "strong" species
$\equiv {\text{YOHTh}}^{4 + }$
at low pH and "weak" species
$\equiv {\text{XOTh(OH)}}_{3}$
at pH > 4. On HA bound MX-80 bentonite, the species of Th(IV) adsorbed on HA-bentonite hybrids are mainly consisted of
$\equiv {\text{YOThL}}_{3}$ and $\equiv {\text{XOThL}}_{1}$
at pH < 4, and
at pH > 4. Similar species of Th(IV) adsorbed on FA bound MX-80 bentonite are observed as on HA bound MX-80 bentonite. The sorption isotherm is simulated by the Langmuir, Freundlich and Dubinin–Radushkevich (D–R) models, respectively. The sorption mechanism of Th(IV) on MX-80 bentonite is discussed in detail.
Influence of pH, humic acid, ionic strength, foreign ions, and temperature on 60Co(II) sorption onto γ-Al2O3
Authors: Caicai Zhang, Zhengjie Liu, Lei Chen, and Yunhui Dong
The sorption of 60Co(II) on γ-Al2O3 was conducted under various conditions, i.e., contact time, adsorbent content, pH, ionic strength, foreign ions, humic acid (HA), and temperature. Results of sorption data analysis indicated that the sorption of 60Co(II) on γ-Al2O3 was strongly dependent on pH and ionic strength. At low pH the sorption was dominated by outer-sphere surface complexation or ion exchange, whereas inner-sphere surface complexation was the main sorption mechanism at high pH. The presence of different cation ions influenced 60Co(II) sorption, while the presence of different anion ions had no obvious influences on 60Co(II) sorption. The presence of HA decreased the sorption of 60Co(II) on γ-Al2O3. The sorption isotherms were simulated well with the Langmuir model. The thermodynamic parameters (ΔH 0, ΔS 0 and ΔG 0) calculated from the temperature-dependent sorption isotherms indicated that the sorption of 60Co(II) on γ-Al2O3 was an endothermic and spontaneous process. Experimental results indicated that the low cost material was a suitable material in the preconcentration of 60Co(II) from large volumes of aqueous solutions.
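As an aside, the Langmuir fit reported in these abstracts can be illustrated with a short sketch. This is a generic illustration, not code or data from the studies: the function names (`langmuir`, `fit_langmuir`) and the numbers in the example are hypothetical, and the fit uses the standard linearization C/q = C/q_max + 1/(K·q_max):

```python
def langmuir(c, q_max, k):
    """Langmuir isotherm: sorbed amount q_e at equilibrium concentration c."""
    return q_max * k * c / (1.0 + k * c)


def fit_langmuir(concentrations, sorbed):
    """Recover (q_max, K) from isotherm data via the linearized form
    c/q = c/q_max + 1/(K * q_max), fitted by ordinary least squares."""
    xs = list(concentrations)
    ys = [c / q for c, q in zip(xs, sorbed)]
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    sxx = sum((x - mean_x) ** 2 for x in xs)
    sxy = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    slope = sxy / sxx              # equals 1 / q_max
    intercept = mean_y - slope * mean_x  # equals 1 / (K * q_max)
    q_max = 1.0 / slope
    k = slope / intercept
    return q_max, k
```

On noise-free synthetic data this recovers the parameters exactly; on real isotherm data one would compare the quality of the resulting fit against the Freundlich and D–R models, as the abstracts above describe.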
Impact of environmental conditions on sorption of 210Pb(II) to NKF-5 zeolite
Authors: Donglin Zhao, Caicai Zhang, Junzheng Xu, and Zhiwei Niu
The sorption of Pb(II) from aqueous solution using NKF-5 zeolite was investigated by batch technique under ambient conditions. The NKF-5 zeolite sample was characterized by using FTIR and X-ray powder diffraction in detail. The sorption of Pb(II) was investigated as a function of pH, ionic strength, foreign ions, and humic substances. The results indicated that the sorption of Pb(II) on NKF-5 zeolite was strongly dependent on pH. The sorption was dependent on ionic strength at low pH, but independent of ionic strength at high pH. At low pH, the sorption of Pb(II) was dominated by outer-sphere surface complexation and ion exchange with H+ on NKF-5 zeolite surfaces, whereas inner-sphere surface complexation was the main sorption mechanism at high pH. From the experimental results, one can conclude that NKF-5 zeolite has good potentialities for cost-effective preconcentration of Pb(II) from large volumes of aqueous solutions.
Adsorption and desorption of radionuclide europium(III) on multiwalled carbon nanotubes studied by batch techniques
Authors: Songsheng Lu, Junzheng Xu, Caicai Zhang, and Zhiwei Niu
The adsorption of Eu(III) on multiwalled carbon nanotubes (MWCNTs) as a function of pH, ionic strength and solid contents is studied by a batch technique. The results indicate that the adsorption of Eu(III) on MWCNTs is strongly dependent on pH values, dependent on ionic strength at low pH values and independent of ionic strength at high pH values. Strong surface complexation and ion exchange contribute to the adsorption of Eu(III) on MWCNTs at low pH values, whereas surface complexation and surface precipitation are the main adsorption mechanisms of Eu(III) on MWCNTs at high pH values. The desorption of adsorbed Eu(III) from MWCNTs by adding HCl is also studied, and the recycling use of MWCNTs in the removal of Eu(III) is investigated after the desorption of Eu(III) at low pH values. The results indicate that adsorbed Eu(III) can be easily desorbed from MWCNTs at low pH values, and MWCNTs can be repeatedly used to remove Eu(III) from aqueous solutions. MWCNTs are a suitable material for the preconcentration and solidification of radionuclides from large volumes of aqueous solutions in nuclear waste management.
Count On
Count On is a major mathematics education project in the United Kingdom which was announced by education secretary David Blunkett at the end of 2000. It was the follow-on to Maths Year 2000 which was the UK's contribution to UNICEF's World Mathematical Year.[1]
Count On had two main strands:
• The website www.counton.org[2] which won the 2002 BETT prize for best free online learning resource.[3]
• "MathFests", which were maths funfairs held around the country, aimed particularly at those who would not normally come into contact with mathematical ideas.[4]
The MathFests were run largely by MatheMagic and the University of York.
The project has now been handed over to the NCETM.
Popularisation of Mathematics
Count On and Maths Year 2000 were some of the first big Popularisation of Mathematics projects. Others are listed below.
International
• World Mathematical Year 2000
• Statistics 2013
• World Maths Day (orig. Australian) - next one is 6 March 2013
Australia
• World Maths Day
India
• National Mathematics Year
Ireland
• Maths Week Ireland
Nigeria
• National Mathematics Year
Spain
• Matematica Vital
• Paul Boron
United Kingdom
• Maths Year 2000 Scotland
• Maths Cymru (Wales)
United States
• Steven Strogatz's blog
References
1. English pupils lag behind in maths, BBC News, 5 December 2000.
2. "My Media: Kate Scarborough", The Guardian, 31 July 2006.
3. "Deputy logs on to £100,000", Times Educational Supplement, 18 January 2002, archived from the original on 5 October 2012, retrieved 24 July 2011.
4. "No doubt about it - we're addicted to maths", Times Educational Supplement, 19 January 2001, archived from the original on 5 October 2012, retrieved 24 July 2011.
| Wikipedia |
Feedback arc set
In graph theory and graph algorithms, a feedback arc set or feedback edge set in a directed graph is a subset of the edges of the graph that contains at least one edge out of every cycle in the graph. Removing these edges from the graph breaks all of the cycles, producing a directed acyclic graph, an acyclic subgraph of the given graph. The feedback arc set with the fewest possible edges is the minimum feedback arc set and its removal leaves the maximum acyclic subgraph; weighted versions of these optimization problems are also used. If a feedback arc set is minimal, meaning that removing any edge from it produces a subset that is not a feedback arc set, then it has an additional property: reversing all of its edges, rather than removing them, produces a directed acyclic graph.
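The defining property above can be checked directly: a candidate edge set is a feedback arc set exactly when removing it leaves an acyclic graph. The following is a minimal illustrative sketch (function names are our own, not from the article's sources), using Kahn's algorithm as the acyclicity test:

```python
from collections import defaultdict, deque

def is_acyclic(vertices, edges):
    """Kahn's algorithm: a digraph is acyclic iff every vertex
    can be consumed in a topological order."""
    indeg = {v: 0 for v in vertices}
    adj = defaultdict(list)
    for u, v in edges:
        adj[u].append(v)
        indeg[v] += 1
    queue = deque(v for v in vertices if indeg[v] == 0)
    seen = 0
    while queue:
        u = queue.popleft()
        seen += 1
        for w in adj[u]:
            indeg[w] -= 1
            if indeg[w] == 0:
                queue.append(w)
    return seen == len(vertices)

def is_feedback_arc_set(vertices, edges, fas):
    """True if removing the edges in fas breaks every cycle."""
    remaining = [e for e in edges if e not in fas]
    return is_acyclic(vertices, remaining)

# A directed 3-cycle: removing the single edge (2, 0) breaks the cycle.
V = [0, 1, 2]
E = [(0, 1), (1, 2), (2, 0)]
print(is_feedback_arc_set(V, E, {(2, 0)}))  # True
print(is_feedback_arc_set(V, E, set()))     # False
```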
Feedback arc sets have applications in circuit analysis, chemical engineering, deadlock resolution, ranked voting, ranking competitors in sporting events, mathematical psychology, ethology, and graph drawing. Finding minimum feedback arc sets and maximum acyclic subgraphs is NP-hard; it can be solved exactly in exponential time, or in fixed-parameter tractable time. In polynomial time, the minimum feedback arc set can be approximated to within a polylogarithmic approximation ratio, and maximum acyclic subgraphs can be approximated to within a constant factor. Both are hard to approximate closer than some constant factor, an inapproximability result that can be strengthened under the unique games conjecture. For tournament graphs, the minimum feedback arc set can be approximated more accurately, and for planar graphs both problems can be solved exactly in polynomial time.
A closely related problem, the feedback vertex set, is a set of vertices containing at least one vertex from every cycle in a directed or undirected graph. In undirected graphs, the spanning trees are the largest acyclic subgraphs, and the number of edges removed in forming a spanning tree is the circuit rank.
Applications
Several problems involving finding rankings or orderings can be solved by finding a feedback arc set on a tournament graph, a directed graph with one edge between each pair of vertices. Reversing the edges of the feedback arc set produces a directed acyclic graph whose unique topological order can be used as the desired ranking. Applications of this method include the following:
• In sporting competitions with round-robin play, the outcomes of each game can be recorded by directing an edge from the loser to the winner of each game. Finding a minimum feedback arc set in the resulting graph, reversing its edges, and topological ordering, produces a ranking on all of the competitors. Among all of the different ways of choosing a ranking, it minimizes the total number of upsets, games in which a lower-ranked competitor beat a higher-ranked competitor.[2][3][4] Many sports use simpler methods for group tournament ranking systems based on points awarded for each game;[5] these methods can provide a constant approximation to the minimum-upset ranking.[6]
• In primatology and more generally in ethology, dominance hierarchies are often determined by searching for an ordering with the fewest reversals in observed dominance behavior, another form of the minimum feedback arc set problem.[7][8][9]
• In mathematical psychology, it is of interest to determine subjects' rankings of sets of objects according to a given criterion, such as their preference or their perception of size, based on pairwise comparisons between all pairs of objects. The minimum feedback arc set in a tournament graph provides a ranking that disagrees with as few pairwise outcomes as possible.[2][10] Alternatively, if these comparisons result in independent probabilities for each pairwise ordering, then the maximum likelihood estimation of the overall ranking can be obtained by converting these probabilities into log-likelihoods and finding a minimum-weight feedback arc set in the resulting tournament.[2][3]
• The same maximum-likelihood ordering can be used for seriation, the problem in statistics and exploratory data analysis of arranging elements into a linear ordering, in cases where data is available that provides pairwise comparisons between the elements.[3][11][12]
• In ranked voting, the Kemeny–Young method can be described as seeking an ordering that minimizes the sum, over pairs of candidates, of the number of voters who prefer the opposite ordering for that pair.[13] This can be formulated and solved as a minimum-weight feedback arc set problem, in which the vertices represent candidates, edges are directed to represent the winner of each head-to-head contest, and the cost of each edge represents the number of voters who would be made unhappy by giving a higher ranking to the head-to-head loser.[14]
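The ranking applications above can be illustrated by a brute-force sketch (feasible only for tiny inputs, and our own illustration rather than any of the cited algorithms): searching all orderings of a small round-robin tournament for the one minimizing upsets. The backward edges of the best ordering form a minimum feedback arc set.

```python
from itertools import permutations

def min_upset_ranking(players, games):
    """games: (winner, loser) pairs, one per round-robin game.
    Returns a ranking (best first) minimizing the number of upsets,
    i.e. games whose winner ends up ranked below the loser."""
    best_order, best_upsets = None, float("inf")
    for order in permutations(players):
        rank = {p: i for i, p in enumerate(order)}   # 0 = best
        upsets = sum(1 for w, l in games if rank[w] > rank[l])
        if upsets < best_upsets:
            best_order, best_upsets = order, upsets
    return list(best_order), best_upsets

# A beats B and C; B beats C and D; C beats D; but D beats A,
# creating the cycle A -> C -> D -> A, so at least one upset is forced.
games = [("A", "B"), ("A", "C"), ("B", "C"),
         ("C", "D"), ("D", "A"), ("B", "D")]
order, upsets = min_upset_ranking(["A", "B", "C", "D"], games)
print(upsets)  # 1
```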
Another early application of feedback arc sets concerned the design of sequential logic circuits, in which signals can propagate in cycles through the circuit instead of always progressing from inputs to outputs. In such circuits, a minimum feedback arc set characterizes the number of points at which amplification is necessary to allow the signals to propagate without loss of information.[15] In synchronous circuits made from asynchronous components, synchronization can be achieved by placing clocked gates on the edges of a feedback arc set.[16] Additionally, cutting a circuit on a feedback arc set reduces the remaining circuit to combinational logic, simplifying its analysis, and the size of the feedback arc set controls how much additional analysis is needed to understand the behavior of the circuit across the cut.[15] Similarly, in process flowsheeting in chemical engineering, breaking edges of a process flow diagram on a feedback arc set, and guessing or trying all possibilities for the values on those edges, allows the rest of the process to be analyzed in a systematic way because of its acyclicity. In this application, the idea of breaking edges in this way is called "tearing".[17]
In layered graph drawing, the vertices of a given directed graph are partitioned into an ordered sequence of subsets (the layers of the drawing), and each subset is placed along a horizontal line of this drawing, with the edges extending upwards and downwards between these layers. In this type of drawing, it is desirable for most or all of the edges to be oriented consistently downwards, rather than mixing upwards and downwards edges, in order for the reachability relations in the drawing to be more visually apparent. This is achieved by finding a minimum or minimal feedback arc set, reversing the edges in that set, and then choosing the partition into layers in a way that is consistent with a topological order of the resulting acyclic graph.[18][19] Feedback arc sets have also been used for a different subproblem of layered graph drawing, the ordering of vertices within consecutive pairs of layers.[20]
In deadlock resolution in operating systems, the problem of removing the smallest number of dependencies to break a deadlock can be modeled as one of finding a minimum feedback arc set.[21][22] However, because of the computational difficulty of finding this set, and the need for speed within operating system components, heuristics rather than exact algorithms are often used in this application.[22]
Algorithms
Equivalences
The minimum feedback arc set and maximum acyclic subgraph are equivalent for the purposes of exact optimization, as one is the complement set of the other. However, for parameterized complexity and approximation, they differ, because the analysis used for those kinds of algorithms depends on the size of the solution and not just on the size of the input graph, and the minimum feedback arc set and maximum acyclic subgraph have different sizes from each other.[23]
A feedback arc set of a given graph $G$ is the same as a feedback vertex set of a directed line graph of $G$. Here, a feedback vertex set is defined analogously to a feedback arc set, as a subset of the vertices of the graph whose deletion would eliminate all cycles. The line graph of a directed graph $G$ has a vertex for each edge of $G$, and an edge for each two-edge path in $G$. In the other direction, the minimum feedback vertex set of a given graph $G$ can be obtained from the solution to a minimum feedback arc set problem on a graph obtained by splitting every vertex of $G$ into two vertices, one for incoming edges and one for outgoing edges. These transformations allow exact algorithms for feedback arc sets and for feedback vertex sets to be converted into each other, with an appropriate translation of their complexity bounds. However, this transformation does not preserve approximation quality for the maximum acyclic subgraph problem.[21][24]
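The vertex-splitting construction in the last step above is mechanical enough to sketch directly (names and representation are our own): each vertex v becomes a pair v_in, v_out joined by an internal arc, and each original edge (u, v) becomes (u_out, v_in), so cycles through v correspond to cycles through v's internal arc.

```python
def split_vertices(vertices, edges):
    """Transform a digraph so that feedback vertex sets of the original
    correspond to feedback arc sets (on internal arcs) of the result."""
    new_vertices = [(v, side) for v in vertices for side in ("in", "out")]
    internal = [((v, "in"), (v, "out")) for v in vertices]          # one per vertex
    external = [((u, "out"), (v, "in")) for u, v in edges]          # one per edge
    return new_vertices, internal + external

# A 2-cycle on vertices 0 and 1 becomes a 4-cycle alternating
# internal and external arcs.
V2, E2 = split_vertices([0, 1], [(0, 1), (1, 0)])
print(len(V2), len(E2))  # 4 4
```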
In both exact and approximate solutions to the feedback arc set problem, it is sufficient to solve separately each strongly connected component of the given graph, and to break these strongly connected components down even farther to their biconnected components by splitting them at articulation vertices. The choice of solution within any one of these subproblems does not affect the others, and the edges that do not appear in any of these components are useless for inclusion in the feedback arc set.[25] When one of these components can be separated into two disconnected subgraphs by removing two vertices, a more complicated decomposition applies, allowing the problem to be split into subproblems derived from the triconnected components of its strongly connected components.[26]
Exact
One way to find the minimum feedback arc set is to search for an ordering of the vertices such that as few edges as possible are directed from later vertices to earlier vertices in the ordering.[27] Searching all permutations of an $n$-vertex graph would take time $O(n!)$, but a dynamic programming method based on the Held–Karp algorithm can find the optimal permutation in time $O(n2^{n})$, also using an exponential amount of space.[28][29] A divide-and-conquer algorithm that tests all partitions of the vertices into two equal subsets and recurses within each subset can solve the problem in time $O(4^{n}/{\sqrt {n}})$, using polynomial space.[29]
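The $O(n2^{n})$ dynamic program can be sketched as follows (our own formulation in the spirit of the Held–Karp-style algorithm cited above): dp[S] is the fewest backward edges over all orderings that place exactly the vertex set S first; appending v next makes every edge into v from a still-unplaced vertex backward.

```python
def min_feedback_arc_set_size(n, edges):
    """Size of a minimum feedback arc set on vertices 0..n-1,
    by dynamic programming over subsets. O(n * 2^n) time and 2^n space."""
    into = [0] * n                       # bitmask of in-neighbours per vertex
    for u, v in edges:
        into[v] |= 1 << u
    INF = float("inf")
    dp = [INF] * (1 << n)
    dp[0] = 0
    for S in range(1 << n):              # forward order is safe: supersets come later
        if dp[S] == INF:
            continue
        rest = ((1 << n) - 1) & ~S       # vertices not yet placed
        m = rest
        while m:
            v = (m & -m).bit_length() - 1
            m &= m - 1
            # placing v next: edges into v from still-unplaced vertices go backward
            cost = bin(into[v] & rest & ~(1 << v)).count("1")
            T = S | (1 << v)
            if dp[S] + cost < dp[T]:
                dp[T] = dp[S] + cost
    return dp[(1 << n) - 1]

# A directed 3-cycle needs one edge removed; a path needs none.
print(min_feedback_arc_set_size(3, [(0, 1), (1, 2), (2, 0)]))  # 1
print(min_feedback_arc_set_size(3, [(0, 1), (1, 2)]))          # 0
```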
In parameterized complexity, the time for algorithms is measured not just in terms of the size of the input graph, but also in terms of a separate parameter of the graph. In particular, for the minimum feedback arc set problem, the so-called natural parameter is the size of the minimum feedback arc set. On graphs with $n$ vertices, with natural parameter $k$, the feedback arc set problem can be solved in time $O(n^{4}4^{k}k^{3}k!)$, by transforming it into an equivalent feedback vertex set problem and applying a parameterized feedback vertex set algorithm. Because the exponent of $n$ in this algorithm is the constant $4$, independent of $k$, this algorithm is said to be fixed-parameter tractable.[30]
Other parameters than the natural parameter have also been studied. A fixed-parameter tractable algorithm using dynamic programming can find minimum feedback arc sets in time $O(2^{r}m^{4}\log m)$, where $r$ is the circuit rank of the underlying undirected graph. The circuit rank is an undirected analogue of the feedback arc set, the minimum number of edges that need to be removed from a graph to reduce it to a spanning tree; it is much easier to compute than the minimum feedback arc set.[24] For graphs of treewidth $t$, dynamic programming on a tree decomposition of the graph can find the minimum feedback arc set in time polynomial in the graph size and exponential in $O(t\log t)$. Under the exponential time hypothesis, no better dependence on $t$ is possible.[31]
Instead of minimizing the size of the feedback arc set, researchers have also looked at minimizing the maximum number of edges removed from any vertex. This variation of the problem can be solved in linear time.[32] All minimal feedback arc sets can be listed by an algorithm with polynomial delay per set.[33]
Approximate
Unsolved problem in mathematics:
Does the feedback arc set problem have an approximation algorithm with a constant approximation ratio?
The best known polynomial-time approximation algorithm for the feedback arc set has the non-constant approximation ratio $O(\log n\log \log n)$. This means that the size of the feedback arc set that it finds is at most this factor larger than the optimum.[21] Determining whether feedback arc set has a constant-ratio approximation algorithm, or whether a non-constant ratio is necessary, remains an open problem.[34]
The maximum acyclic subgraph problem has an easy approximation algorithm that achieves an approximation ratio of ${\tfrac {1}{2}}$:
• Fix an arbitrary ordering of the vertices.
• Partition the edges into two acyclic subgraphs, one consisting of the edges directed consistently with the ordering, and the other consisting of edges directed oppositely to the ordering.
• Return the larger of the two subgraphs.
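The three steps above can be sketched directly (an illustrative implementation, with names of our own choosing):

```python
def half_approx_acyclic_subgraph(vertices, edges):
    """1/2-approximation for maximum acyclic subgraph: split the edges
    by an arbitrary vertex ordering and keep the larger half."""
    pos = {v: i for i, v in enumerate(vertices)}   # arbitrary fixed ordering
    forward = [(u, v) for u, v in edges if pos[u] < pos[v]]
    backward = [(u, v) for u, v in edges if pos[u] > pos[v]]
    # Each part is acyclic (all its edges agree with one linear order),
    # and the larger part has at least half of all edges.
    return forward if len(forward) >= len(backward) else backward

E = [(0, 1), (1, 2), (2, 0), (0, 2)]
sub = half_approx_acyclic_subgraph([0, 1, 2], E)
print(len(sub))  # 3: the forward edges (0, 1), (1, 2), (0, 2)
```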
This can be improved by using a greedy algorithm to choose the ordering. This algorithm finds and deletes a vertex whose numbers of incoming and outgoing edges are as far apart as possible, recursively orders the remaining graph, and then places the deleted vertex at one end of the resulting order. For graphs with $m$ edges and $n$ vertices, this produces an acyclic subgraph with $m/2+n/6$ edges, in linear time, giving an approximation ratio of ${\tfrac {1}{2}}+\Omega (n/m)$.[35] Another, more complicated, polynomial time approximation algorithm applies to graphs with maximum degree $\Delta $, and finds an acyclic subgraph with $m/2+\Omega (m/{\sqrt {\Delta }})$ edges, giving an approximation ratio of the form ${\tfrac {1}{2}}+\Omega (1/{\sqrt {\Delta }})$.[36][37] When $\Delta =3$, the approximation ratio $8/9$ can be achieved.[38]
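The greedy ordering described above can be sketched as follows (a simplified illustration of the idea, not the exact algorithm from the cited paper): repeatedly delete the vertex whose out-degree and in-degree differ most, placing source-like vertices at the left end of the order and sink-like vertices at the right end.

```python
def greedy_acyclic_subgraph(vertices, edges):
    """Greedy heuristic for maximum acyclic subgraph: build an ordering
    by peeling off the most source-like or sink-like vertex, then keep
    the edges consistent with that ordering."""
    left, right = [], []
    remaining = set(vertices)
    live = list(edges)
    while remaining:
        # out-degree minus in-degree for each still-present vertex
        deg = {v: 0 for v in remaining}
        for u, w in live:
            deg[u] += 1
            deg[w] -= 1
        v = max(remaining, key=lambda x: abs(deg[x]))
        if deg[v] >= 0:
            left.append(v)          # source-like: goes to the left end
        else:
            right.insert(0, v)      # sink-like: goes to the right end
        remaining.remove(v)
        live = [(u, w) for u, w in live if v not in (u, w)]
    pos = {v: i for i, v in enumerate(left + right)}
    return [(u, w) for u, w in edges if pos[u] < pos[w]]

# On a directed 3-cycle this keeps 2 of the 3 edges,
# matching the m/2 + n/6 = 2 guarantee stated above.
sub = greedy_acyclic_subgraph([0, 1, 2], [(0, 1), (1, 2), (2, 0)])
print(len(sub))  # 2
```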
Restricted inputs
In directed planar graphs, the feedback arc set problem is dual to the problem of contracting a set of edges (a dijoin) to make the resulting graph strongly connected.[39] This dual problem is polynomially solvable,[40] and therefore the planar minimum feedback arc set problem is as well.[41][40] It can be solved in time $O(n^{5/2}\log n)$.[42] A weighted version of the problem can be solved in time $O(n^{3})$,[39] or when the weights are positive integers that are at most a number $N$, in time $O(n^{5/2}\log nN)$.[42] These planar algorithms can be extended to the graphs that do not have the utility graph $K_{3,3}$ as a graph minor, using the fact that the triconnected components of these graphs are either planar or of bounded size.[26] Planar graphs have also been generalized in a different way to a class of directed graphs called weakly acyclic digraphs, defined by the integrality of a certain polytope associated with their feedback arc sets. Every planar directed graph is weakly acyclic in this sense, and the feedback arc set problem can be solved in polynomial time for all weakly acyclic digraphs.[43]
The reducible flow graphs are another class of directed graphs on which the feedback arc set problem may be solved in polynomial time. These graphs describe the flow of control in structured programs for many programming languages. Although structured programs often produce planar directed flow graphs, the definition of reducibility does not require the graph to be planar.[44]
When the minimum feedback arc set problem is restricted to tournaments, it has a polynomial-time approximation scheme, which generalizes to a weighted version of the problem.[45] A subexponential parameterized algorithm for weighted feedback arc sets on tournaments is also known.[14] The maximum acyclic subgraph problem for dense graphs also has a polynomial-time approximation scheme. Its main ideas are to apply randomized rounding to a linear programming relaxation of the problem, and to derandomize the resulting algorithm using walks on expander graphs.[34][46]
Hardness
NP-hardness
In order to apply the theory of NP-completeness to the minimum feedback arc set, it is necessary to modify the problem from being an optimization problem (how few edges can be removed to break all cycles) to an equivalent decision version, with a yes or no answer (is it possible to remove $k$ edges). Thus, the decision version of the feedback arc set problem takes as input both a directed graph and a number $k$. It asks whether all cycles can be broken by removing at most $k$ edges, or equivalently whether there is an acyclic subgraph with at least $|E(G)|-k$ edges. This problem is NP-complete, implying that neither it nor the optimization problem are expected to have polynomial time algorithms. It was one of Richard M. Karp's original set of 21 NP-complete problems; its NP-completeness was proved by Karp and Eugene Lawler by showing that inputs for another hard problem, the vertex cover problem, could be transformed ("reduced") into equivalent inputs to the feedback arc set decision problem.[47][48]
Some NP-complete problems can become easier when their inputs are restricted to special cases. But for the most important special case of the feedback arc set problem, the case of tournaments, the problem remains NP-complete.[49][50]
Inapproximability
The complexity class APX is defined as consisting of optimization problems that have a polynomial time approximation algorithm that achieves a constant approximation ratio. Although such approximations are not known for the feedback arc set problem, the problem is known to be APX-hard, meaning that accurate approximations for it could be used to achieve similarly accurate approximations for all other problems in APX. As a consequence of its hardness proof, unless P = NP, it has no polynomial time approximation ratio better than 1.3606. This is the same threshold for hardness of approximation that is known for vertex cover, and the proof uses the Karp–Lawler reduction from vertex cover to feedback arc set, which preserves the quality of approximations.[34][51][52][53] By a different reduction, the maximum acyclic subgraph problem is also APX-hard, and NP-hard to approximate to within a factor of 65/66 of optimal.[38]
The hardness of approximation of these problems has also been studied under unproven computational hardness assumptions that are standard in computational complexity theory but stronger than P ≠ NP. If the unique games conjecture is true, then the minimum feedback arc set problem is hard to approximate in polynomial time to within any constant factor, and the maximum feedback arc set problem is hard to approximate to within a factor of ${\tfrac {1}{2}}+\varepsilon $, for every $\varepsilon >0$.[54] Beyond polynomial time for approximation algorithms, if the exponential time hypothesis is true, then for every $\varepsilon >0$ the minimum feedback arc set does not have an approximation within a factor ${\tfrac {7}{6}}-\varepsilon $ that can be computed in the subexponential time bound $O(2^{n^{1-\varepsilon }})$.[55]
Theory
In planar directed graphs, the feedback arc set problem obeys a min-max theorem: the minimum size of a feedback arc set equals the maximum number of edge-disjoint directed cycles that can be found in the graph.[41][56] This is not true for some other graphs; for instance the first illustration shows a directed version of the non-planar graph $K_{3,3}$ in which the minimum size of a feedback arc set is two, while the maximum number of edge-disjoint directed cycles is only one.
Every tournament graph has a Hamiltonian path, and the Hamiltonian paths correspond one-for-one with minimal feedback arc sets, disjoint from the corresponding path. The Hamiltonian path for a feedback arc set is found by reversing its arcs and finding a topological order of the resulting acyclic tournament. Every consecutive pair of the order must be disjoint from the feedback arc sets, because otherwise one could find a smaller feedback arc set by reversing that pair. Therefore, this ordering gives a path through arcs of the original tournament, covering all vertices. Conversely, from any Hamiltonian path, the set of edges that connect later vertices in the path to earlier ones forms a feedback arc set. It is minimal, because each of its edges belongs to a cycle with the Hamiltonian path edges that is disjoint from all other such cycles.[57] In a tournament, it may be the case that the minimum feedback arc set and maximum acyclic subgraph are both close to half the edges. More precisely, every tournament graph has a feedback arc set of size ${\tbinom {n}{2}}/2-\Omega (n^{3/2})$, and some tournaments require size ${\tbinom {n}{2}}/2-O(n^{3/2})$.[58] For almost all tournaments, the size is at least ${\tbinom {n}{2}}/2-1.73n^{3/2}$.[59] Every directed acyclic graph $D$ can be embedded as a subgraph of a larger tournament graph, in such a way that $D$ is the unique minimum feedback arc set of the tournament. The size of this tournament has been defined as the "reversing number" of $D$, and among directed acyclic graphs with the same number of vertices it is largest when $D$ is itself an (acyclic) tournament.[60][61]
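The first fact above, that every tournament has a Hamiltonian path, can be demonstrated by a standard insertion argument, sketched here as runnable code (our own illustration): insert each new vertex before the first vertex on the path that it beats, or at the end if it beats none.

```python
def tournament_hamiltonian_path(players, beats):
    """Build a Hamiltonian path in a tournament given a predicate
    beats(a, b) that is True for exactly one of (a, b), (b, a)."""
    path = []
    for p in players:
        for i, q in enumerate(path):
            if beats(p, q):
                # p beats q but lost to path[i-1] (if any), so p fits here
                path.insert(i, p)
                break
        else:
            path.append(p)          # p beats no one on the path: append
    return path

# Cyclic tournament on 3 players: 0 beats 1, 1 beats 2, 2 beats 0.
wins = {(0, 1), (1, 2), (2, 0)}
path = tournament_hamiltonian_path([0, 1, 2], lambda a, b: (a, b) in wins)
# Every consecutive pair along the path is an edge of the tournament:
print(all((path[i], path[i + 1]) in wins for i in range(len(path) - 1)))  # True
```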
A directed graph has an Euler tour whenever it is strongly connected and each vertex has equal numbers of incoming and outgoing edges. For such a graph, with $m$ edges and $n$ vertices, the size of a minimum feedback arc set is always at least $(m^{2}+mn)/2n^{2}$. There are infinitely many Eulerian directed graphs for which this bound is tight.[62] If a directed graph has $n$ vertices, with at most three edges per vertex, then it has a feedback arc set of at most $n/3$ edges, and some graphs require this many. If a directed graph has $m$ edges, with at most four edges per vertex, then it has a feedback arc set of at most $m/3$ edges, and some graphs require this many.[63]
References
1. "Main draw – Men", Rio 2016, Fédération Internationale de Volleyball, archived from the original on 2016-12-23, retrieved 2021-11-14
2. Hubert, Lawrence (1976), "Seriation using asymmetric proximity measures", British Journal of Mathematical and Statistical Psychology, 29 (1): 32–52, doi:10.1111/j.2044-8317.1976.tb00701.x, MR 0429180
3. Remage, Russell Jr.; Thompson, W. A. Jr. (1966), "Maximum-likelihood paired comparison rankings", Biometrika, 53 (1–2): 143–149, doi:10.1093/biomet/53.1-2.143, JSTOR 2334060, MR 0196854, PMID 5964054
4. Goddard, Stephen T. (1983), "Ranking in tournaments and group decisionmaking", Management Science, 29 (12): 1384–1392, doi:10.1287/mnsc.29.12.1384, MR 0809110; note that the algorithm suggested by Goddard for finding minimum-violation rankings is incorrect
5. Vaziri, Baback; Dabadghao, Shaunak; Yih, Yuehwern; Morin, Thomas L. (January 2018), "Properties of sports ranking methods" (PDF), Journal of the Operational Research Society, 69 (5): 776–787, doi:10.1057/s41274-017-0266-8, S2CID 51887586
6. Coppersmith, Don; Fleischer, Lisa K.; Rurda, Atri (2010), "Ordering by weighted number of wins gives a good ranking for weighted tournaments", ACM Transactions on Algorithms, 6 (3): A55:1–A55:13, doi:10.1145/1798596.1798608, MR 2682624, S2CID 18416
7. Seyfarth, Robert M. (November 1976), "Social relationships among adult female baboons", Animal Behaviour, 24 (4): 917–938, doi:10.1016/s0003-3472(76)80022-x, S2CID 54284406
8. Estep, D.Q.; Crowell-Davis, S.L.; Earl-Costello, S.-A.; Beatey, S.A. (January 1993), "Changes in the social behaviour of drafthorse (Equus caballus) mares coincident with foaling", Applied Animal Behaviour Science, 35 (3): 199–213, doi:10.1016/0168-1591(93)90137-e
9. Eickwort, George C. (April 2019), "Dominance in a cockroach (Nauphoeta)", Insect Behavior, CRC Press, pp. 120–126, doi:10.1201/9780429049262-18, S2CID 203898549
10. Slater, Patrick (1961), "Inconsistencies in a schedule of paired comparisons", Biometrika, 48 (3–4): 303–312, doi:10.1093/biomet/48.3-4.303, JSTOR 2332752
11. Brunk, H. D. (1960), "Mathematical models for ranking from paired comparisons", Journal of the American Statistical Association, 55 (291): 503–520, doi:10.2307/2281911, hdl:2027/mdp.39015095254010, JSTOR 2281911, MR 0115242
12. Thompson, W. A., Jr.; Remage, Russell, Jr. (1964), "Rankings from paired comparisons", Annals of Mathematical Statistics, 35 (2): 739–747, doi:10.1214/aoms/1177703572, JSTOR 2238526, MR 0161419
13. Kemeny, John G. (Fall 1959), "Mathematics without numbers", Daedalus, 88 (4): 577–591, JSTOR 20026529
14. Karpinski, Marek; Schudy, Warren (2010), "Faster algorithms for feedback arc set tournament, Kemeny rank aggregation and betweenness tournament", in Cheong, Otfried; Chwa, Kyung-Yong; Park, Kunsoo (eds.), Algorithms and Computation - 21st International Symposium, ISAAC 2010, Jeju Island, Korea, December 15-17, 2010, Proceedings, Part I, Lecture Notes in Computer Science, vol. 6506, Springer, pp. 3–14, arXiv:1006.4396, doi:10.1007/978-3-642-17517-6_3, S2CID 16512997
15. Unger, Stephen H. (April 26, 1957), A study of asynchronous logical feedback networks, Technical reports, vol. 320, Massachusetts Institute of Technology, Research Laboratory of Electronics, hdl:1721.1/4763
16. Feehrer, John R.; Jordan, Harry F. (December 1995), "Placement of clock gates in time-of-flight optoelectronic circuits", Applied Optics, 34 (35): 8125–8136, Bibcode:1995ApOpt..34.8125F, doi:10.1364/ao.34.008125, PMID 21068927
17. Rosen, Edward M.; Henley, Ernest J. (Summer 1968), "The New Stoichiometry", Chemical Engineering Education, 2 (3): 120–125, archived from the original on 2021-08-02, retrieved 2021-08-02
18. Di Battista, Giuseppe; Eades, Peter; Tamassia, Roberto; Tollis, Ioannis G. (1998), "Layered Drawings of Digraphs", Graph Drawing: Algorithms for the Visualization of Graphs, Prentice Hall, pp. 265–302, ISBN 9780133016154
19. Bastert, Oliver; Matuszewski, Christian (2001), "Layered drawings of digraphs", in Kaufmann, Michael; Wagner, Dorothea (eds.), Drawing Graphs: Methods and Models, Lecture Notes in Computer Science, vol. 2025, Springer-Verlag, pp. 87–120, doi:10.1007/3-540-44969-8_5
20. Demetrescu, Camil; Finocchi, Irene (2001), "Break the "right" cycles and get the "best" drawing", ACM Journal of Experimental Algorithmics, 6: 171–182, MR 2027115
21. Even, G.; Naor, J.; Schieber, B.; Sudan, M. (1998), "Approximating minimum feedback sets and multicuts in directed graphs", Algorithmica, 20 (2): 151–174, doi:10.1007/PL00009191, MR 1484534, S2CID 2437790
22. Minoura, Toshimi (1982), "Deadlock avoidance revisited", Journal of the ACM, 29 (4): 1023–1048, doi:10.1145/322344.322351, MR 0674256, S2CID 5284738
23. Mishra, Sounaka; Sikdar, Kripasindhu (2004), "On approximability of linear ordering and related NP-optimization problems on graphs", Discrete Applied Mathematics, 136 (2–3): 249–269, doi:10.1016/S0166-218X(03)00444-X, MR 2045215
24. Hecht, Michael (2017), "Exact localisations of feedback sets", Theory of Computing Systems, 62 (5): 1048–1084, arXiv:1702.07612, doi:10.1007/s00224-017-9777-6, S2CID 18394348
25. Park, S.; Akers, S.B. (1992), "An efficient method for finding a minimal feedback arc set in directed graphs", Proceedings of the 1992 IEEE International Symposium on Circuits and Systems (ISCAS '92), vol. 4, pp. 1863–1866, doi:10.1109/iscas.1992.230449, S2CID 122603659
26. Nutov, Zeev; Penn, Michal (2000), "On integrality, stability and composition of dicycle packings and covers", Journal of Combinatorial Optimization, 4 (2): 235–251, doi:10.1023/A:1009802905533, MR 1772828, S2CID 207632524
27. Younger, D. (1963), "Minimum feedback arc sets for a directed graph", IEEE Transactions on Circuit Theory, 10 (2): 238–245, doi:10.1109/tct.1963.1082116
28. Lawler, E. (1964), "A comment on minimum feedback arc sets", IEEE Transactions on Circuit Theory, 11 (2): 296–297, doi:10.1109/tct.1964.1082291
29. Bodlaender, Hans L.; Fomin, Fedor V.; Koster, Arie M. C. A.; Kratsch, Dieter; Thilikos, Dimitrios M. (2012), "A note on exact algorithms for vertex ordering problems on graphs", Theory of Computing Systems, 50 (3): 420–432, doi:10.1007/s00224-011-9312-0, hdl:1956/4556, MR 2885638, S2CID 9967521
30. Chen, Jianer; Liu, Yang; Lu, Songjian; O'Sullivan, Barry; Razgon, Igor (2008), "A fixed-parameter algorithm for the directed feedback vertex set problem", Journal of the ACM, 55 (5): 1–19, doi:10.1145/1411509.1411511, S2CID 1547510
Wallman compactification
In mathematics, the Wallman compactification, generally called Wallman–Shanin compactification is a compactification of T1 topological spaces that was constructed by Wallman (1938).
Definition
The points of the Wallman compactification ωX of a space X are the maximal proper filters in the poset of closed subsets of X. Explicitly, a point of ωX is a family ${\mathcal {F}}$ of closed nonempty subsets of X such that ${\mathcal {F}}$ is closed under finite intersections, and is maximal among those families that have these properties. For every closed subset F of X, the class ΦF of points of ωX containing F is closed in ωX. The topology of ωX is generated by these closed classes.
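A concrete family of such points (an editorial illustration, not part of the original article): for every x ∈ X, the closed sets containing x form such a maximal proper filter, which yields the canonical map X → ωX.

```latex
% For a point x of a T1 space X, consider the family
\[
  \mathcal{F}_x \;=\; \{\, F \subseteq X \ \text{closed} \mid x \in F \,\}.
\]
% It is closed under finite intersections, and it is maximal: adjoining a
% closed set G with x \notin G would force G \cap \{x\} = \emptyset into the
% family (the singleton \{x\} is closed because X is T1), contradicting the
% requirement that all members be nonempty. The assignment
% x \mapsto \mathcal{F}_x is the canonical map of X into \omega X.
```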
Special cases
For normal spaces, the Wallman compactification is essentially the same as the Stone–Čech compactification.
See also
• Lattice (order)
• Pointless topology
References
• Aleksandrov, P.S. (2001) [1994], "Wallman_compactification", Encyclopedia of Mathematics, EMS Press
• Wallman, Henry (1938), "Lattices and topological spaces", Annals of Mathematics, 39 (1): 112–126, doi:10.2307/1968717, JSTOR 1968717
What is Noether's theorem and how does it break conservation laws?
I was on http://worldbuilding.stackexchange.com when I read a question about how to explain magic breaking physical laws, and in one answer they talked about Noether's theorem and how to break conservation laws. I couldn't understand any of it (I am not sure whether it is the theorem itself or a translation barrier), so I decided to search a little more and found a lot of information on Physics.SE, but those answers are too complicated and I can't understand anything. Could someone explain to me what Noether's theorem is, in a simple way? I mean a basic explanation, no hard science.
lagrangian-formalism conservation-laws symmetry noethers-theorem
Ender Look
$\begingroup$ Related: physics.stackexchange.com/q/4959/2451 , physics.stackexchange.com/q/91722/2451 , physics.stackexchange.com/q/325916/2451 and links therein. $\endgroup$ – Qmechanic♦ Jun 13 '17 at 19:38
Could someone explain to me what Noether's theorem is, in a simple way? I mean a basic explanation, no hard science.
(1) Start with something called the Lagrangian of a system and the rules for deriving the equations of motion from the Lagrangian.
(2) A symmetry of the Lagrangian is a transformation (of coordinates etc.) that leaves the Lagrangian unchanged and thus, the equations of motion unchanged.
(3) Noether showed that there is a conservation law associated with a (differentiable) symmetry of the Lagrangian.
Here are some examples.
If the Lagrangian is unchanged by a translation in space (loosely, moving the origin of the coordinate system) we say that the Lagrangian has a spatial translation symmetry and the associated conservation law is conservation of (linear) momentum.
If the Lagrangian is unchanged by a rotation, there is a rotational symmetry and the associated conservation law is conservation of angular momentum.
If the Lagrangian is unchanged by a translation in time (change the zero on the clock), there is a temporal translation symmetry and the associated conservation law is conservation of energy.
So, for example, to 'break' conservation of (linear) momentum would require that physical law somehow depend on location so that the Lagrangian is changed by a spatial translation.
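The correspondence can be checked numerically. The sketch below (an editorial illustration, not part of the original answer; the integrator and all parameter values are arbitrary choices) integrates a particle with a translation-invariant Lagrangian (V = 0), where momentum stays exactly constant, and one with V(x) = 2x, where the broken symmetry makes momentum drift at the rate −dV/dx:

```python
# Sketch: translation symmetry <-> momentum conservation, checked numerically.
# Free particle (V = 0): L is unchanged by x -> x + c, so p = m*v is conserved.
# Linear potential (V = 2x): symmetry broken, p changes at rate -dV/dx = -2.

def simulate(force, x=0.0, p=1.0, m=1.0, dt=1e-3, steps=10_000):
    """Symplectic (semi-implicit) Euler integration; returns final momentum."""
    for _ in range(steps):
        p += force(x) * dt       # dp/dt = F(x) = -dV/dx
        x += (p / m) * dt        # dx/dt = p/m
    return p

p_free   = simulate(lambda x: 0.0)     # V = 0  -> translation-invariant
p_broken = simulate(lambda x: -2.0)    # V = 2x -> F = -2, symmetry broken
```

With the symmetry intact the momentum never changes; with the linear potential it decreases by 2 per unit time over the 10 simulated time units, exactly as the symmetry argument predicts.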
On a more advanced note, the Standard Model (the best model we have of the elementary particles and their interactions) is based in part on the idea that the fundamental interactions are most elegantly introduced by promoting a global symmetry of a Lagrangian to a local symmetry which requires the introduction of new terms in the Lagrangian called gauge fields.
For example, the electromagnetic field is a gauge field that is required to give the Lagrangian a local U(1) symmetry.
This is well beyond the scope of your question but I put this here just to emphasize how important symmetries of the Lagrangian and Noether's theorem are to fundamental physics.
Hal Hollis
$\begingroup$ When you said "If the Lagrangian is unchanged..." do you mean that all the "normal" physics laws are unchanged? $\endgroup$ – Ender Look Jun 13 '17 at 20:51
$\begingroup$ @EnderLook, I'm not sure I understand what you're asking; I don't understand "with unchanged all the "normal" physics laws?" $\endgroup$ – Hal Hollis Jun 13 '17 at 21:02
$\begingroup$ @EnderLook, for example, think of Newton's theory of gravity (and assume it is the correct description of gravity) and the gravitational constant $G$. Since $G$ is constant in time, energy is conserved in Newtonian gravity. However, if $G$ changed with time, energy would not be conserved. For example, if $G$ increased with time, then one could hoist a large mass up to the top of a tower, wait some time for $G$ to increase, and then get 'free' energy by letting the mass do work as it falls. The amount of work to raise the mass would be less than the amount of work done lowering the mass. $\endgroup$ – Hal Hollis Jun 13 '17 at 21:12
Noether's theorem relates conservation laws to symmetries of a system.
What is a conservation law?
A conservation law is a statement that a measurable property of a system does not change with time. For example, if we denote the total energy of a system to be $H$, then energy conservation is the statement that,
$$\frac{dH}{dt} = 0$$
which if you're not familiar with calculus is essentially saying the change in energy over some period of time is zero, as expected if it were conserved.
What is a symmetry?
Mathematically, we often describe systems in physics using a Lagrangian, and one can make precise the meaning of a symmetry.
In layman's terms, it essentially means that the physics of a system remain the same if we make a certain change. For example, time translation invariance means that if we shift time $t\to t+c$, then the physical laws remain the same.
Noether's theorem relates to every conservation law, a physical symmetry of a system. In fact, in general, it shows us how to compute the conserved quantity given a symmetry. For example, in the case of time translation, one can show the conserved quantity is $H$, the total energy.
Symmetry Breaking
As it relates to your world-building concern: if you want to break a conservation law, this is only possible if the system no longer possesses the symmetry, or in one more subtle case.
We have explicit symmetry breaking which means the system flat out does not possess the symmetry. The other case is spontaneous symmetry breaking which is when the equations that describe a system have the symmetry, but a state of the system does not.
In the more complicated case of spontaneous breaking, Goldstone's theorem shows there are rather deep implications; the Higgs boson in the Standard Model relates to this, for example.
JamalS
$\begingroup$ Thanks for your answer, but could you please explain a little about those 3 items from the Worldbuilding.SE question? • Time invariance, • Translational invariance, • Rotational invariance. The first I believe you have already explained, the second I believe I get, but I didn't understand the third. $\endgroup$ – Ender Look Jun 13 '17 at 19:57
$\begingroup$ @EnderLook If a system has rotational invariance, it means the physics stays the same if we rotate it. The conservation law this gives rise to is that of conservation of angular momentum. Basically, momentum conservation is because we can make translations in position and the physics stays the same, so the angular momentum is because we can make "translations" in "angles" and the physics stays the same. $\endgroup$ – JamalS Jun 13 '17 at 19:59
The short version is this: every conserved quantity in physics (like energy and momentum) corresponds to some way in which the physical laws that govern reality are invariant (don't change) when you transform the situation somehow. For example, if I hit a puck on an air hockey table why does it move in a straight line at constant speed (ignoring friction)? The answer is because the table is flat - it's invariant under translations. If the table were curved then the puck wouldn't move in a straight line, even without gravity.
Sean E. Lake
See Wikipedia's article on "Noether's Theorem"
Noether's (first) theorem states that every differentiable symmetry of the action of a physical system has a corresponding conservation law.
For example, I would think, if one has a ring of charge and rotates that ring in its plane, the charge is still conserved:
From: https://en.wikipedia.org/wiki/Charge_conservation#Connection_to_gauge_invariance
Charge conservation can also be understood as a consequence of symmetry through Noether's theorem, a central result in theoretical physics that asserts that each conservation law is associated with a symmetry of the underlying physics...
Stephen Elliott
$\begingroup$ Hi Stephen, the down vote is probably because the symmetry associated with conservation of electric charge is the global gauge invariance of the electromagnetic Lagrangian $\endgroup$ – Hal Hollis Jun 13 '17 at 20:00
$\begingroup$ Hello Hal Hollis - Could you explain yourself further with references? I feel hesitant to ask why Noether's theorem does not apply to charge, as it seems like a conserved quantity to me, even under a single rotation in the plane of the ring, from the definition. $\endgroup$ – Stephen Elliott Jun 19 '17 at 5:50
$\begingroup$ Stephen, Noether's theorem does apply to conservation of charge and I didn't imply otherwise. The problem with your answer is that the symmetry associated with conservation of charge, by Noether's theorem, has nothing at all to do with the rotation of a ring of charge. Instead, it has to do with an abstract symmetry called global U(1) gauge invariance. See, for example, this $\endgroup$ – Hal Hollis Jun 20 '17 at 17:20
$\begingroup$ The edit has hardly improved things - if anything, it's only made a misleading answer more misleading. The conservation of electric charge has nothing to do with physical rotations (it's a rather more involved symmetry; see physics.stackexchange.com/q/230712 and physics.stackexchange.com/q/48305, or indeed the Wikipedia article already linked to, for details). At present this is being misattributed in a way that's directly misleading. $\endgroup$ – Emilio Pisanty Jun 21 '17 at 9:08
$\begingroup$ ... none of which, however, touches the core problem of this answer, which is that it is simply not relevant to the question as posed. $\endgroup$ – Emilio Pisanty Jun 21 '17 at 9:10
Deriving shallow water equations from Euler's equations
I would like to derive the one-dimensional shallow water equations from Euler's equations. This works perfectly for the conservation of mass. In particular, the meaning of the longitudinal fluid velocity $\bar u$ in the shallow water equations becomes clear. It can be interpreted as the average of the longitudinal velocity in Euler's equations over the height above ground level.
But, in Euler's balance of momentum the longitudinal velocity occurs squared and the average of the squared velocity is not necessarily equal to the square of the averaged velocity. I am stuck at this point.
Next, there follows what I have so far.
propagation in the direction of the $x$-axis (unit vector $\vec{e}_x$)
$y$-axis points upwards (unit vector $\vec{e}_y$)
everything is constant in $z$-direction (unit vector $\vec{e}_z$; this direction is mostly left out). Volume integrals become area integrals and surface integrals become line integrals. If the path runs with positive circulation the outer surface normal can be calculated via $d\vec{r}\times\vec{e}_z$.
incompressible medium; The density $\rho(x,y,t)$ is constant.
The ground at $y=0$ is flat. The water height is $h(x,t)$. Region of water: $0\leq y \leq h(x,t)$, region of air: $h(x,t) < y$.
The static relative pressure (w.r.t. atmospheric pressure) is $p(x,y,t)=g\rho(h(x,t)-y)$ where $g$ is the gravitational acceleration.
No friction.
We describe the fluid motion through the height above ground $h(x,t)$ and the velocity field $\vec{v}(x,y,t)$ in the fluid region.
At first the "working" case of Euler's mass balance. A fluid motion satisfies Euler's mass balance if for all parts $A$ of the fluid cross-section area in the $(x,y)$-plane there holds the equation $$ \partial_t\int_{A} \rho dA + \int_{\partial A} \rho\, \vec{v}(x,y,t)\cdot d\vec{r}\times\vec{e}_z = 0. $$ Euler's mass balance leads to the mass balance of the shallow water equations if one restricts the choice of areas to stripes $A:=\{(x,y)\in[x_1,x_2]\times\mathbb{R}\mid 0\leq y \leq h(x,t)\}$ for $x_1 < x_2$.
On the bottom $y=0$ the velocity $\vec v(x,0,t)$ is parallel to the path element $d\vec{r}$, the scalar triple product in the path integral over $\partial A$ is zero, and thus the contribution of this section of $\partial A$ to the path integral in Euler's mass balance is zero.
Because of the height changing with time there is actually a normal component of the velocity at the top. But, this is already considered in the time-dependent area in the area integral.
It is easier to consider the fluid mass between the half planes $A_x:=\{(x,y,z)\in\mathbb{R}^3\mid y\geq 0\}$ (note the fixed $x$) at $x=x_1$ and $x=x_2$. The growth of this fluid mass together with the out-flow of the fluid mass through the planes $A_{x_1}$, $A_{x_2}$ must balance to zero. This leads directly to the formula $$ \partial_t \int_{x_1}^{x_2}h(x,t)dx + \left[\int_{0}^{h(x,t)}u(x,y,t)dy\right]_{x=x_1}^{x_2} = 0 $$ where $u(x,y,t)=\vec{e}_x\cdot \vec{v}(x,y,t)$ is the $x$-component of the fluid velocity $\vec{v}$. Thereby, we have already divided through the constant density $\rho$. With the averaged longitudinal velocity $$ \bar u(x,t):=\frac1{h(x,t)}\int_0^{h(x,t)}u(x,y,t)dy $$ the last formula gives after differentiation w.r.t. $x_2$ and renaming $x_2\mapsto x$ the mass balance of the shallow water equations:
$$ \partial_t h(x,t) + \partial_x \bigl(h(x,t) \bar u(x,t)\bigr) =0 $$
Now, the more difficult case of the momentum balance. The momentum balance from Euler's equations is satisfied if for all parts $A$ of the fluid cross-section area in the $(x,y)$-plane the equation $$ \partial_t \int_{A} \rho \vec{v} d A + \int_{\partial A} \rho \vec{v} \vec{v}\cdot d \vec{r}\times \vec{e}_z = \int_{\partial A} p d \vec{r}\times \vec{e}_z = \int_{\partial A} g\rho \left(h(x,t)-y\right) d \vec{r}\times \vec{e}_z $$ is satisfied.
We get rid of the constant density and only consider the $x$-component of this formula by multiplying it with $\frac1{\rho}\vec{e}_x$ and we restrict the area to sections $A=\{(x,y)\in[x_1,x_2]\times\mathbb{R}\mid 0\leq y\leq h(x,t)\}$ with $x_1<x_2$. For simplification we again integrate over the half planes $A_{x_1}$ and $A_{x_2}$ $$ \partial_t\int_{x_1}^{x_2} \int_0^{h(x,t)} u(x,y,t) d y\, d x + \left[ \int_{y=0}^{h(x,t)} u^2(x,y,t)d y \right]_{x=x_1}^{x_2}= \left[ \frac12 gh(x,t)^2 \right]_{x=x_1}^{x_2} $$ Substituting $\int_0^h u dy=h\cdot \bar u$ yields $$ \partial_t \int_{x_1}^{x_2} h(x,t)\bar u(x,t) d x + \left[ \int_{y=0}^{h(x,t)} u^2(x,y,t)d y \right]_{x=x_1}^{x_2}= \left[ \frac12 gh(x,t)^2 \right]_{x=x_1}^{x_2}. $$ Here, I am stuck. The substitution $\int_0^h u^2 dy = h {\bar u}^2$ is possible if $u(x,y,t)$ is independent of the height $y$.
If this is really the case what is the reasoning for this assumption? How do we come to know the vertical distribution of $u$?
If this assumption is true then we arrive at the momentum balance for the shallow water equations through differentiation w.r.t. $x_2$ and renaming $x_2\mapsto x$: $$ \partial_t\bigl(h\bar u\bigr)(x,t) + \partial_x \left(h(x,t)\bar u(x,t)^2 - \frac 12 g h(x,t)^2\right)=0 $$
There is a similar derivation at http://www.whoi.edu/fileserver.do?id=136564&pt=10&p=85713. But, there it is just assumed that $u$ does not depend on $y$.
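Not part of the original question, but for readers who want to experiment: a minimal Lax–Friedrichs discretisation of the 1D shallow water system derived above (written here with the textbook sign convention $\partial_t(h\bar u)+\partial_x(h\bar u^2+\tfrac12 gh^2)=0$, whose pressure-term sign differs from the orientation used in the question; the grid parameters are arbitrary choices). On a periodic grid the scheme conserves total mass, mirroring the mass balance derived above.

```python
# Minimal Lax-Friedrichs solver for the 1D shallow water equations
#   h_t + (h u)_x = 0
#   (h u)_t + (h u^2 + g h^2 / 2)_x = 0      (textbook sign convention)
# Periodic boundary, pure Python. A sketch, not production code.
g, N, dx, dt = 9.81, 200, 1.0 / 200, 5e-4

h = [1.0 + (0.1 if 80 <= i < 120 else 0.0) for i in range(N)]  # height bump
q = [0.0] * N                                                  # q = h*u, at rest

def flux(h_i, q_i):
    u = q_i / h_i
    return q_i, q_i * u + 0.5 * g * h_i * h_i

mass_before = sum(h) * dx
for _ in range(400):
    fh, fq = [0.0] * N, [0.0] * N
    for i in range(N):
        fh[i], fq[i] = flux(h[i], q[i])
    h_new, q_new = [0.0] * N, [0.0] * N
    for i in range(N):
        l, r = (i - 1) % N, (i + 1) % N   # periodic neighbours
        h_new[i] = 0.5 * (h[l] + h[r]) - 0.5 * dt / dx * (fh[r] - fh[l])
        q_new[i] = 0.5 * (q[l] + q[r]) - 0.5 * dt / dx * (fq[r] - fq[l])
    h, q = h_new, q_new
mass_after = sum(h) * dx
```

The time step respects the CFL condition ($\sqrt{gh}\,dt/dx \approx 0.33 < 1$), and the telescoping flux differences make the discrete total mass constant up to rounding.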
fluid-dynamics
asked Jan 9, 2014 at 12:15
Tobias
Your analysis is absolutely correct. One of the reasons that the shallow water equations contain the word "shallow" in their title is that, in terms of your coordinate system, they assume that the variation of the x-velocity along the y-dimension of the fluid is negligible. This would not be reasonable in general if the vertical height of the fluid were large compared to lateral length scales (i.e. the wavelengths that contain most of the energy of the fluid motion). It is reasonable if the vertical length scale is small compared to other length scales.
To be more explicit, consider some type of shallow water wave, like a gravity wave, with wavenumber $k$ (units length$^{-1}$) and a fluid of height $h$. The shallow water equations are suitable when you have most of your energy in waves that satisfy $kh << 1$. Otherwise, you have to use the full fluid equations.
This is all discussed in the wikipedia article on the shallow water equations. You should consider looking at this article by David Randall, which is presently linked from the wikipedia article.
You might also consider looking at Landau & Lifshitz book on fluid mechanics, sections 9 through 14 to see a treatment of shallow water gravity waves from the point of view of velocity potential.
Finally, be wary of the effects of surface tension on shallow water gravity waves. If it's important, you're dealing with capillary waves. See section 62 of Landau & Lifshitz.
kleingordon
$\begingroup$ Okay, I have to look deeper into the statement "horizontal pressure gradients are due to the displacement of the pressure surface, implying that the horizontal velocity field is constant throughout the depth of the fluid" from en.wikipedia.org/wiki/Shallow_water_equations. Thanks for the hint; this ensures you the bounty. I will see whether I can request Landau & Lifshitz from our library. $\endgroup$ – Tobias
A Fractional Lower-order Bi-spectrum Estimation Method Based on Autoregressive Model
Baohai Yang
College of Physics and Electronic Engineering, Guangxi Normal University for Nationalities, Chongzuo 532200, China
[email protected]
The traditional fractional lower-order (FLO) spectrum estimation methods cannot observe a sufficient volume of data, leading to a large variance in the estimation results. To solve this problem, this paper puts forward a FLO bi-spectrum estimation method based on an autoregressive (AR) model, and gives a new definition of the FLO third-order cumulant. The author discusses the determination of the AR model parameters, and introduces how to implement the bi-spectrum estimation method based on the AR model. Then, a series of tests were performed to verify the correctness of the method. The results show that the proposed method outperforms traditional approaches in suppressing FLO noise and identifying the relevant information of signals.
autoregressive (AR) model, bi-spectrum, fractional lower-order (FLO) statistics, third-order cumulant, signal processing
In the field of signal processing, noise is mostly described by a Gaussian distribution model, which works excellently for second-order statistics under the normal distribution [1]. When it comes to the frequency-domain analysis of noise, the existing theoretical methods include spectral feature analysis, time-frequency analysis, spectrum estimation, colored noise whitening, spatial spectrum estimation, frequency estimation, harmonic estimation and spectral line restoration. Most of these methods are based on second- and higher-order statistics such as the bi-spectrum and tri-spectrum [2].
Gaussian white noise and colored noise models have always occupied the leading position in signal processing, and the criterion for white and colored noise based on the correlation function and power spectral density has been regarded as a classical rule. In practical applications, however, many noises do not conform to the normal distribution. Typical examples include low-frequency atmospheric noise and underwater noise. A viable option for describing such noises lies in the α stable distribution, whose statistical features can be characterized by the parameters of its characteristic function [3].
The best way to handle the non-Gaussian α stable distribution is FLO statistics. If α equals 2, the distribution reduces to the Gaussian case and the harmonic frequencies can be estimated accurately; if α is smaller than 2, the second-order moment does not exist, making it impossible to analyze the noise effectively [2]. To solve this problem, the FLO bi-spectrum has been conceptualized. Traditionally, the lower-order bi-spectrum is estimated by direct (nonparametric) methods. Nonetheless, these methods cannot observe a sufficient volume of data, leading to a large variance of the estimation results.
In view of the above, this paper puts forward a FLO bi-spectrum estimation method based on an autoregressive (AR) model [5], and compares the method with traditional approaches. The comparison shows that our method outperforms the traditional ones in spectral flatness and in the suppression of FLO noise.
2. α Stable Distribution and FLO Statistics
This section briefly introduces the α stable distribution, a generalization of the Gaussian distribution with a wide scope of application, and defines the relevant FLO statistics, which are the best way to filter noise with a non-Gaussian α stable distribution.
2.1 α stable distribution
A random variable X obeys an α stable distribution if there exist parameters 0<α≤2, γ≥0, −1≤β≤1 and a real number a such that its characteristic function takes the following form:
$\varphi (t)=\exp \{jat-\gamma {{\left| t \right|}^{a}}[1+j\beta sgn (t)\omega (t,\alpha )]\}$ (1)
$\omega (t,\alpha )=\left\{ \begin{matrix}\begin{matrix} \tan (\pi \alpha /2), & \alpha \ne 1 \\\end{matrix} \\ \begin{matrix}(2/\pi )\log \left| t \right|, & \alpha =1 \\\end{matrix} \\\end{matrix} \right.$
$sgn (t)=\left\{ \begin{matrix}\begin{matrix} 1, & t>0 \\\end{matrix} \\\begin{matrix} 0, & t=0 \\\end{matrix} \\\begin{matrix} -1, & t<0 \\\end{matrix} \\\end{matrix} \right.$
Note that α is the characteristic exponent, taking values in the interval (0, 2]. The exponent determines the shape of the distribution. Its value is negatively correlated with the tail thickness and the impulsiveness of the distribution [6].
2.2 FLO statistics
(1) FLO moments. The FLO moments refer to the moments that exist for 0<α<2 (i.e. when the second-order moment is absent). For an SαS random variable, the FLO moments can be described by its dispersion coefficient and characteristic exponent. Meanwhile, the covariation of two random variables ξ and η can be expressed as a function of FLO moments:
${{\left[ \xi ,\eta \right]}_{\alpha }}=\frac{E\left( \xi {{\eta }^{<p-1>}} \right)}{E\left( {{\left| \eta \right|}^{p}} \right)}{{\gamma }_{\eta }}$ (2)
where $z^{<p>}$ denotes the signed power $z^{<p>}={{\left| z \right|}^{p-1}}{{z}^{*}}$ (the right superscript * being the complex conjugate), and ${{\gamma }_{\eta }}$ is the dispersion coefficient of the stochastic process η:
$\gamma _{\eta }^{p/\alpha }=\frac{E\left( {{\left| \eta \right|}^{p}} \right)}{C\left( p,\alpha \right)}\begin{matrix}, & 0<p<\alpha \\\end{matrix}$ (3)
$C\left( p,\alpha \right)=\frac{{{2}^{p+1}}\Gamma \left( \frac{p+2}{2} \right)\Gamma \left( -\frac{p}{\alpha } \right)}{\alpha \Gamma \left( \frac{1}{2} \right)\Gamma \left( -\frac{p}{2} \right)}$ (4)
where E(·) is the mathematical expectation; Г(·) is the function gamma:
$\Gamma \left( x \right)=\int_{0}^{\infty }{{{t}^{x-1}}{{e}^{-t}}dt}$ (5)
The covariation coefficients η and ζ can be described as:
${{\lambda }_{\xi \eta }}=\frac{{{\left[ \xi ,\eta \right]}_{\alpha }}}{{{\left[ \eta ,\eta \right]}_{\alpha }}}=\frac{E\left( \xi {{\eta }^{<p-1>}} \right)}{E\left( {{\left| \eta \right|}^{p}} \right)}$ (6)
If ξ and η are real, then 0.5<p<α; if ξ and η are complex, then 0<p<α.
In engineering application, it is customary to take the improved FLO moments as the estimators of covariation coefficients:
${{\overset{\wedge }{\mathop{\lambda }}\,}_{\xi \eta }}\left( p \right)=\frac{\sum\limits_{i=1}^{N}{{{\xi }_{i}}\eta _{i}^{<p-1>}}}{\sum\limits_{i=1}^{N}{{{\left| {{\eta }_{i}} \right|}^{p}}}}$ (7)
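As a quick sanity check of Eq. (7) (an editorial sketch; for real-valued data the signed power is taken as $y^{<a>}=\operatorname{sign}(y){{\left| y \right|}^{a}}$): if ξ = cη, the estimator returns exactly c, since the numerator reduces to $c\sum{{\left| {{\eta }_{i}} \right|}^{p}}$.

```python
# Empirical covariation coefficient estimator, Eq. (7), for real-valued samples.
# Signed power convention for real data: y^<a> = sign(y) * |y|**a.
import random

def spow(y, a):
    return (1 if y >= 0 else -1) * abs(y) ** a

def cov_coeff(xi, eta, p):
    num = sum(x * spow(e, p - 1) for x, e in zip(xi, eta))
    den = sum(abs(e) ** p for e in eta)
    return num / den

random.seed(1)
eta = [random.gauss(0, 1) for _ in range(5000)]
xi = [3.0 * e for e in eta]           # xi = 3 * eta exactly
lam = cov_coeff(xi, eta, p=1.2)       # any 0.5 < p < alpha works here
```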
(2) Negative moment. Let X be an SαS random variable with location parameter δ=0 and dispersion coefficient γ. Then, the negative moment can be expressed as:
$\begin{matrix}E\left( {{\left| X \right|}^{p}} \right)=C\left( p,\alpha \right){{\gamma }^{p/\alpha }}, & -1<p<\alpha \\\end{matrix}$ (8)
(3) Covariation. The concept of covariation was presented by Miller in 1978 [7]. If 0<α≤2, then the covariation of two random variables, X and Y, with joint stable distribution relations, can be defined as:
${{\left[ X,Y \right]}_{\alpha }}=\int\limits_{S}{x{{y}^{<\alpha -1>}}m\left( ds \right)}$ (9)
where S is the unit circle and m(·) is the spectral measure of the SαS distribution. The covariation coefficient is then defined as:
${{\lambda }_{x,y}}=\frac{{{\left[ X,Y \right]}_{\alpha }}}{{{\left[ Y,Y \right]}_{\alpha }}}$ (10)
Let γy be the dispersion coefficient of Y. For 1<α≤2, the covariation of jointly SαS random variables X and Y satisfies:
${{\left[ Y,Y \right]}_{\alpha }}=\left\| Y \right\|_{\alpha }^{\alpha }={{\gamma }_{y}}$ (11)
${{\lambda }_{XY}}=\frac{E\left( X{{Y}^{<p-1>}} \right)}{E\left( {{\left| Y \right|}^{p}} \right)},\quad 1\le p<\alpha$ (12)
${{\left[ X,Y \right]}_{\alpha }}=\frac{E\left( X{{Y}^{<p-1>}} \right)}{E\left( {{\left| Y \right|}^{p}} \right)}{{\gamma }_{y}},\quad 1\le p<\alpha$ (13)
(4) FLO covariance. The covariation above applies only to the range 1<α≤2 and is not defined for α≤1 in the SαS distribution. Reference [8] proposes a more general FLO statistic that applies to the entire value range of α. For 0<α≤2, the FLO covariance of jointly SαS random variables X and Y is defined as:
$FLOC(X,Y)=E({{X}^{<A>}}{{Y}^{<B>}}),\quad 0\le A<\frac{\alpha }{2},\ 0\le B<\frac{\alpha }{2}$ (14)
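A direct sample version of Eq. (14) for real data can be sketched as follows (again with our own naming; the signed power reduces to |y|^A·sign(y) in the real case, and A, B should satisfy 0 ≤ A, B < α/2):

```python
import math

def spow(y, p):
    """Signed power y^<p> = |y|^p * sign(y) for real y (0 when y == 0)."""
    return math.copysign(abs(y) ** p, y) if y != 0 else 0.0

def floc(x, y, A, B):
    """Sample FLO covariance of Eq. (14): mean of x_i^<A> * y_i^<B>."""
    return sum(spow(u, A) * spow(v, B) for u, v in zip(x, y)) / len(x)
```

With A = B = 1 the statistic collapses to the ordinary sample cross-moment, which makes it easy to verify by hand on small inputs.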
3. FLO Bi-spectrum
3.1 Definition of FLO bi-spectrum
The statistical moments of a signal reveal much about its features, and moment spectra extend from low order up to arbitrarily high order [9]. Traditional signal processing explored second-order moments only. However, methods relying solely on the variance or other second-order statistics may suffer from poor performance and large errors. Recent years have seen the emergence of signal processing techniques based on higher-order statistics, especially third- or fourth-order statistics. These techniques utilize not only second- and higher-order statistics, but also fractional-order statistics below the second order, i.e. the FLO statistics [10].
Both theoretical and empirical analyses show that FLO statistics are well suited to processing signals and noises with impulsive features. However, FLO statistics also have two prominent defects [11]. First, there is no universal framework against algebraic smearing. Second, the order p of the moments requires a priori knowledge of the characteristic exponent α, because p is generally limited to (0, α); if p≥α, the FLO statistics no longer work. As a result, the traditional α-stable framework contains neither a bi-spectrum nor a tri-spectrum.
Building on the traditional bi-spectrum, this paper proposes the FLO bi-spectrum, develops nonparametric bi-spectrum estimation methods for FLO noise environments, and verifies their performance through comparative experiments. The FLO bi-spectrum is defined as:
${{B}_{x}}({{\omega }_{1}},{{\omega }_{2}})=\sum\limits_{{{\tau }_{1}}=-\infty }^{+\infty }{\sum\limits_{{{\tau }_{2}}=-\infty }^{+\infty }{{{C}_{3x}}({{\tau }_{1}},{{\tau }_{2}})\exp [-j(}}{{\omega }_{1}}{{\tau }_{1}}+{{\omega }_{2}}{{\tau }_{2}})]$ (15)
Accordingly, the FLO third-order cumulant is redefined as:
${{C}_{3x}}(m,n)={{\gamma }_{3e}}\sum\limits_{i=0}^{\infty }{{{[x(i)]}^{\langle A\rangle }}{{[x(i+m)]}^{\langle B\rangle }}[x(i+n)}{{]}^{\langle C\rangle }}$ (16)
where x(i) is the signal sequence; γ3e is an adjustment coefficient; and 0<A+B+C≤α (0<α<2).
3.2 FLO bi-spectrum estimation with nonparametric direct estimation
Nonparametric direct estimation of the FLO bi-spectrum starts from the nonlinear transformation of the data x(n):
$g(x(t),A)={{(x(t))}^{<A>}},0\le A\le \alpha /3$ (17)
After the transformation, the third-order statistics of the stochastic process x(n) remain unchanged. Next, the transformed data are divided into K segments of M samples each, i.e. N=KM, with overlap allowed between adjacent segments, and the sample mean of each segment is removed [12]. Then, the discrete Fourier transform (DFT) coefficients are calculated as:
${{X}^{(k)}}(\lambda )=\frac{1}{M}\sum\limits_{i=1}^{M}{{{x}^{(k)}}(n){{e}^{-j2\pi n\lambda /M}}}$ (18)
where λ=0, 1, ..., M/2; k=1, ..., K. According to the DFT coefficients, the FLO bi-spectrum estimation of each segment can be obtained [13]:
$\overset{\wedge }{\mathop{b}}\,_{x}^{k}({{\lambda }_{1}},{{\lambda }_{2}})=\frac{1}{\Delta _{0}^{2}}\sum\limits_{{{i}_{1}}=-{{L}_{1}}}^{{{L}_{1}}}{\sum\limits_{{{i}_{2}}=-{{L}_{1}}}^{{{L}_{1}}}{{{\overset{\wedge }{\mathop{X}}\,}^{k}}({{\lambda }_{1}}+{{i}_{1}}){{\overset{\wedge }{\mathop{X}}\,}^{k}}({{\lambda }_{2}}+{{i}_{2}}){{\overset{\wedge }{\mathop{X}}\,}^{k*}}({{\lambda }_{1}}+{{i}_{1}}+{{\lambda }_{2}}+{{i}_{2}})}}$ (19)
where k=1, …, K; 0≤λ2≤λ1; λ1+λ2≤fs⁄2; Δ0=fs⁄N0 (N0 and L1 satisfy M=(2L1+1)N0). The bi-spectrum estimate of the given data x(0), x(1), …, x(N-1) is the mean of the K segment estimates:
${{\overset{\wedge }{\mathop{B}}\,}_{x}}({{\omega }_{1}},{{\omega }_{2}})=\frac{1}{K}\sum\limits_{k=1}^{K}{\overset{\wedge }{\mathop{b}}\,_{x}^{k}({{\omega }_{1}},{{\omega }_{2}})}$ (20)
where ${{\omega }_{1}}=\frac{2\pi {{f}_{s}}}{{{N}_{0}}}{{\lambda }_{1}}$; ${{\omega }_{2}}=\frac{2\pi {{f}_{s}}}{{{N}_{0}}}{{\lambda }_{2}}$.
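The direct procedure of Eqs. (18)–(20) can be sketched in plain Python as follows. This is a simplified illustration rather than the paper's code: it takes the no-smoothing case L1 = 0 (so the 1/Δ0² frequency-smoothing factor drops out), and it assumes the transformation of Eq. (17) has already been applied to the data; all names are ours:

```python
import cmath

def dft(seg):
    """DFT coefficients of one segment, Eq. (18) (naive O(M^2) version)."""
    M = len(seg)
    return [sum(seg[n] * cmath.exp(-2j * cmath.pi * n * lam / M)
                for n in range(M)) / M
            for lam in range(M)]

def direct_bispectrum(x, K, M, lam1, lam2):
    """Average of the per-segment products X(l1) X(l2) X*(l1+l2)
    over K segments of length M (Eqs. (19)-(20) with L1 = 0)."""
    acc = 0j
    for k in range(K):
        X = dft(x[k * M:(k + 1) * M])
        acc += X[lam1] * X[lam2] * X[(lam1 + lam2) % M].conjugate()
    return acc / K
```

A quick check: for a constant (DC) signal the only nonzero DFT coefficient is X(0) = 1, so the estimate at (0, 0) equals 1.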
3.3 FLO bi-spectrum estimation with nonparametric indirect estimation
Assume the observed data {x(i)} (i=1,2,⋯,N) form a real random sequence. The third-order cumulants of {x(i)} are estimated first and then subjected to the DFT, which completes the bi-spectrum estimation. The procedure is as follows:
(1) Divide the observed data {x(i)} of length N into K segments of M points each, N=KM; alternatively, divide the data so that adjacent segments overlap by half, 2N=KM;
(2) Remove the mean value of each segment, so that the data to be analyzed have zero mean;
(3) Let {xj(i)} (i=1,2,⋯,M; j=1,2,⋯,K) be the j-th segment, and estimate its lower-order third-order cumulant:
$\overset{\wedge }{\mathop{C}}\,_{3x}^{(j)}(m,n)=\frac{1}{M}\sum\limits_{i={{k}_{1}}}^{{{k}_{2}}}{{{[{{x}^{(j)}}(i)]}^{\langle A\rangle }}{{[{{x}^{(j)}}(i+m)]}^{\langle B\rangle }}[{{x}^{(j)}}(i+n)}{{]}^{\langle C\rangle }}$ (21)
where 0<A+B+C≤α; k1=max{0, -m, -n}; k2=min{M, M-n, M-m}.
(4) Compute the statistical mean of $\hat{C}_{3x}^{(j)}(m,n)$ over the K segments to obtain the overall cumulant estimate:
${{\overset{\wedge }{\mathop{C}}\,}_{3x}}(m,n)=\frac{1}{K}\sum\limits_{j=1}^{K}{\overset{\wedge }{\mathop{C}}\,_{3x}^{(j)}}(m,n)$ (22)
(5) Apply the Fourier transform to the estimated third-order cumulants to obtain the FLO bi-spectrum estimate:
${{\overset{\wedge }{\mathop{B}}\,}_{x}}({{\omega }_{1}},{{\omega }_{2}})=\sum\limits_{m=-L}^{L}{\sum\limits_{n=-L}^{L}{{{\overset{\wedge }{\mathop{C}}\,}_{3x}}(m,n)\exp \{j(}}{{\omega }_{1}}m+{{\omega }_{2}}n)\}$ (23)
where L<M-1. During the estimation of the bi-spectrum $\hat{B}_{x}(\omega_1,\omega_2)$, a 2D window function θ(m,n) can be adopted:
${{\overset{\wedge }{\mathop{B}}\,}_{x}}({{\omega }_{1}},{{\omega }_{2}})=\sum\limits_{m=-L}^{L}{\sum\limits_{n=-L}^{L}{{{\overset{\wedge }{\mathop{C}}\,}_{3x}}(m,n)\theta (m,n)\exp \{j(}}{{\omega }_{1}}m+{{\omega }_{2}}n)\}$ (24)
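The indirect procedure, steps (1)–(5), can be sketched as follows for real data. This illustration uses our own naming, the rectangular window θ(m,n) ≡ 1, and a naive double loop for the 2D transform of Eq. (24); the exponents A, B, C must satisfy 0<A+B+C≤α:

```python
import cmath
import math

def spow(y, p):
    """Signed power y^<p> = |y|^p * sign(y) for real y (0 when y == 0)."""
    return math.copysign(abs(y) ** p, y) if y != 0 else 0.0

def c3_segment(seg, m, n, A=1.0, B=1.0, C=1.0):
    """FLO third-order cumulant of one (zero-mean) segment, Eq. (21),
    with k1 = max(0, -m, -n) and k2 = min(M, M-m, M-n)."""
    M = len(seg)
    k1 = max(0, -m, -n)
    k2 = min(M, M - n, M - m)
    return sum(spow(seg[i], A) * spow(seg[i + m], B) * spow(seg[i + n], C)
               for i in range(k1, k2)) / M

def indirect_bispectrum(segments, w1, w2, L, A=1.0, B=1.0, C=1.0):
    """Average Eq. (22) over segments, then transform as in Eq. (24)
    with theta(m, n) = 1 over |m|, |n| <= L."""
    K = len(segments)
    acc = 0j
    for m in range(-L, L + 1):
        for n in range(-L, L + 1):
            c = sum(c3_segment(s, m, n, A, B, C) for s in segments) / K
            acc += c * cmath.exp(1j * (w1 * m + w2 * n))
    return acc
```

With A = B = C = 1 the cumulant reduces to the classical third-order moment, which can be checked by hand on an all-ones segment.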
4. Parameter Determination for AR Model
If a random signal is viewed, under normal conditions, as the output of a physical network excited by white noise ω(n) [14], then the p-order AR model can be established as:
$x(n)=\sum\limits_{k=1}^{p}{{{a}_{k}}x}(n-k)+\omega (n)$ (25)
If a z transformation is performed, the transfer function of the AR model can be expressed as:
$H(z)=\frac{B(z)}{A(z)}=\frac{1}{1+\sum\limits_{k=1}^{p}{{{a}_{k}}{{z}^{-k}}}}$ (26)
where H(z) is of all-pole type, i.e. it only has poles [15]. The power spectral density of the AR model can be described as:
${{P}_{xx}}(\omega )=\frac{\sigma _{\omega }^{2}}{{{\left| 1+\sum\limits_{k=1}^{p}{{{a}_{k}}{{e}^{-j\omega k}}} \right|}^{2}}}$ (27)
where $\sigma _{\omega }^{2}$ is the power spectral density of the white noise.
The parameters of the AR model can be determined in three ways, namely via correlation functions, reflection coefficients, or the normal equations. The last approach is adopted in our research.
Assuming that {x(i)} is a stochastic process satisfying the difference equations of the AR model, we have:
$\sum\limits_{l=0}^{p}{{{a}_{l}}x(i-l)=\sum\limits_{{{l}^{'}}=0}^{q}{{{b}_{{{l}^{'}}}}e(i-{{l}^{'}})}}$ (28)
where ${{a}_{l}}$ and ${{b}_{{{l}^{'}}}}$ are the model parameters. Then, the following three conditions must be satisfied:
(1) e(i) is a zero-mean, stationary, identically distributed, independent sequence, and at least one K-th order (K>2) cumulant is nonzero.
(2) The model is causal with minimum phase, there is no pole-zero cancellation, and the system is stable [16]; the transfer function can be expressed as:
$H(z)=\frac{B(z)}{A(z)}=\frac{1+b(1){{z}^{-1}}+\text{ }\cdots +b(q){{z}^{-q}}}{1+a(1){{z}^{-1}}+\cdots +a(p){{z}^{-p}}}=\sum\limits_{i=0}^{\infty }{h(i){{z}^{-i}}}$ (29)
(3) Additive noise n(i) may be present in the observation of x(i):
$y(i)=x(i)+n(i)$ (30)
where n(i) is FLO noise with a symmetric stable distribution, independent of x(i).
Under these conditions, the relation between the output cumulants and the impulse response of the system can be expressed as:
${{C}_{kx}}(m,n)={{\gamma }_{ke}}\sum\limits_{i=0}^{\infty }{{{h}^{k-2}}(i)h(i+m)h(i+n)}$ (31)
From the definition of the impulse response, we have:
$\sum\limits_{l=0}^{p}{{{a}_{l}}h(i-l)=\sum\limits_{{{l}^{'}}=0}^{q}{{{b}_{{{l}^{'}}}}\delta (i-{{l}^{'}})={{b}_{i}}}}$ (32)
Combining (28)–(32), we have:
$\sum\limits_{l=0}^{p}{{{a}_{l}}{{C}_{kx}}(m-l,n)}={{\gamma }_{ke}}\sum\limits_{i=0}^{\infty }{{{h}^{k-2}}(i)h(i+n)b(i+m)}$ (33)
Since b(i+m)=0 for m>q and i≥0, the normal equation of the third-order cumulants becomes:
$\sum\limits_{l=0}^{p}{{{a}_{l}}{{C}_{kx}}(m-l,n)}=0$ (34)
The parameters of the AR model can be determined by solving this system of equations.
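For illustration, the normal equations (34) with a_0 = 1 reduce to a p×p linear system: Σ_{l=1}^{p} a_l C(m-l, n) = -C(m, n) for m = q+1, …, q+p at a fixed lag n. The stdlib sketch below (our own naming; `cum` is any callable returning a third-order cumulant estimate, e.g. a wrapper around Eq. (36)) solves it by Gaussian elimination:

```python
def solve_ar_from_cumulants(cum, p, q=0, n=1):
    """Solve the normal equations (34) for a_1..a_p, given a cumulant
    estimate cum(m, n) and the convention a_0 = 1."""
    # Build the p x p system A a = b from m = q+1 .. q+p.
    A = [[cum(q + 1 + r - l, n) for l in range(1, p + 1)] for r in range(p)]
    b = [-cum(q + 1 + r, n) for r in range(p)]
    # Plain Gaussian elimination with partial pivoting.
    for col in range(p):
        piv = max(range(col, p), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, p):
            f = A[r][col] / A[col][col]
            for c in range(col, p):
                A[r][c] -= f * A[col][c]
            b[r] -= f * b[col]
    a = [0.0] * p
    for r in range(p - 1, -1, -1):
        s = sum(A[r][c] * a[c] for c in range(r + 1, p))
        a[r] = (b[r] - s) / A[r][r]
    return a
```

As a check: for an AR(1) model with impulse response h(i) = r^i, Eq. (31) gives C(m, n) proportional to r^{m+n}, so the solver should recover a_1 = -r.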
5. Implementation of AR-based Bi-spectrum Estimation
The AR-based bi-spectrum estimation can be implemented in four steps:
Step 1: Divide x(i) into K segments of length M, N=KM. The third-order cumulant of each segment can be expressed as:
$C_{3x}^{(j)}(m,n)=\frac{1}{M}\sum\limits_{i={{k}_{1}}}^{{{k}_{2}}}{{{[{{x}^{(j)}}(i)]}^{\langle A\rangle }}{{[{{x}^{(j)}}(i+m)]}^{\langle B\rangle }}[{{x}^{(j)}}(i+n)}{{]}^{\langle C\rangle }}$ (35)
where k1=max{0,-m,-n}; k2=min{M,M-n, M-m}.
Step 2: Calculate the mean over the K segments to estimate the third-order cumulants [17]:
${{\overset{\wedge }{\mathop{C}}\,}_{3x}}(m,n)=\frac{1}{K}\sum\limits_{j=1}^{K}{\overset{\wedge }{\mathop{C}}\,_{3x}^{(j)}}(m,n)$ (36)
Step 3: Determine the parameters of the AR model, ${{a}_{l}}$ (l=1, 2, …, p);
Step 4: Estimate the bi-spectrum using the parameters determined in Step 3:
$\left\{ \begin{align} & \overset{\wedge }{\mathop{B}}\,({{\omega }_{1}},{{\omega }_{2}})={{\overset{\wedge }{\mathop{\gamma }}\,}_{3e}}\overset{\wedge }{\mathop{H}}\,({{\omega }_{1}})\overset{\wedge }{\mathop{H}}\,({{\omega }_{2}}){{\overset{\wedge }{\mathop{H}}\,}^{*}}({{\omega }_{1}}+{{\omega }_{2}}) \\ & \overset{\wedge }{\mathop{H}}\,(\omega )={{[1+\sum\limits_{l=1}^{p}{\overset{\wedge }{\mathop{{{a}_{l}}}}\,}\exp (-j\omega l)]}^{-1}},\left| \omega \right|\le \pi \\ & \overset{\wedge }{\mathop{{{\gamma }_{3e}}}}\,=E\left[ {{e}^{3}}(i) \right] \\\end{align} \right.$ (37)
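Step 4 amounts to evaluating the all-pole transfer function at three frequencies and multiplying the values, as in Eq. (37). A minimal sketch (our own naming; the sum index in H is the lag l):

```python
import cmath

def H(omega, a):
    """All-pole transfer function: 1 / (1 + sum_l a_l exp(-j*omega*l))."""
    return 1.0 / (1.0 + sum(al * cmath.exp(-1j * omega * l)
                            for l, al in enumerate(a, start=1)))

def ar_bispectrum(w1, w2, a, gamma3e=1.0):
    """AR-model bispectrum B(w1, w2) = gamma3e H(w1) H(w2) H*(w1+w2)."""
    return gamma3e * H(w1, a) * H(w2, a) * H(w1 + w2, a).conjugate()
```

With a = [-0.5] and ω1 = ω2 = 0, H(0) = 2 and the bispectrum equals 8, which makes a convenient hand check.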
The proposed method was verified through comparative tests on the following sequence:
$x(n)=s(n)+v(n)$ (38)
where n is an integer in the interval [0, N-1], with N being the sequence length; s(n)=cos(2πf1n)+cos(2πf2n), with f1=0.2 and f2=0.4; v(n) is symmetrically distributed α-stable noise.
(a) Contour map of FLO bi-spectrum direct estimation method
(b) Contour map of FLO bi-spectrum indirect estimation method
(c) Contour map of FLO bi-spectrum estimation based on the AR model
Figure 1. Contour maps of the three methods
(a) 3D graph of FLO bi-spectrum direct estimation method
(b) 3D graph of FLO bi-spectrum indirect estimation method
(c) 3D graph of FLO bi-spectrum estimation based on the AR model
Figure 2. 3D graphs of the three methods
During the tests, the mixed noise ratio was set to -20 dB, and the characteristic exponent α to 1.5. The proposed method, the traditional nonparametric direct estimation method and the nonparametric indirect estimation method were all applied. The test results are displayed in Figures 1 and 2. As shown there, the proposed method outperformed the traditional methods in signal identification and background noise suppression. In addition, our method preserved the amplitude and phase information of the signal well and achieved the best spectral flatness, laying the basis for an optimal whitening effect.
In practice, most background noises are of the FLO type and cannot be suppressed well by traditional methods. In this paper, a FLO bi-spectrum estimation method based on the AR model is proposed, and a new definition is given for the FLO third-order cumulant. The determination of the AR model parameters is discussed, and the implementation of the AR-based bi-spectrum estimation is described. A series of tests verified the correctness of the method: it outperformed the traditional approaches in suppressing FLO noise and identifying the relevant signal information.
This work is supported by the Research Foundation of health department of Jiangxi Province (Grant No.20183016).
[1] Zha DF, Gao XY. (2006). Adaptive mixed norm filtering algorithm based on SαSG noise model. Journal on Communications 27(7): 1-6. https://doi.org/10.1016/j.dsp.2006.01.002
[2] Long JB, Wang HB. (2016). Parameter estimation and time-frequency distribution of fractional lower order time-frequency auto-regressive moving average model algorithm based on SaS process. Journal of Electronics & Information Technology 38(7): 1710-1716. http://dx.doi.org/10.11999/JEIT151066
[3] Spiridonakos MD, Fassois SD. (2014). Non-stationary random vibration modelling and analysis via functional series time-dependent ARMA (FS-TARMA) models – A critical survey. Mechanical Systems & Signal Processing 47(1): 175-224. https://doi.org/10.1016/j.ymssp.2013.06.024
[4] Madhavan N, Vinod AP, Madhukumar AS, Krishna AK. (2013). Spectrum sensing and modulation classification for cognitive radios using cumulants based on fractional lower order statistics. AEUE - International Journal of Electronics and Communications 67(6): 479-490. http://dx.doi.org/10.1016/j.aeue.2012.11.004
[5] Wang SY, Zhu XB, Li XT, Wang YL. (2007). A -spectrum estimation for AR SaS processes based on FLOC. Acta Electronica Sinica 35(9): 1637-1641. http://dx.chinadoi.cn/10.3321/j.issn:0372-2112.2007.09.004
[6] Qiu TS, Wang HY, Sun YM. (2004). A fractional lower-order covariance based adaptive latency change detection for evoked potentials. Acta Electronica Sinica 32(1): 91-95. http://dx.chinadoi.cn/10.3321/j.issn:0372-2112.2004.01.022
[7] Sun YM, Qiu TS, Wang FZ. (2003). Time delay estimation method of Non-stationary signals based on fractional lower spectrogram. Journal of Dalian Jiaotong University 37(3): 112-116.
[8] Liu JQ, Feng DZ. (2003). Blind sources separation in impulse noise. Journal of Electronics and Information Technology 31(12): 1921-1923.
[9] Feng Z, Liang M. (2013). Recent advances in time–frequency analysis methods for machinery fault diagnosis: A review with application examples. Mechanical Systems & Signal Processing 38(1): 165-205. https://doi.org/10.1016/j.ymssp.2013.01.017
[10] Liu GH, Zhang JJ. (2017). Fractional lower order cyclic spectrum analysis of digital frequency shift keying signals under the alpha stable distribution noise. Chinese Journal of Radio Science 32(1): 65-72. http://dx.chinadoi.cn/10.13443/j.cjors.2017011001
[11] Zha DF, Qiu TS. (2005). Generalized whitening method based on fractional order spectrum in the frequency domain. Journal of China Institute of Communications 26(5): 24-30.
[12] Shao M, Nikias CL. (1993). Signal processing with fractional lower order moments: stable processes and their applications. Proceedings of the IEEE 81(7): 986-1010. http://dx.doi.org/10.1109/5.231338
[13] Miller G. (1978). Properties of symmetric stable distribution. Journal of Multivariate Analysis 8: 346-360.
[14] Xu B, Cui Y, Zhou UY. (2014). Unsupervised speckle level estimation of SAR images using texture analysis and AR model. IEICE Transactions on Communications 97(3): 691-698.
[15] Zhao N, Richard YF, Sun HJ. (2015). Interference alignment with delayed channel state information and dynamic AR model channel prediction in wireless networks. Wireless Net-works 21(4): 1227-1242. https://doi.org/10.1007/s11276-014-0850-7
[16] Cambanis S, Miller G. (1981). Linear problems in pth order and stable processes. SIAM Journal on Applied Mathematics 41(1): 43-69.
[17] Jiang YJ, Liu GQ, Wang TJ. (2017). Application of atomic decomposition algorithm based on sparse representation in AR model parameters estimation. Computer Science 44(5): 42-47.
June 2018, 23(4): 1675-1688. doi: 10.3934/dcdsb.2018069
Boundedness and global solvability to a chemotaxis-haptotaxis model with slow and fast diffusion
Chunhua Jin ,
School of Mathematical Sciences, South China Normal University, Guangzhou, 510631, China
* Corresponding author: Chunhua Jin
Received June 2017 Revised August 2017 Published June 2018 Early access January 2018
Fund Project: The author is supported by NSFC(11471127), Guangdong Natural Science Funds for Distinguished Young Scholar (2015A030306029).
In this paper, we deal with the following coupled chemotaxis-haptotaxis system modeling cancer invasion with nonlinear diffusion,
$\left\{ \begin{array}{l}{u_t} = \Delta {u^m} - \chi \nabla \cdot \left( {u \cdot \nabla v} \right) - \xi \nabla \cdot \left( {u \cdot \nabla w} \right) + \mu u\left( {1 - u - w} \right),{\rm{in}}\;\Omega \times {{\mathbb{R}}^ + },\\{v_t} - \Delta v + v = u,\;{\rm{in}}\;\Omega \times {{\mathbb{R}}^ + },\\{w_t} = - vw,\;\;{\rm{in}}\;\Omega \times {{\mathbb{R}}^ + },\end{array} \right.$
where $\Omega\subset\mathbb R^N$ ($N\ge 3$) is a bounded domain. Under zero-flux boundary conditions, we show that for any $m>0$ the problem admits a global bounded weak solution for any large initial datum if $\frac{\chi}{\mu}$ is appropriately small. The slow diffusion case ($m>1$) of this problem has been studied by many authors [14,7,19,23], in which the boundedness and the global-in-time solution are established for $m>\frac{2N}{N+2}$, but the case $m\le \frac{2N}{N+2}$ remained open.
Keywords: Chemotaxis-haptotaxis, slow diffusion, fast diffusion, global weak solution, boundedness.
Mathematics Subject Classification: Primary: 92C17, 35B65; Secondary: 35M10.
Citation: Chunhua Jin. Boundedness and global solvability to a chemotaxis-haptotaxis model with slow and fast diffusion. Discrete & Continuous Dynamical Systems - B, 2018, 23 (4) : 1675-1688. doi: 10.3934/dcdsb.2018069
X. Cao, Boundedness in a three-dimensional chemotaxis-haptotaxis model, Z. Angew. Math. Phys. , 67 (2016), Art. 11, 13 pp. Google Scholar
M. A. J. Chaplain and G. Lolas, Mathematical modelling of cancer invasion of tissue: Dynamic heterogeneity, Networks and Heterogeneous Media, 1 (2006), 399-439. doi: 10.3934/nhm.2006.1.399. Google Scholar
T. Cieslak and C. Stinner, Finite-time blowup and global-in-time unbounded solutions to a parabolic-parabolic quasilinear Keller-Segel system in higher dimensions, J. Differential Equations, 252 (2012), 5832-5851. doi: 10.1016/j.jde.2012.01.045. Google Scholar
D. Horstmann and G. Wang, Blow-up in a chemotaxis model without symmetry assumptions, European J. Appl. Math., 12 (2001), 159-177. Google Scholar
C. Jin, Boundedness and global solvability to a chemotaxis model with nonlinear diffusion, J. Differential Equations, 263 (2017), 5759-5772. doi: 10.1016/j.jde.2017.06.034. Google Scholar
E. Keller and A. Segel, Initiation of slime mold aggregation viewed as an instability, J. Theoret. Biol., 26 (1970), 399-415. Available from: https://www.researchgate.net/publication/17711401_Initiation_of_Slime_Mold_Aggregation_Viewed_as_an_Instability. doi: 10.1016/0022-5193(70)90092-5. Google Scholar
Y. Li and J. Lankeit, Boundedness in a chemotaxis-haptotaxis model with nonlinear diffusion, Nonlinearity, 29 (2016), 1564-1595. doi: 10.1088/0951-7715/29/5/1564. Google Scholar
L. Nirenberg, On elliptic partial differential equations, Ann. Scuola Norm. Sup. Pisa, 13 (1959), 115-162. Google Scholar
Y. Sugiyama, Time global existence and asymptotic behavior of solutions to degenerate quasilinear parabolic systems of chemotaxis, Differential Integral Equations, 20 (2007), 133-180. Google Scholar
Z. Szymanska, C. Morales-Rodrigo, M. Lachowicz and M. Chaplain, Mathematical modelling of cancer invasion of tissue: The role and effect of nonlocal interactions, Math. Models Methods Appl. Sci., 19 (2009), 257-281. doi: 10.1142/S0218202509003425. Google Scholar
T. Senba and T. Suzuki, Parabolic system of chemotaxis: Blowup in a finite and the infinite time, Methods Appl. Anal., 8 (2001), 349-367. doi: 10.4310/MAA.2001.v8.n2.a9. Google Scholar
C. Stinner, C. Surulescu and M. Winkler, Global weak solutions in a PDE-ODE system modeling multiscale cancer cell invasion, SIAM J. Math. Anal., 46 (2014), 1969-2007. doi: 10.1137/13094058X. Google Scholar
Y. Tao, Boundedness in a two-dimensional chemotaxis-haptotaxis system, arXiv: 1407.7382v1. Google Scholar
Y. Tao and M. Winkler, A chemotaxis-haptotaxis model: The roles of nonlinear diffusion and logistic source, SIAM J. Math. Anal., 43 (2011), 685-704. doi: 10.1137/100802943. Google Scholar
Y. Tao and M. Winkler, Global existence and boundedness in a Keller-Segel-Stokes model with arbitrary porous medium diffusion, Discrete Contin. Dyn. Syst., 32 (2012), 1901-1914. doi: 10.3934/dcds.2012.32.1901. Google Scholar
Y. Tao and M. Winkler, Large time behavior in a multidimensional chemotaxis-haptotaxis model with slow signal diffusion, SIAM J. Math. Anal., 47 (2015), 4229-4250. doi: 10.1137/15M1014115. Google Scholar
J. Tello and M. Winkler, A chemotaxis system with logistic source, Comm. Partial Differential Equations, 32 (2007), 849-877. doi: 10.1080/03605300701319003. Google Scholar
M. Winkler, Finite-time blow-up in the higher-dimensional parabolic-parabolic Keller-Segel system, J. Math. Pures Appl., 100 (2013), 748-767. doi: 10.1016/j.matpur.2013.01.020. Google Scholar
Y. Wang, Boundedness in the higher-dimensional chemotaxis-haptotaxis model with nonlinear diffusion, J. Differential Equations, 260 (2016), 1975-1989. doi: 10.1016/j.jde.2015.09.051. Google Scholar
M. Winkler, Aggregation vs. global diffusive behavior in the higher-dimensional Keller-Segel model, J. Differential Equations, 248 (2010), 2889-2905. doi: 10.1016/j.jde.2010.02.008. Google Scholar
M. Winkler, Chemotaxis with logistic source: Very weak global solutions and their boundedness properties, J.Math. Anal. Appl., 48 (2008), 708-729. doi: 10.1016/j.jmaa.2008.07.071. Google Scholar
M. Winkler, Boundedness in the higher-dimensional parabolic-parabolic chemotaxis system with logistic source, Comm. Partial Differential Equations, 35 (2010), 1516-1537. doi: 10.1080/03605300903473426. Google Scholar
J. Zheng, Boundedness of solutions to a quasilinear higher-dimensional chemotaxis-haptotaxis model with nonlinear diffusion, Discrete Contin. Dyn. Syst., 37 (2017), 627-643. Google Scholar
Search Results: 1 - 10 of 24 matches for " Abliz Ablimit "
Status of Asiatic Wild Cat and its habitat in Xinjiang Tarim Basin, China [PDF]
Ablimit Abdukadir, Babar Khan
Open Journal of Ecology (OJE) , 2013, DOI: 10.4236/oje.2013.38063
The Asiatic Wild Cat Felis silvestris ornata was regarded first as a "Least Concern" (LC) species, then as "Vulnerable" (VU), "Endangered" (EN) and finally "Critically Endangered" (CR); its distribution is originally concentrated in the Tarim Basin region of Xinjiang in northwest China. This paper provides comprehensive information on bio-morphology, habitat selectivity, environmental conditions, habits, prey/feed sources and their composition, and the relationship between the wild cat and the domestic cat, based on dedicated investigations in 2004-2006 and 2011-2013. The paper also presents statistics on wild cat pelt collection by the national trade from three prefectures over the last 40 years. Briefly, the results indicate that large-scale, continuous and unplanned opening of land for cotton, exploitation of petroleum and natural gas, misuse of water and destruction of desert vegetation, and poaching and killing of prey species have long-term effects on the quality and productivity of the cat's habitat, while the pressure of a growing human population on the fragile desert ecosystem is also clearly evident.
Asymptotic Stability of the Dynamic Solution of an N-Unit Series System with Finite Number of Vacations [PDF]
Abdugeni Osman, Abdukerim Haji, Askar Ablimit
Journal of Applied Mathematics and Physics (JAMP) , 2018, DOI: 10.4236/jamp.2018.611185
Abstract: We investigate an N-unit series system with finite number of vacations. By analyzing the spectral distribution of the system operator and taking into account the irreducibility of the semigroup generated by the system operator we prove that the dynamic solution converges strongly to the steady state solution. Thus we obtain asymptotic stability of the dynamic solution of the system.
Isolation and Identification of Metal Resistance Green Alga
DONG Zhi-Fang,Hasanjan Abdulla,Abliz Ablimit,Tursun Mamat,Gopur Mijit,
植物科学学报 , 2010,
Abstract: A total of 62 green alga strains were isolated from the soils of Nanshan Mountain, Xinjiang. The blot method was used to characterize their metal resistance; the results indicated that XJU-3, XJU-28 and XJU-36 are resistant to 0.1 mmol·L~(-1) Co~(2+); XJU-28 is resistant to 1 mmol·L~(-1) Zn~(2+) and Fe~(3+); and XJU-36 is resistant to 0.05 mmol·L~(-1) Cu~(2+). Taxonomic evaluation of the three strains was carried out based on morphology and the internal transcribed spacer (ITS) regions (including the 5.8S). Based on morphological characteristics, the three strains are likely to belong to Chlamydomonas. Phylogenetic reconstruction with the Neighbor-joining (NJ) method using ITS (including the 5.8S) sequences indicated that XJU-3 and XJU-28 are close to Chlamydomonas zebra, and XJU-36 is close to Chlamydomonas petasua.
Formation of Binary Millisecond Pulsars by Accretion-Induced Collapse of White Dwarfs under Wind-Driven Evolution
Iminhaji Ablimit,Xiang-Dong Li
Physics , 2014, DOI: 10.1088/0004-637X/800/2/98
Abstract: Accretion-induced collapse of massive white dwarfs (WDs) has been proposed to be an important channel to form binary millisecond pulsars (MSPs). Recent investigations on thermal timescale mass transfer in WD binaries demonstrate that the resultant MSPs are likely to have relatively wide orbit periods ($\gtrsim 10$ days). Here we calculate the evolution of WD binaries taking into account the excited wind from the companion star induced by X-ray irradiation of the accreting WD, which may drive rapid mass transfer even when the companion star is less massive than the WD. This scenario can naturally explain the formation of the strong-field neutron star in the low-mass X-ray binary 4U 1822$-$37. After AIC the mass transfer resumes when the companion star refills its Roche lobe, and the neutron star is recycled due to mass accretion. A large fraction of the binaries will evolve to become binary MSPs with a He WD companion, with the orbital periods distributed between $\gtrsim 0.1$ day and $\lesssim 30$ days, while some of them may follow the cataclysmic variable-like evolution towards very short orbits. If we instead assume that the newborn neutron star appears as an MSP and part of its rotational energy is used to ablate its companion star, the binaries may also evolve to be the redback-like systems.
The orbital period evolution of the supersoft X-ray source CAL 87
Abstract: CAL 87 is one of the best known supersoft X-ray sources. However, the measured masses, orbital period and orbital period evolution of CAL 87 cannot be addressed by the standard thermal-timescale mass-transfer model for supersoft X-ray sources. In this work we explore the orbital evolution of CAL 87 with both analytic and numerical methods. We demonstrate that the characteristics mentioned above can be naturally accounted for by the excited-wind-driven mass-transfer model.
Effective Spatial Data Partitioning for Scalable Query Processing
Ablimit Aji,Vo Hoang,Fusheng Wang
Computer Science , 2015,
Abstract: Recently, MapReduce based spatial query systems have emerged as a cost effective and scalable solution to large scale spatial data processing and analytics. MapReduce based systems achieve massive scalability by partitioning the data and running query tasks on those partitions in parallel. Therefore, effective data partitioning is critical for task parallelization and load balancing, and directly affects system performance. However, several pitfalls of spatial data partitioning make this task particularly challenging. First, data skew is very common in spatial applications. To achieve the best query performance, data skew needs to be reduced. Second, spatial partitioning approaches generate boundary objects that cross multiple partitions and add extra query processing overhead. Consequently, boundary objects need to be minimized. Third, the high computational complexity of spatial partitioning algorithms, combined with massive amounts of data, requires an efficient approach to partitioning to achieve overall fast query response. In this paper, we provide a systematic evaluation of multiple spatial partitioning methods with a set of different partitioning strategies, and study their implications on the performance of MapReduce based spatial queries. We also study sampling based partitioning methods and their impact on queries, and propose several MapReduce based high performance spatial partitioning methods. The main objective of our work is to provide comprehensive guidance for optimal spatial data partitioning to support scalable and fast spatial data processing in massively parallel data processing frameworks such as MapReduce. The algorithms developed in this work are open source and can be easily integrated into different high performance spatial data processing systems.
Study on fragmentation behavior of taxoids by tandem mass spectrometry
Abliz Zeper,Qicheng Fang,Xiaoting Liang,Mitsuo Takayama
Chinese Science Bulletin , 2000, DOI: 10.1007/BF02886172
Abstract: We have studied the fragmentation behavior of positive and negative ions of taxol and 6/8/6 type taxoids, the influence of different substituents on fragmentation, and the correlativity between fragmentation patterns and structure by MS/MS technique with different ionization methods such as FAB-MS, ESI-MS, etc. We have also investigated in detail the fragmentation of various molecular-related ions, such as [M+H]+, [M+Na]+ and [M-H] ions, and the formation pathways of characteristic fragment ions. It has been found that there exists some competing reaction between the loss of C-13 side chain and decomposition by loss of acetic acid. In addition, by comparing CID spectra obtained with low- and high-energy collision, it is seen that CID-MS/MS with low-energy collision is more suitable for the study of the structural analysis of small molecules and drug metabolites. The experimental results demonstrate that MS/MS spectra can reflect more effectively the slight difference of structure between the related compounds and their structural characteristic features than MS spectra, and can also provide a beneficial analysis basis for these kinds of compounds.
Optical Waveguide BTX Gas Sensor Based on Yttrium-Doped Lithium Iron Phosphate Thin Film
Patima Nizamidin,Abliz Yimit,Ismayil Nurulla,Kiminori Itoh
ISRN Spectroscopy , 2012, DOI: 10.5402/2012/606317
Abstract: Yttrium-doped LiFePO4 powder was synthesized using the hydrothermal method in one step and was used as a sensing material. An optical waveguide (OWG) sensor based on Yttrium-doped LiFePO4 has been developed by spin coating a thin film of LiFe0.99Y0.01PO4 onto a single-mode Tin-diffused glass optical waveguide. Light was coupled into and out of the glass OWG by a pair of prisms. The guided wave propagates in the waveguide layer and passes through the film as an evanescent wave. The sensing film is stable in air, but when exposed to target gas at room temperature, its optical properties such as transmittance (T) and refractive index changed; thus, the transmitted light intensity changed. The LiFe0.99Y0.01PO4 thin film OWG exhibits a reversible response to xylene gas in the range of 0.1–1000 ppm. When the concentration of BTX gases was lower than 1 ppm, other substances caused little interference with the detection of xylene vapor. Compared to a pure LiFePO4 thin-film OWG, this sensor exhibited higher sensitivity to BTXs. 1. Introduction Benzene, toluene, and xylene (BTX) are volatile organic compounds (VOCs) of great social and environmental significance; they are widely used in industry and can present serious medical, environmental, and explosion dangers [1]. BTX is also classified as a human carcinogen and is a risk factor for leukemia and lymphomas. The regulated standard concentration of benzene is 1.0 ppb (3 μg/m3) in Japan. The guidelines for the upper indoor concentration limits of toluene and xylene are 70 ppb (260 μg/m3) and 200 ppb (870 μg/m3), respectively [2]. Because of BTX's acute toxicities, there has been an increasing need for highly sensitive, rapidly responding, portable devices for monitoring trace levels of them in various environmental and industrial applications.
Much work has been done on BTX sensing, using, for example, electronic noses [3, 4], chromatography [5], and electrochemical sensors [6]; these detectors are accurate, yet bulky and expensive, and require high operating temperatures. In comparison, optical waveguide (OWG) sensors [7–9] are small in size, highly sensitive, fast in response, operate at room temperature, and offer intrinsically safe detection. Furthermore, they suffer little or no interference in the waveguide element of the sensor and can be made at a very low cost. A simple planar OWG consists of a substrate, a thin top layer (waveguide layer) with a refractive index greater than that of the substrate and the covering material (usually air) [8]. Single-mode Tin-diffused glass waveguide has a
Various correlations in a Heisenberg XXZ spin chain both in thermal equilibrium and under the intrinsic decoherence
Jiang-Tao Cai,Ahmad Abliz,Shu-Shen Li
Physics , 2011, DOI: 10.1007/s10773-012-1362-9
Abstract: In this paper we discuss various correlations measured by the concurrence (C), classical correlation (CC), quantum discord (QD), and geometric measure of discord (GMD) in a two-qubit Heisenberg XXZ spin chain in the presence of an external magnetic field and Dzyaloshinskii-Moriya (DM) anisotropic antisymmetric interaction. Based on the analytically derived expressions for the correlations for the cases of thermal equilibrium and the inclusion of intrinsic decoherence, we discuss and compare the effects of various system parameters on the correlations in different cases. The results show that the anisotropy Jz is crucial for the correlations in the zero-temperature limit of thermal equilibrium but ineffective under intrinsic decoherence, and these quantities decrease as temperature T rises on the whole. Besides, J turned out to be constructive, but B detrimental, in the manipulation and control of various quantities both in thermal equilibrium and under the intrinsic decoherence, which can be avoided by tuning other system parameters, while D is constructive in thermal equilibrium but destructive in the case of intrinsic decoherence in general. In addition, for the initial state $|\Psi_1(0) > = \frac{1}{\sqrt{2}} (|01 > + |10 >)$, all the correlations except the CC exhibit a damping oscillation to a stable value larger than zero over time, while for the initial state $|\Psi_2(0) > = \frac{1}{\sqrt{2}} (|00 > + |11 >)$, all the correlations monotonically decrease, but CC still remains maximum. Moreover, there is not a definite ordering of these quantities in thermal equilibrium, whereas there is a descending order of the CC, C, GMD and QD under the intrinsic decoherence with a nonnull B when the initial state is $|\Psi_2(0) >$.
Dade isometry
In mathematical finite group theory, the Dade isometry is an isometry from class function on a subgroup H with support on a subset K of H to class functions on a group G (Collins 1990, 6.1). It was introduced by Dade (1964) as a generalization and simplification of an isometry used by Feit & Thompson (1963) in their proof of the odd order theorem, and was used by Peterfalvi (2000) in his revision of the character theory of the odd order theorem.
Definitions
Suppose that H is a subgroup of a finite group G, K is an invariant subset of H such that if two elements in K are conjugate in G, then they are conjugate in H, and π a set of primes containing all prime divisors of the orders of elements of K. The Dade lifting is a linear map f → fσ from class functions f of H with support on K to class functions fσ of G, which is defined as follows: fσ(x) is f(k) if there is an element k ∈ K conjugate to the π-part of x, and 0 otherwise. The Dade lifting is an isometry if for each k ∈ K, the centralizer CG(k) is the semidirect product of a normal Hall π' subgroup I(K) with CH(k).
Tamely embedded subsets in the Feit–Thompson proof
The Feit–Thompson proof of the odd-order theorem uses "tamely embedded subsets" and an isometry from class functions with support on a tamely embedded subset. If K1 is a tamely embedded subset, then the subset K consisting of K1 without the identity element 1 satisfies the conditions above, and in this case the isometry used by Feit and Thompson is the Dade isometry.
References
• Collins, Michael J. (1990), Representations and characters of finite groups, Cambridge Studies in Advanced Mathematics, vol. 22, Cambridge University Press, ISBN 978-0-521-23440-5, MR 1050762
• Dade, Everett C. (1964), "Lifting group characters", Annals of Mathematics, Second Series, 79 (3): 590–596, doi:10.2307/1970409, ISSN 0003-486X, JSTOR 1970409, MR 0160813
• Feit, Walter (1967), Characters of finite groups, W. A. Benjamin, Inc., New York-Amsterdam, ISBN 9780805324341, MR 0219636
• Feit, Walter; Thompson, John G. (1963), "Solvability of groups of odd order", Pacific Journal of Mathematics, 13: 775–1029, doi:10.2140/pjm.1963.13.775, ISSN 0030-8730, MR 0166261
• Peterfalvi, Thomas (2000), Character theory for the odd order theorem, London Mathematical Society Lecture Note Series, vol. 272, Cambridge University Press, doi:10.1017/CBO9780511565861, ISBN 978-0-521-64660-4, MR 1747393
Anderson's theorem
In mathematics, Anderson's theorem is a result in real analysis and geometry which says that the integral of an integrable, symmetric, unimodal, non-negative function f over an n-dimensional convex body K does not decrease if K is translated inwards towards the origin. This is a natural statement, since the graph of f can be thought of as a hill with a single peak over the origin; however, for n ≥ 2, the proof is not entirely obvious, as there may be points x of the body K where the value f(x) is larger than at the corresponding translate of x.
Anderson's theorem, named after Theodore Wilbur Anderson, also has an interesting application to probability theory.
Statement of the theorem
Let K be a convex body in n-dimensional Euclidean space Rn that is symmetric with respect to reflection in the origin, i.e. K = −K. Let f : Rn → R be a non-negative, symmetric, globally integrable function; i.e.
• f(x) ≥ 0 for all x ∈ Rn;
• f(x) = f(−x) for all x ∈ Rn;
• $\int _{\mathbb {R} ^{n}}f(x)\,\mathrm {d} x<+\infty .$
Suppose also that the super-level sets L(f, t) of f, defined by
$L(f,t)=\{x\in \mathbb {R} ^{n}|f(x)\geq t\},$
are convex subsets of Rn for every t ≥ 0. (This property is sometimes referred to as being unimodal.) Then, for any 0 ≤ c ≤ 1 and y ∈ Rn,
$\int _{K}f(x+cy)\,\mathrm {d} x\geq \int _{K}f(x+y)\,\mathrm {d} x.$
Application to probability theory
Given a probability space (Ω, Σ, Pr), suppose that X : Ω → Rn is an Rn-valued random variable with probability density function f : Rn → [0, +∞) and that Y : Ω → Rn is an independent random variable. The probability density functions of many well-known probability distributions are p-concave for some p, and hence unimodal. If they are also symmetric (e.g. the Laplace and normal distributions), then Anderson's theorem applies, in which case
$\Pr(X\in K)\geq \Pr(X+Y\in K)$
for any origin-symmetric convex body K ⊆ Rn.
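Anderson's inequality can be spot-checked numerically. The sketch below (illustrative only; all function names are ours, not from the article) uses midpoint quadrature to integrate the standard bivariate normal density — which is symmetric and unimodal — over a square K translated by cy, and confirms that the integral does not increase as c grows from 0 to 1.

```python
import math

def gauss2(x, y):
    # standard bivariate normal density: symmetric, unimodal, integrable
    return math.exp(-0.5 * (x * x + y * y)) / (2.0 * math.pi)

def integral_over_box(shift, half=1.0, n=200):
    # midpoint-rule approximation of \int_K f(x + shift) dx
    # over the origin-symmetric box K = [-half, half]^2
    h = 2.0 * half / n
    s = 0.0
    for i in range(n):
        for j in range(n):
            x = -half + (i + 0.5) * h
            y = -half + (j + 0.5) * h
            s += gauss2(x + shift[0], y + shift[1])
    return s * h * h

y = (1.5, 0.8)
vals = [integral_over_box((c * y[0], c * y[1])) for c in (0.0, 0.5, 1.0)]
# Anderson's theorem predicts the sequence is non-increasing in c
assert vals[0] >= vals[1] >= vals[2]
```

Monotonicity in c follows from the theorem itself: for 0 ≤ c₁ ≤ c₂ ≤ 1, apply the statement with translate c₂y and scaling factor c₁/c₂.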
\begin{document}
\title{A Multivariate Graphical Stochastic Volatility Model} \author{Yuan Cheng and Alex Lenkoski\footnote{\noindent \textit{Corresponding author address:} Alex Lenkoski, Institute of Applied Mathematics, Heidelberg University, Im Neuenheimer Feld 294, 69120 Heidelberg, Germany \newline{E-mail: [email protected]}}\\ \textit{Heidelberg University, Germany}} \maketitle \begin{abstract} \noindent The Gaussian Graphical Model (GGM) is a popular tool for incorporating sparsity into joint multivariate distributions. The G-Wishart distribution, a conjugate prior for precision matrices satisfying general GGM constraints, has now been in existence for over a decade. However, due to the lack of a direct sampler, its use has been limited in hierarchical Bayesian contexts, relegating mixing over the class of GGMs mostly to situations involving standard Gaussian likelihoods. Recent work, however, has developed methods that couple model and parameter moves, first through reversible jump methods and later by direct evaluation of conditional Bayes factors and subsequent resampling. Further, methods for avoiding prior normalizing constant calculations--a serious bottleneck and source of numerical instability--have been proposed. We review and clarify these developments and then propose a new methodology for GGM comparison that blends many recent themes. Theoretical developments and computational timing experiments reveal an algorithm that has limited computational demands and dramatically improves on computing times of existing methods. We conclude by developing a parsimonious multivariate stochastic volatility model that embeds GGM uncertainty in a larger hierarchical framework. The method is shown to be capable of adapting to the extreme swings in market volatility experienced in 2008 after the collapse of Lehman Brothers, offering considerable improvement in posterior predictive distribution calibration.
\end{abstract}
\section{Introduction} \indent The Gaussian graphical model (GGM) has received widespread consideration \citep[see][]{jones_et_2005} and estimators obeying graphical constraints in standard Gaussian sampling were proposed as early as \citet{dempster_1972}. Initial incorporation of GGMs in Bayesian estimation largely focused on decomposable graphs \citep{dawid_lauritzen_1993}, since prior distributions factorize into products of Wishart distributions. \citet{roverato_2002} proposes a generalized extension of the Hyper-Inverse Wishart distribution for covariance matrices $\boldsymbol{\Sigma}$ over arbitrary graphs. \citet{atay-kayis_massam_2005} turn this into a prior specified for precision matrices $\boldsymbol{K}$ and outline a Monte Carlo (MC) method that enables pairwise model comparisons. Following \citet{letac_massam_2007} and \citet{rajaratnam_et_2008}, \citet{lenkoski_dobra_2011} term this distribution the G-Wishart, and propose computational improvements to direct model comparison and model search.\\ \indent A number of samplers for precision matrices under a G-Wishart distribution have been proposed. These involve either block Gibbs sampling \citep{piccioni_2000}, Metropolis-Hastings (MH) moves \citep{mitsakakis_et_2011,dobra_lenkoski_2011, dobra_et_2011}, or rejection sampling \citep{wang_carvalho_2010}. \citet{dobra_et_2011} show that the rejection sampler of \citet{wang_carvalho_2010} suffers from extremely low acceptance probabilities in even moderate dimensions. \citet{wang_li_2012} conclusively show that block Gibbs sampling is computationally more efficient and exhibits considerably less autocorrelation than the MH methods.\\ \indent The block Gibbs sampler provides a Markov chain Monte Carlo (MCMC) sample. 
When the likelihood assumes standard Gaussian sampling, determining posterior expectations of $\boldsymbol{K}$ can technically be performed as in \citet{lenkoski_dobra_2011}, whereby model probabilities are first directly assessed via stochastic search, and model averaged samples are then collected using block Gibbs over each model. However, when the GGM is specified over latent data in a hierarchical Bayesian framework, such an approach is no longer valid. This is due to the use of the matrix $\boldsymbol{K}$ in updating other hyperparameters as well as its involvement in updating the latent Gaussian factors.\\ \indent \citet{dobra_lenkoski_2011} propose a reversible jump MCMC (which for brevity we refer to as RJ) method \citep{green_1995} that simultaneously updates the GGM and its associated $\boldsymbol{K}$, and embed the GGM in a semiparametric Gaussian copula. \citet{dobra_et_2011} expand the RJ algorithm and show how GGMs may be used to model dependent random effects in a generalized linear model context, focusing on lattice data. \citet{wang_li_2012} utilize conditional properties of G-Wishart variates that enables model moves through calculation of a conditional Bayes factor (CBF) \citep{dickey_gunel_1978} and subsequently update $\boldsymbol{K}$ through direct Gibbs sampling. \citet{wang_li_2012} also explore the use of a double MH algorithm \citep{liang_2010} to avoid the computationally expensive and numerically unstable MC approximation of normalizing constants proposed by \citet{atay-kayis_massam_2005}.\\ \indent We investigate an alternative method for simultaneously updating the GGM and associated $\boldsymbol{K}$ in hierarchical Bayesian settings. Our method is built on the framework outlined in \citet{wang_li_2012}, but blends several of the developments above to yield an algorithm with considerably less computational cost. 
We show that use of a CBF on the Cholesky decomposition of a permuted version of $\boldsymbol{K}$, originally proposed in an early version of \citet{wang_li_2012}, enables fast model moves; use of methods for sparse Cholesky decompositions \citep{rue_2001} further reduces computational overhead. We then show that while both \citet{dobra_et_2011} and \citet{wang_li_2012} indicate the random walk MH sampler of \citet{mitsakakis_et_2011} is not suitable for posterior sampling, it is especially useful in the context of the double MH algorithm. We are therefore able to specify a model move algorithm which requires little computational effort and exhibits no numerical instability.\\ \indent Simulation experiments compare our new algorithm to the algorithm of \citet{wang_li_2012} (which we refer to as WL). Both methods perform equally well at determining posterior distributions. However, we show that while the WL approach is theoretically appealing, it suffers significant computational overhead on account of many matrix inversions. By contrast, our new approach exhibits a dramatic improvement in computation time.\\ \indent We conclude with an example of how GGMs may be embedded in hierarchical Bayesian models. GGMs have been shown to yield parsimonious joint distributions useful in financial applications \citep{carvalho_west_2007, rodriguez_et_2011}. However, existing studies have largely ignored heteroskedasticity in financial data and relied on datasets taken over periods with relatively little financial turmoil. To address this, we propose a parsimonious multivariate stochastic volatility model that incorporates GGM uncertainty. We then model stock returns for $20$ assets during the period surrounding the financial crash of 2008. We show that in the periods leading up to the crash, and 9 months after the most turbulent period, this method yields no improvement over an approach that does not model heteroskedasticity. 
However, during the period of heightened volatility, our new model is able to adapt quickly and yields considerably more calibrated predictive distributions.\\ \indent The article is organized as follows. In Section~\ref{sec:gwish} we review the G-Wishart distribution, establish results necessary for CBF calculations and describe the block Gibbs sampler. Section~\ref{sec:simulation} conducts a simulation study showing the computational advantage gained by our new algorithm. In Section~\ref{sec:vol} we describe our multivariate graphical stochastic volatility model and give results over the data mentioned above. We conclude in Section~\ref{sec:conclude}. \section{The G-Wishart Distribution}\label{sec:gwish} \subsection{Review of Basic G-Wishart Properties} \indent Suppose that we collect data $\mc{D} = \{\boldsymbol{Z}^{(1)}, \dots, \boldsymbol{Z}^{(n)}\}$ such that $\boldsymbol{Z}^{(j)}\sim \mc{N}_p(0,\boldsymbol{K}^{-1})$ independently for $j \in \{1,\dots,n\}$, where $\boldsymbol{K}\in\bf{P}$, the space of $p\times p$ positive definite matrices. This sample has likelihood $$
pr(\mc{D}|\boldsymbol{K}) = (2\pi)^{-np/2}|\boldsymbol{K}|^{n/2} \exp\left(-\frac{1}{2}\inner{\boldsymbol{K},\boldsymbol{U}}\right), $$ where $\inner{A,B} = tr(A'B)$ denotes the trace inner product and $\boldsymbol{U} = \sum_{i = 1}^n \boldsymbol{Z}^{(i)}\boldsymbol{Z}^{(i)'}$.\\ \indent Further suppose that $G = (V,E)$ is a GGM where $V = \{1,\dots,p\}$ and $E \subset V\times V$. We will slightly abuse notation throughout, by writing $(i,j) \in G$ to indicate that the edge $(i,j)$ is in the edge set $E$. Associated with $G$ is a subspace $\bf{P}_G \subset \bf{P}$ such that $\boldsymbol{K}\in \bf{P}_G$ implies that $\boldsymbol{K}\in\bf{P}$ and $K_{ij} = 0$ whenever $(i,j) \not\in G$. The G-Wishart distribution \citep{roverato_2002,atay-kayis_massam_2005} $\mc{W}_G(\delta,\boldsymbol{D})$ assigns prior probability to $\boldsymbol{K}\in\bf{P}_G$ as $$
pr(\boldsymbol{K}|\delta,\boldsymbol{D},G) = \frac{1}{I_G(\delta,\boldsymbol{D})}|\boldsymbol{K}|^{(\delta - 2)/2}\exp\left(-\frac{1}{2}\inner{\boldsymbol{K},\boldsymbol{D}}\right)\bs{1}_{\boldsymbol{K}\in\bf{P}_G}. $$
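For intuition, evaluating this density up to the constant $I_G(\delta,\boldsymbol{D})$ is straightforward once membership $\boldsymbol{K}\in\bf{P}_G$ has been checked. The following Python sketch (our illustration, not code from the paper; all names are ours) verifies the zero pattern imposed by $G$, tests positive definiteness via a Cholesky factorization, and returns the unnormalized log-density $\frac{\delta-2}{2}\log|\boldsymbol{K}| - \frac{1}{2}\inner{\boldsymbol{K},\boldsymbol{D}}$.

```python
import math

def cholesky(A):
    # lower-triangular L with L L^T = A, or None if A is not positive definite
    n = len(A)
    L = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1):
            s = sum(L[i][k] * L[j][k] for k in range(j))
            if i == j:
                d = A[i][i] - s
                if d <= 0.0:
                    return None
                L[i][i] = math.sqrt(d)
            else:
                L[i][j] = (A[i][j] - s) / L[j][j]
    return L

def gwishart_logdens(K, G, delta, D):
    # unnormalized log-density of W_G(delta, D); None if K is outside P_G
    n = len(K)
    for i in range(n):
        for j in range(i + 1, n):
            if (i, j) not in G and abs(K[i][j]) > 1e-12:
                return None                       # zero constraint K_ij = 0
    L = cholesky(K)
    if L is None:
        return None                               # not positive definite
    logdet = 2.0 * sum(math.log(L[i][i]) for i in range(n))
    trKD = sum(K[i][j] * D[j][i] for i in range(n) for j in range(n))
    return 0.5 * (delta - 2.0) * logdet - 0.5 * trKD

# 3-node path graph: edge (0,2) is absent, so K[0][2] must vanish
G = {(0, 1), (1, 2)}
K = [[2.0, 0.5, 0.0], [0.5, 2.0, 0.5], [0.0, 0.5, 2.0]]
D = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]
val = gwishart_logdens(K, G, 3.0, D)
assert val is not None
```

With $\delta = 3$ and $\boldsymbol{D} = \mathbb{I}_3$ the value reduces to $\frac{1}{2}\log|\boldsymbol{K}| - \frac{1}{2}\mathrm{tr}(\boldsymbol{K})$, which for this $\boldsymbol{K}$ is $\frac{1}{2}\log 7 - 3$.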
The normalizing constant $I_G$ is in general not known to have an explicit form, and \citet{atay-kayis_massam_2005} propose an MC approximation for this factor. Furthermore, the G-Wishart is conjugate and thus $pr(\boldsymbol{K}|\mc{D}, G) = \mc{W}_G(\delta + n, \boldsymbol{D}^*)$ where $\boldsymbol{D}^{*} = \boldsymbol{D} + \boldsymbol{U}$.\\ \indent Let $\bs{\Phi}$ be the upper triangular matrix such that $\bs{\Phi}'\bs{\Phi} = \boldsymbol{K}$, the Cholesky decomposition. \citet{rue_2001} notes that we may associate with $G$ another graph $F$, called the \emph{fill-in} graph, such that $G\subset F$, $\Phi_{ij} = 0$ when $(i,j)\notin F$ and \begin{equation}\label{eq:completephi} \Phi_{ij} = -\frac{1}{\Phi_{ii}}\sum_{l = 1}^{i - 1}\Phi_{li}\Phi_{lj} \end{equation} when $i < j$ and $(i,j)\in F\setminus G$. \citet{rue_2001} outlines a straightforward method for constructing a graph $F$ from $G$ and explains how use of node reordering software can minimize fill-in.\\ \indent \citet{roverato_2002} shows that if $K\sim \mc{W}_G(\delta, \boldsymbol{D})$ then \begin{equation}\label{eq:phiwishart}
pr(\Phi|\delta,\boldsymbol{D},G) = \prod_{i = 1}^{p} \Phi_{ii}^{\delta + \nu_i^G - 1}\exp\left(-\frac{1}{2}\inner{\bs{\Phi}'\bs{\Phi},\boldsymbol{D}}\right), \end{equation} where $\nu_i^G$ is the number of nodes in $\{i + 1, \dots, p\}$ that are connected to node $i$ in $G$. We especially note that if $\boldsymbol{K}\sim \mc{W}_G(\delta, \mathbb{I}_p)$, then \begin{equation}\label{eq:phiwishartiden}
pr(\bs{\Phi}|\delta,\mathbb{I}_p,G) = \exp\left(-\frac{1}{2}\sum_{(i,j)\in F} \Phi_{ij}^2\right)\prod_{i = 1}^{p} \Phi_{ii}^{\delta + \nu_i^G - 1}\exp\left(-\frac{1}{2} \Phi_{ii}^2\right). \end{equation} \subsection{Sampling Methods}\label{sec:gwish_samplers} \indent We review two MCMC methods for approximate sampling from a $\mc{W}_G(\delta,\boldsymbol{D})$. See \citet{dobra_et_2011} and \citet{wang_li_2012} for more detailed reviews of the many methods that have been proposed.\\ \indent Let $\mc{C}$ denote the cliques of $G$. In the following, we consider a clique to be a maximally complete subgraph, though \citet{wang_li_2012} note that this can be relaxed to any complete subgraph. \citet{piccioni_2000} shows that for any $C\in\mc{C}$, \begin{equation}\label{eq:bgibbs_update} K_C - K_{C,V\setminus C}K_{V\setminus C}^{-1}K_{V\setminus C,C} \sim \mc{W}(\delta, D_C), \end{equation} where $\mc{W}$ denotes a standard Wishart variate. The expression (\ref{eq:bgibbs_update}) thereby gives the full conditionals of $\mc{W}_G(\delta,\boldsymbol{D})$. The block Gibbs sampler thus cycles through $\mc{C}$, updating each component using (\ref{eq:bgibbs_update}). \citet{wang_li_2012} convincingly show that for posterior inference of $\mc{W}_{G}(\delta + n, \boldsymbol{D}^{*})$ the block Gibbs sampler outperforms all other proposed methods, both in terms of computing time and mixing. The authors also provide a useful review of the algorithm and indicate its broad flexibility. Throughout, we use the block Gibbs sampler for updating the matrix $\boldsymbol{K}$ in the posterior.\\ \indent Both \citet{dobra_et_2011} and \citet{wang_li_2012} show that the random walk MH (RWMH) algorithm of \citet{mitsakakis_et_2011} is unsuitable for posterior inference. However, we show below that it is especially effective to use in the double MH algorithm. 
In particular, suppose that we wish to sample from $\mathcal{W}_G(\delta, \mathbb{I}_p)$ and $\boldsymbol{K}$ is the current state of an MCMC chain. Then the RWMH algorithm performs the following:
\begin{itemize}
\item[1.] Determine $\bs{\Phi}$ from $\boldsymbol{K}$
\item[2.] Propose $\bs{\Psi}$:
\begin{itemize}
\item[a.] For each $i \in V$, sample $c\sim \chi^2_{\delta + \nu_i^G}$ and set $\Psi_{ii} = c^{1/2}$
\item[b.] Sample $\Psi_{ij} \sim \mc{N}(0,1)$ for $(i,j)\in G$, $i < j$
\item[c.] Complete $\bs{\Psi}$ using (\ref{eq:completephi}) for all $(i,j) \in F\setminus G, i < j$
\end{itemize}
\item[3.] Compute
\begin{equation}\label{eq:rwmh_update}
\alpha = \exp\left(-\frac{1}{2}\sum_{(i,j)\in F\setminus G}(\Psi_{ij}^2 - \Phi_{ij}^2)\right)
\end{equation}
\item[4.] With probability $\min\{\alpha,1\}$ set $\boldsymbol{K} = \bs{\Psi}'\bs{\Psi}$
\end{itemize}
The appeal of the RWMH algorithm in sampling from $\mc{W}_G(\delta,\mathbb{I}_p)$ is the simplicity of the factor in (\ref{eq:rwmh_update}). Through the use of node reordering software, which minimizes the size of $F\setminus G$, this expression may require few calculations. While the algorithm does not perform well in the posterior, and the calculation (\ref{eq:rwmh_update}) as well as step (2.c) become more involved when $\boldsymbol{D} \neq \mathbb{I}_p$, we show below that in the particular case of the double MH algorithm using the prior $\mc{W}_G(\delta,\mathbb{I}_p)$, this method is extremely useful. \subsection{Conditional Bayes Factors}\label{sec:cbfs} \indent Prior to \citet{wang_li_2012}, model moves between two graphs $G$ and $G'$ focused on approximating the ratio \begin{equation}\label{eq:bf}
\frac{pr(G|\mc{D})}{pr(G'|\mc{D})} = \frac{pr(\mc{D}|G)}{pr(\mc{D}|G')}\times \frac{pr(G)}{pr(G')}, \end{equation} first through MC \citep{atay-kayis_massam_2005,jones_et_2005}, then a combination of MC and Laplace approximation \citep{lenkoski_dobra_2011} and ultimately through RJ \citep{dobra_lenkoski_2011,dobra_et_2011}.\\ \indent Suppose that $G\subset G'$ which differ only by the edge $e = (i,j)\in G'$ and that $\boldsymbol{K}\in\bf{P}_G$. Let $\boldsymbol{K}^{-e} = \boldsymbol{K}\setminus\{K_{ij},K_{ji},K_{jj}\}$. In lieu of (\ref{eq:bf}), \citet{wang_li_2012} consider ratios of the form \begin{equation}\label{eq:cbf_k}
\frac{pr(G|\boldsymbol{K}^{-e},\mc{D})}{pr(G'|\boldsymbol{K}^{-e},\mc{D})} = \frac{pr(\mc{D},\boldsymbol{K}^{-e}|G)}{pr(\mc{D},\boldsymbol{K}^{-e}|G')}\times\frac{pr(G)}{pr(G')} \end{equation} which are related to the conditional Bayes factors (CBFs) of \cite{dickey_gunel_1978}.\\ \indent Using properties related to the form (\ref{eq:bgibbs_update}) \citet{wang_li_2012} show that \begin{equation}\label{eq:wang_li_cbf}
\frac{pr(\mc{D},\boldsymbol{K}^{-e}|G)}{pr(\mc{D},\boldsymbol{K}^{-e}|G')} = H(\delta + n,e,\boldsymbol{K}^{-e},\boldsymbol{D}^{*}) \frac{I_G(\delta,\boldsymbol{D})}{I_{G'}(\delta,\boldsymbol{D})} \end{equation} where, in general $$
H(d,e,\boldsymbol{K}^{-e},\boldsymbol{S}) = \frac{I(d, S_{jj})}{J(d, S_{ee}, A_{11})}\left(\frac{|K^{0}_{V\setminus j}|}{|K^{1}_{V\setminus e}|}\right)^{(d - 2)/2}\exp\left(-\frac{1}{2}\inner{\boldsymbol{S}, K^{0} - K^{1}}\right) $$ where $I(b,c) = c^{-b/2}2^{b/2}\Gamma(b/2)$, $$ J(h,\boldsymbol{B},b) = \left(\frac{2\pi}{B_{22}}\right)^{1/2}b^{\frac{(h-1)}{2}}I(h,B_{22})\exp\left(-\frac{b}{2}\left[B_{11}-\frac{B_{12}^2}{B_{22}}\right]\right), $$ such that $\boldsymbol{A} = \boldsymbol{K}_{ee} - \boldsymbol{K}_{e,V\setminus e} \boldsymbol{K}^{-1}_{V\setminus e} \boldsymbol{K}_{e,V\setminus e}$. The matrix $\boldsymbol{K}^{0}$ is equal to $\boldsymbol{K}$ except that $K^{0}_{jj} = \boldsymbol{K}_{j,V\setminus j} \boldsymbol{K}^{-1}_{V\setminus j} \boldsymbol{K}_{j,V\setminus j}$ and $K^{0}_{ij}=K^{0}_{ji} = 0$. Finally, the matrix $\boldsymbol{K}^{1}$ is equal to $\boldsymbol{K}$ except that $\boldsymbol{K}^{1}_{e} = \boldsymbol{K}_{e,V\setminus e} \boldsymbol{K}^{-1}_{V\setminus e} \boldsymbol{K}_{e,V\setminus e}$.\\ \indent By using the CBF in (\ref{eq:wang_li_cbf}), \citet{wang_li_2012} propose model moves that do not rely on RJ methods, and after assessing which graph to move to, the parameter $K_{jj}$, as well as $K_{ij}$ if $e$ is in the accepted graph, are resampled according to their conditional distributions given $\boldsymbol{K}^{-e}$. This method is appealing, as it offers an automatic manner of moving between graphs and does not rely on the tuning parameters used in the RJ methods of \citet{dobra_lenkoski_2011} and \citet{dobra_et_2011}.\\ \indent While the result has significant theoretical appeal we show that computation of the factor $H(\delta + n,e,\boldsymbol{K}^{-e},\boldsymbol{D}^{*})$ is extremely costly, even in low dimensions. 
This is due to the formation of the matrices $\boldsymbol{K}^{0}$ and $\boldsymbol{K}^{1}$, which require the solution of systems involving large matrices, in particular, $\boldsymbol{K}_{V\setminus j}$ and $\boldsymbol{K}_{V\setminus e}$.\\ \indent Suppose now that $G$ and $G'$ differ only by the edge $f = (p - 1,p)$, again with $G\subset G'$. We consider the CBF $$
\frac{pr(\mc{D},\bs{\Phi}^{-f}|G')}{pr(\mc{D},\bs{\Phi}^{-f}|G)}, $$ where $\bs{\Phi}^{-f} = \bs{\Phi}\setminus\{\Phi_{p-1,p},\Phi_{pp}\}$. In Appendix A we show that \begin{equation}\label{eq:cl_update}
\frac{pr(\mc{D},\bs{\Phi}^{-f}|G')}{pr(\mc{D},\bs{\Phi}^{-f}|G)} = N(\bs{\Phi}^{-f},\boldsymbol{D}^{*})\frac{I_{G}(\delta,\boldsymbol{D})}{I_{G'}(\delta,\boldsymbol{D})}, \end{equation} with, in general, $$ N(\bs{\Phi}^{-f},\boldsymbol{S}) = \Phi_{p-1,p-1}\left(\frac{2\pi}{S_{pp}}\right)^{1/2}\exp\left(\frac{1}{2} S_{pp} (\phi_0 - \mu )^2\right) $$ where $\mu = \Phi_{p-1,p-1} S_{p-1,p}/S_{pp}$, and $\phi_0 = -\Phi_{p-1,p-1}^{-1}\sum_{l=1}^{p-2}\Phi_{l,p-1}\Phi_{l,p}$.\\ \indent This result originally appeared in an early version of \citet{wang_li_2012}. In order to update a general edge $e$, we propose determining a permutation $\vartheta$ of $V$ such that the nodes of $V\setminus e$ are reordered to reduce the fill-in of the graph $G_{V\setminus e}$ and, finally, the edge $e$ is placed in the $(p - 1,p)$ position. Equation (\ref{eq:cl_update}) is then calculated after permuting $\boldsymbol{K}$ and $\boldsymbol{D}^{*}$ according to $\vartheta$.\\ \indent The benefit of this method is the reduced computational overhead required to compute (\ref{eq:cl_update}). The method requires relabeling the matrices $\boldsymbol{K}$ and $\boldsymbol{D}^{*}$ and determining the Cholesky decomposition of the permuted version of $\boldsymbol{K}$. Using node reordering software to minimize fill-in of $G_{V\setminus e}$ proves useful in the developments below. \subsection{Avoiding Normalizing Constant Calculation}\label{sec:dmh} \indent Both (\ref{eq:wang_li_cbf}) and (\ref{eq:cl_update}) require determination of the prior normalizing constants $I_G$ and $I_{G'}$. While the MC method of \citet{atay-kayis_massam_2005} enables these factors to be approximated, the routine can be subject to numerical instability \citep{lenkoski_dobra_2011,wang_li_2012} and involves significant computational effort.\\ \indent \citet{wang_li_2012} propose a method for avoiding the use of the MC approximation for prior normalizing constants. 
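The factor $N(\bs{\Phi}^{-f},\boldsymbol{S})$ defined above involves only scalar operations once $\bs{\Phi}$ is available; a minimal sketch (our own illustration, with `Phi` the upper-triangular Cholesky factor as a nested list and 0-based indexing, so the paper's entry $(p-1,p)$ is `Phi[p-2][p-1]`):

```python
import math

def N_factor(Phi, S):
    """N(Phi^{-f}, S) for the edge f = (p-1, p), following the display above:
    N = Phi_{p-1,p-1} * sqrt(2*pi / S_pp) * exp(0.5 * S_pp * (phi0 - mu)^2)."""
    p = len(S)
    Spp = S[p - 1][p - 1]
    mu = Phi[p - 2][p - 2] * S[p - 2][p - 1] / Spp
    # phi0 = -(1 / Phi_{p-1,p-1}) * sum_{l=1}^{p-2} Phi_{l,p-1} * Phi_{l,p}
    phi0 = -sum(Phi[l][p - 2] * Phi[l][p - 1]
                for l in range(p - 2)) / Phi[p - 2][p - 2]
    return (Phi[p - 2][p - 2] * math.sqrt(2 * math.pi / Spp)
            * math.exp(0.5 * Spp * (phi0 - mu) ** 2))
```

In contrast to the factor $H$, no matrix inversion or determinant is required.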
Their method employs the double Metropolis-Hastings (MH) algorithm of \citet{liang_2010}, which is an extension of the exchange algorithm developed by \citet{murray_et_2006}.\\ \indent We briefly review the implementation of the double MH algorithm in \citet{wang_li_2012}. Suppose that $(\boldsymbol{K},G)$ is the current state of the MCMC chain and we propose to move to $G'$ by adding the edge $e$ to $G$. The double MH algorithm then forms a copy $\tilde{\boldsymbol{K}}$ of $\boldsymbol{K}$ and resamples $\tilde{K}_{ij}$ and $\tilde{K}_{jj}$ according to $G'$. It then updates $\tilde{\boldsymbol{K}}$ via block Gibbs according to $G'$. Equation (\ref{eq:wang_li_cbf}) is then replaced with \begin{equation}\label{eq:wang_li_dmh} \frac{H(\delta + n, e,\boldsymbol{K}^{-e},\boldsymbol{D}^{*})}{H(\delta, e,\tilde{\boldsymbol{K}}^{-e},\boldsymbol{D})}. \end{equation} We see that the expression (\ref{eq:wang_li_dmh}) has replaced the prior normalizing constants with an evaluation of $H$ under the prior at $\tilde{\boldsymbol{K}}$ \citep[see][for theoretical justifications of this procedure]{murray_et_2006,liang_2010}. This is clearly beneficial, as it avoids the need for involved MC approximation. Unfortunately, the procedure as implemented in \citet{wang_li_2012} requires a full run of the block Gibbs sampler, as well as determination of the matrices $\boldsymbol{K}^{0}$ and $\boldsymbol{K}^{1}$, and therefore involves many large matrix operations.\\ \indent We propose an alternative implementation of the double MH algorithm. Again suppose that $(\boldsymbol{K},G)$ is the current state and we propose to move to $G'$ by adding the edge $f = (p-1,p)$. We first determine $\bs{\Phi}$ from $\boldsymbol{K}$. We then update $\bs{\Phi}$ to $\tilde{\bs{\Phi}}$ using the RWMH algorithm in Section~\ref{sec:gwish_samplers} relative to $G'$. 
Equation (\ref{eq:cl_update}) is then replaced with \begin{equation}\label{eq:cl_dmh} \frac{N(\bs{\Phi}^{-f},\boldsymbol{D}^{*})}{N(\tilde{\bs{\Phi}}^{-f},\boldsymbol{D})}. \end{equation} \indent We can immediately see that the expression (\ref{eq:cl_dmh}) is considerably simpler than (\ref{eq:wang_li_dmh}); it requires neither additional matrix inversions nor the evaluation of any trace inner products. Furthermore, the generation of the auxiliary variables through the RWMH is considerably less demanding computationally than the use of the block Gibbs sampler, especially when $\boldsymbol{D} = \mathbb{I}_p$, a common setting in practice. \subsection{Algorithms for Full Posterior Determination}\label{sec:algo} In this section we outline the two algorithms we will consider for full posterior determination. Both algorithms create a sequence $\{(\boldsymbol{K}^{[1]},G^{[1]}),\dots, (\boldsymbol{K}^{[S]},G^{[S]})\}$ where $\boldsymbol{K}^{[s]} \in \bf{P}_{G^{[s]}}$. Given the current state $(\boldsymbol{K}^{[s]},G^{[s]})$, the WL algorithm proceeds as follows. \begin{itemize} \item[0.] Set $\boldsymbol{K} = \boldsymbol{K}^{[s]}$ and $G = G^{[s]}$. \item[1.] For each edge $e$, do: \begin{itemize} \item[a.] if $e \notin G$ attempt to update $G$ to $G' = G\cup e$ with probability $$
\frac{q(G'|\boldsymbol{K}^{-e},\mathcal{D})}{q(G|\boldsymbol{K}^{-e},\mathcal{D})} = \frac{pr(G')H(\delta + n,e,\boldsymbol{K}^{-e},D^{*})}{pr(G)} $$ if $e \in G$ the ratio is flipped. If $G$ is not to be updated, skip to step c. \item[b.] If attempting to update $G$ to $G'$, sample $\tilde{\boldsymbol{K}}$ as discussed in Section~\ref{sec:dmh} and calculate $$ \alpha = \min\{1, H^{-1}(\delta,e,\tilde{\boldsymbol{K}}, \boldsymbol{D})\} $$ if $e\in G'$, otherwise calculate $$ \alpha = \min\{1, H(\delta,e,\tilde{\boldsymbol{K}}, \boldsymbol{D})\} $$ and with probability $\alpha$ set $G = G'$, otherwise leave it unchanged. \item[c.] Resample $K_{ij}, K_{jj}$ according to $G$. \end{itemize} After attempting to update each edge, set $G^{[s + 1]} = G$. \item[2.] Resample $\boldsymbol{K}^{[s + 1]}$ using the block Gibbs sampler relative to $G^{[s + 1]}$ and the current state of $\boldsymbol{K}$. \end{itemize} We see that in one iteration of the WL algorithm, each edge is potentially updated in the graph. Our new algorithm (which we call CL) also follows this idea, and proceeds as follows \begin{itemize} \item[0.] Set $\boldsymbol{K} = \boldsymbol{K}^{[s]}$ and $G = G^{[s]}$ \item[1.] For each edge $e$, do: \begin{itemize} \item[a.] Determine a permutation $\vartheta$ of $V_p$ as discussed in Section~\ref{sec:cbfs}, which places the edge $e$ in the $(p-1,p) = f$ position, and likewise permute $\boldsymbol{K}$, $G$, $\boldsymbol{D}$ and $\boldsymbol{D}^*$. Let $G^{\vartheta}$ denote the permuted version of $G$ and $\bs{\Phi}$ be the Cholesky decomposition of the permuted version of $\boldsymbol{K}$. If $f\notin G^{\vartheta}$ attempt to update $G^{\vartheta}$ to $G' = G^{\vartheta}\cup f$ with probability $$
\frac{q(G'|\bs{\Phi}^{-f},\mathcal{D})}{q(G^{\vartheta}|\bs{\Phi}^{-f},\mathcal{D})} = \frac{pr(G')N(\bs{\Phi}^{-f},D^{*})}{pr(G^{\vartheta})} $$ if $f\in G^{\vartheta}$ the ratio is flipped. If $G^{\vartheta}$ is not to be updated then, skip to step c. \item[b.] If attempting to update $G^{\vartheta}$ to $G'$, sample $\tilde{\bs{\Phi}}$ as discussed in Section~\ref{sec:dmh} and calculate $$ \alpha = \min\{1, N^{-1}(\tilde{\bs{\Phi}}^{-f}, \boldsymbol{D})\} $$ if $f\in G'$, otherwise calculate $$ \alpha = \min\{1, N(\tilde{\bs{\Phi}}^{-f},\boldsymbol{D})\} $$ and with probability $\alpha$ set $G^{\vartheta} = G'$, otherwise leave it unchanged. \item[c.] Resample $\Phi_{p-1,p}, \Phi_{pp}$ according to $G^{\vartheta}$. Then reform $\boldsymbol{K}$ and $G$ by unpermuting the system. \end{itemize} After attempting to update each edge, set $G^{[s + 1]} = G$. \item[2.] Resample $\boldsymbol{K}^{[s + 1]}$ using the block Gibbs sampler relative to $G^{[s + 1]}$ and the current state of $\boldsymbol{K}$. \end{itemize} As we can see, there is somewhat more bookkeeping involved in the implementation of the CL algorithm, as the system is constantly being permuted. However, the reduction in computation time by using the RWMH algorithm and requiring only the calculation of the factors $N(\bs{\Phi}^{-f},\boldsymbol{D}^{*})$ and $N(\tilde{\bs{\Phi}}^{-f},\boldsymbol{D})$ is dramatic, as we show below.\\ \section{Simulation Study}\label{sec:simulation} In this section we conduct a simulation study that compares the method we have developed to the WL algorithm. Our example comes directly from \citet{wang_li_2012}. We consider a situation in which $p = 6$ and let $\boldsymbol{U} = \boldsymbol{Y}\boldsymbol{Y}' = nA^{-1}$ where $n = 18$ and $A_{ii} = 1$ for $i = 1,\dots, 6; A_{i,i + 1} = A_{i + 1, i} = .5$ for $i = 1,\dots,5$ and $A_{16} = A_{61} = .4$. We finally assume the prior $\boldsymbol{K} \sim \mc{W}_G(3,\mathbb{I}_6)$. 
Using exhaustive MC approximation of the entire graph space, \citet{wang_li_2012} show that the posterior probability of each edge is $$
(p_{ij}|A) = \left(\begin{array}{cccccc} 1 & 0.969 & 0.106 & 0.085 & 0.113 & 0.85\\ 0.969 & 1 & 0.98 & 0.098 & 0.081 & 0.115\\ 0.106 & 0.98 & 1 & 0.982 & 0.098 & 0.086\\ 0.085 & 0.098 & 0.982 & 1 & 0.98 & 0.106\\ 0.113 & 0.081 & 0.098 & 0.98 & 1 & 0.97\\ 0.85 & 0.115 & 0.086 & 0.106 & 0.97 & 1\\ \end{array} \right) $$ \indent We use this example to compare the CL algorithm to the WL algorithm. Following \citet{wang_li_2012}, we run both the WL and CL algorithms as described in Section~\ref{sec:algo} for $60,000$ iterations and discard the first $10,000$ iterations as burn-in. Both algorithms were implemented in {\tt R}, though {\tt C++} was used for block-Gibbs updates. We note that if a pure {\tt R} implementation had been used, the time differences between WL and CL would be even more dramatic.\\ \indent We recorded the total computing time and the mean squared errors (MSE) of the posterior inclusion probabilities from the two runs compared with the true values given above. We repeated the entire process $100$ times, each time starting both WL and CL from the same random starting point. Table~\ref{tbl:wl_sim} shows the average computing time in seconds (on a 2.8 GHz desktop computer with 4GB of RAM running Linux), average MSE and standard deviations across the $100$ runs. The first column shows the expected result: even in six dimensions the WL algorithm takes more than 4 times as long to perform the same number of iterations as the CL algorithm. 
This shows the improved efficiency of the proposed method.\\ \begin{table} \caption{Comparison of the CL and WL algorithms for the six-dimensional example.}\label{tbl:wl_sim} \begin{center} \begin{tabular}{lcccc} \hline\hline &\multicolumn{2}{c}{Time (sec)}&\multicolumn{2}{c}{MSE}\\ &Mean&SD&Mean&SD\\ \hline CL & 182.5 & (4.1) &0.0088& (6e-04)\\ WL & 818.4 & (19.2) &0.0349& (0.0025)\\ \hline\hline \end{tabular} \end{center} \end{table} \indent We found the results in the third column surprising, but do not draw broad conclusions from them. It appears that in this example, using $60,000$ iterations, the CL algorithm approaches the true posterior edge expectation more quickly than the WL algorithm. Since both algorithms are correct theoretically, we choose not to emphasize this result. Furthermore, we have determined that by doubling the number of iterations, both approaches yield essentially the exact posterior distribution, though again the WL algorithm takes more than 4 times as long to run.\\ \indent This example was chosen as it appears in \citet{wang_li_2012} and has an exact answer. The fact that the CL algorithm is considerably faster than the WL approach even in six dimensions indicates its broader appeal for searching truly high-dimensional spaces. \section{A Multivariate Graphical Stochastic Volatility Model}\label{sec:vol} \indent Modeling the joint distribution of returns for a large number of assets is an important component of portfolio allocation and risk management. \citet{carvalho_west_2007} and \citet{rodriguez_et_2011} both show that the use of GGMs can substantially improve modeling of joint asset returns. However, heterogeneity in asset returns was, at best, tangentially addressed. The study of \citet{carvalho_west_2007} considered a fixed (decomposable) graph throughout the entire period and assumed that asset returns were identically distributed. 
\citet{rodriguez_et_2011} allowed for mixing over the class of decomposable graphs and also introduced some heteroskedasticity by considering an infinite-dimensional hidden Markov model (iHMM). However, inside groups of observations in the iHMM, variances were assumed constant.\\ \indent Despite these constraints in model assumptions, both studies showed substantial improvements by incorporating GGMs into the precision matrix associated with joint asset returns. We consider a situation in which the notion of homoskedastic, normally distributed asset returns is simply untenable; namely, the period surrounding the financial turbulence associated with the collapse of Lehman Brothers. We show that by utilizing the developments in the previous sections, we are able to specify a parsimonious stochastic volatility model for multivariate asset returns that quickly adapts to changes in market volatility. This model shows the flexibility of the new approach in embedding the GGM in larger hierarchical Bayesian frameworks. \subsection{The stochastic volatility model}\label{sec:model} \indent Let $\boldsymbol{Y}_t$ be the log-returns of $p$ correlated assets. We specify the following hierarchical model for these returns \begin{align}
\boldsymbol{Y}_{t}|\boldsymbol{K},X_t &\sim \mc{N}_p(\bs{0}, \exp(X_t) \boldsymbol{K}^{-1})\label{eq:stoch_vol_lik}\\
X_t|\phi,X_{t - 1},\tau &\sim \mathcal{N}(\phi X_{t -1}, \tau^{-1}) \nonumber\\ \phi &\sim \mc{N}(0,\tau_0)\nonumber\\ \tau &\sim \Gamma(a,b)\nonumber\\
\boldsymbol{K}|G &\sim \mc{W}_{G}(\delta,\boldsymbol{D})\nonumber\\ G &\sim pr(G)\nonumber. \end{align} \indent In the likelihood (\ref{eq:stoch_vol_lik}) we see that asset returns are assumed to be mean-zero. The $X_t$ terms then dictate an overall level of market volatility, while a constant precision parameter $\boldsymbol{K}$ dictates the degree to which asset returns are correlated. While this model is parsimonious, it serves as a useful first departure from previous studies as it explicitly incorporates notions of stochastic volatility. For purposes of identification, we set $X_0 = 0$. In the conclusions section we discuss further possible generalizations to this framework.\\ \indent After collecting a time-series of returns $\boldsymbol{Y}^{(1:T)}$, we then aim to determine the posterior distribution $$
pr(\boldsymbol{K},G,\tau,\phi,\boldsymbol{X}|\boldsymbol{Y}^{(1:T)}) $$
where $\boldsymbol{X} = (X_1,\dots,X_T)$. Furthermore, we may be interested in the posterior predictive distribution $pr(\boldsymbol{Y}^{(T + 1)}|\boldsymbol{Y}^{(1:T)})$. The parameters $\boldsymbol{X}, \phi, \tau$ are updated with standard block MH or Gibbs steps \citep[see][]{rue_held_2005} and the posterior predictive distribution is easily formed from these parameters. However, we note in particular that \begin{equation}
\boldsymbol{K}|G,\tau,\phi,\boldsymbol{X},\boldsymbol{Y}^{(1:T)}\sim \mc{W}_G\left(\delta + T, \boldsymbol{D} +\sum_{t = 1}^{T} \frac{\boldsymbol{Y}^{(t)}\boldsymbol{Y}^{(t)'}}{\exp(X_t)}\right)\label{eq:stoch_post}. \end{equation} From (\ref{eq:stoch_post}) we see why the developments in Section~\ref{sec:gwish} prove useful. We may update $\boldsymbol{K}$ and $G$ jointly using the CL algorithm discussed in Section~\ref{sec:algo} simply by setting $\boldsymbol{D}^{*} = \boldsymbol{D} + \sum_{t = 1}^{T} (\boldsymbol{Y}^{(t)}\boldsymbol{Y}^{(t)'})/\exp(X_t)$. This allows us to easily embed a sparse precision matrix $\boldsymbol{K}$ and mix over the class of GGMs in any hierarchical Bayesian model that involves a standard Wishart distribution. \subsection{The data}\label{sec:data} To apply our model and algorithm we randomly chose 20 stocks from the S\&P 500. These stocks were: Aetna Inc. (AET), CA Inc. (CA), Campbell Soup (CPB), CVS Caremark Corp. (CVS), Family Dollar Stores (FDO), Honeywell Int'l Inc. (HON), Hudson City Bancorp (HCBK), JDS Uniphase Corp. (JDSU), Johnson Controls (JCI), Morgan Stanley (MS), PPG Industries (PPG), Principal Financial Group (PFG), Sara Lee Corp. (SLE), Sempra Energy (SRE), Southern Co. (SO), Supervalu Inc. (SVU), Thermo Fisher Scientific (TMO), Wal-Mart Stores (WMT), Walt Disney Co. (DIS), Wellpoint Inc. (WLP).\\ \indent To evaluate the flexibility of our model, we chose a time period during which markets experienced both high and low volatility. We use the time period from October 31, 2001 to May 21, 2008 as our training period to fit our model and make predictions for the period from May 22, 2008 to October 23, 2009. The time periods consist of 1650 and 360 trading days, respectively. Figure~\ref{fig:square_returns} shows the mean of the squared returns for these 20 securities over the entire dataset. 
The extreme volatility present in the markets after the collapse of Lehman Brothers in September 2008 is readily evident, showing that a homoskedasticity assumption is untenable for these data. \begin{figure}
\caption{Mean of the squared returns taken over all 20 stocks during
the entire time period from October 31, 2001 to October 23, 2009.}
\label{fig:square_returns}
\end{figure} \subsection{Predictive Performance Results} We assess the relative performance of the stochastic volatility model we develop in Section~\ref{sec:model} versus a method that embeds GGMs but does not have a stochastic volatility component. For each day $t+1$ in the forecast period we run our stochastic volatility model from the beginning of the training period until the previous day $t$ to obtain estimates for the model parameters. We run the algorithm for $60,000$ iterations, discarding the first $10,000$ as burn-in (keep in mind that in one iteration of the algorithm all edges are evaluated). Using the posterior sample, we obtain the posterior predictive distribution of $\boldsymbol{Y}^{(t + 1)}$.\\ \indent Figure~\ref{fig:vol_means} shows the mean of the posterior predictive distribution of the volatility component $X_{t + 1}$ using the returns up to time $t$, which drives the predictive distribution of $\boldsymbol{Y}^{(t + 1)}$ for each day in the forecast period. Comparing Figures~\ref{fig:square_returns} and \ref{fig:vol_means}, we see that our model reflects the time-dependent volatility well. At the beginning, when the market is quiet, $X_{t + 1}$ takes lower values, mostly between 0 and 1. After the shock of the financial crisis, market volatility rises dramatically, which is reflected by significantly higher values of $X_{t + 1}$. Months later, towards the end of the forecast period, the market has cooled down and the terms $X_{t + 1}$ reflect this.\\ \begin{figure}
\caption{Means of posterior predictive distribution for the volatility component $X_{t +1}$ from May 22,
2008 to October 23, 2009.}
\label{fig:vol_means}
\end{figure} The two methods we compare both return full predictive distributions. By construction, these predictive distributions have the same mean and median since returns are assumed mean-zero. Judging their performance therefore requires assessing the entire predictive distribution. We assess their performance using the energy score introduced by \cite{gneiting_raftery_2007}. \\ \indent The energy score is a proper scoring rule, which is a multivariate generalization of the continuous ranked probability score. It is defined as \begin{equation} \label{eq:69}
ES(F,\mathbf{x})= \mathbf{E}_F\|\mathbf{X}-\mathbf{x}\|_{2} - \frac12 \mathbf{E}_F \|\mathbf{X}-\mathbf{X}'\|_{2} \end{equation} where $F$ is our predictive distribution for a vector-valued quantity, $\mathbf{X}$ and $\mathbf{X}'$ are independent random variables with distribution $F$, and $\mathbf{x}$ is the realization.\\ \indent For each day in the forecast period, we compute the energy score for the predictive distribution returned by the two methods considered. Figure~\ref{fig:es_diff} shows the difference between the stochastic volatility model developed in Section~\ref{sec:model} and the model that incorporates GGM uncertainty but holds volatility fixed. As we can see in Figure~\ref{fig:es_diff}, between May and August 2008, there is no clear difference between the two approaches. However, after the financial turbulence in September 2008, the stochastic volatility model outperforms the fixed volatility model by a considerable margin. During almost every day in the turbulent period, the energy scores are lower under the stochastic volatility model. After the market turbulence subsides, the two models return to performing equally well again.\\ \indent This short example shows the utility of the computational methodology developed in this paper. The model is, in many respects, simple, but a non-trivial deviation from the standard iid sampling framework to which the GGM was initially relegated. By being able to embed the GGM in more complicated hierarchical frameworks, we are able to address difficult sampling schemes while simultaneously incorporating sparsity in the estimate of joint distributions. \begin{figure}
\caption{Difference of energy score from the predictive distribution of the model with stochastic volatility versus the model with fixed volatility. Values below zero indicate the stochastic volatility model outperformed the fixed volatility model.}
\label{fig:es_diff}
\end{figure}
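The expectations in the energy score (\ref{eq:69}) are estimated by Monte Carlo from the posterior predictive draws; a minimal sketch (our own illustration, using the Euclidean norm):

```python
import math

def euclid(u, v):
    """Euclidean distance between two equal-length vectors."""
    return math.sqrt(sum((ui - vi) ** 2 for ui, vi in zip(u, v)))

def energy_score(samples, x):
    """Monte Carlo estimate of ES(F, x) = E_F||X - x|| - 0.5 * E_F||X - X'||
    from draws `samples` of the predictive distribution F."""
    S = len(samples)
    term1 = sum(euclid(s, x) for s in samples) / S
    term2 = sum(euclid(samples[i], samples[j])
                for i in range(S) for j in range(S)) / (S * S)
    return term1 - 0.5 * term2
```

Lower scores indicate a predictive distribution that is better calibrated to the realization $\mathbf{x}$.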
\section{Conclusions}\label{sec:conclude} \indent We have synthesized a number of recent results related to the G-Wishart distribution. This has allowed for an algorithm that does not rely on RJ methods, obviates the need for expensive and numerically unstable MC approximation of prior normalizing constants and does so with minimal computational effort. The improvement in computation time is sufficient that at each stage of the algorithm, all edges may be evaluated for inclusion or exclusion in the graphical model. This algorithm allows the GGM to be embedded in more sophisticated hierarchical Bayesian models and opens the possibility of replacing standard Wishart distributions with G-Wishart variates, leveraging the improvement in predictive performance offered by sparse precision matrices.\\ \indent The applied example shows the usefulness of this combination. We are able to sparsely model the interactions in financial assets while simultaneously addressing the issues of stochastic volatility prevalent in markets undergoing turbulence. The method is able to characterize the distribution of asset returns during periods of rapidly fluctuating volatility much better than standard iid frameworks.\\ \indent The stochastic volatility model we develop remains parsimonious and several adjustments could be made. The first such development would be to replace the univariate term $X_t$ with a multivariate factor that allows the variance of each asset to follow its own path, while potentially tying the evolution of these factors together with a separate GGM. Furthermore, employing some form of the iHMM framework of \citet{rodriguez_et_2011} could allow for the matrix $\boldsymbol{K}$ to change throughout the period as well. Such developments will be considered in future work.\\
\section*{Appendix A: Determination of the CBF using $\bs{\Phi}^{-f}$} Consider the ratio $$
\frac{pr(\mc{D},\bs{\Phi}^{-f}|G')}{pr(\mc{D},\bs{\Phi}^{-f}|G)}. $$ We note that $$
pr(\mc{D},\bs{\Phi}^{-f}|G') = \int_{\Phi_{p-1,p}}\int_{\Phi_{pp}} pr(\mc{D},\bs{\Phi}|G')d\bs{\Phi}_{f} $$ and $$
pr(\mc{D},\bs{\Phi}^{-f}|G) = \int_{\Phi_{pp}} pr(\mc{D},\bs{\Phi}|G)d\Phi_{pp}. $$ Up to common terms, we thus have $$
pr(\mc{D},\bs{\Phi}^{-f}|G') \propto \frac{\Phi_{p-1,p-1}}{I_{G'}(\delta,\boldsymbol{D})}\int_{\Phi_{p-1,p}} \exp\left(-\frac{1}{2} D^{*}_{pp}(\Phi_{p - 1,p} + \mu)^2\right)d\Phi_{p-1,p}. $$ Recognizing the integral as the kernel of a normal distribution yields $$
pr(\mc{D},\bs{\Phi}^{-f}|G') \propto \frac{\Phi_{p-1,p-1}}{I_{G'}(\delta,\boldsymbol{D})}\left(\frac{2\pi}{D^{*}_{pp}}\right)^{1/2}. $$ Further, again up to common terms $$
pr(\mc{D},\bs{\Phi}^{-f}|G) \propto \frac{1}{I_{G}(\delta,\boldsymbol{D})}\exp\left(-\frac{1}{2} D^{*}_{pp}(\phi_{0} + \mu)^2\right) $$ and thus $$
\frac{pr(\mc{D},\bs{\Phi}^{-f}|G')}{pr(\mc{D},\bs{\Phi}^{-f}|G)} = N(\bs{\Phi}^{-f},\boldsymbol{D}^{*})\frac{I_G(\delta,\boldsymbol{D})}{I_{G'}(\delta,\boldsymbol{D})}. $$
\end{document} | arXiv |
Eta-invariant
From Encyclopedia of Mathematics
Let $A$ be an unbounded self-adjoint operator with only pure point spectrum (cf. also Spectrum of an operator). Let $a_n$ be the eigenvalues of $A$, counted with multiplicity. If $A$ is a first-order elliptic differential operator on a compact manifold, then $|a_n| \rightarrow \infty$ and the series
\begin{equation*} \eta_A(s) = \sum_{a_n \neq 0} \frac{a_n}{|a_n|} |a_n|^{-s} \end{equation*}
is convergent for $\operatorname{Re}(s)$ large enough. Moreover, $\eta_A$ has a meromorphic continuation to the complex plane, with $s = 0$ a regular value (cf. also Analytic continuation). The value of $\eta_A$ at $0$ is called the eta-invariant of $A$, and was introduced by M.F. Atiyah, V.K. Patodi and I.M. Singer in the foundational paper [a1] as a correction term for an index theorem on manifolds with boundary (cf. also Index formulas). For example, in that paper, they prove that the signature $\operatorname{sign}(M)$ of a compact, oriented, $4k$-dimensional Riemannian manifold with boundary $M$, whose metric is a product metric near the boundary, is
\begin{equation*} \operatorname{sign}(M) = \int_M \mathcal{L}(M,g) - \eta_D(0), \end{equation*}
where $D = \pm(*d - d*)$ is the signature operator on the boundary and $\mathcal{L}(M,g)$ is the Hirzebruch $L$-polynomial associated to the Riemannian metric on $M$.
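As a concrete illustration (a standard computation, not taken from [a1]), consider an operator on the circle with spectrum $\{n + a : n \in \mathbf{Z}\}$ for a fixed $0 < a < 1$, so that no eigenvalue vanishes. Grouping positive and negative eigenvalues gives

\begin{equation*} \eta(s) = \sum_{n \geq 0} (n + a)^{-s} - \sum_{n \geq 1} (n - a)^{-s} = \zeta(s,a) - \zeta(s,1-a), \end{equation*}

where $\zeta(s,q)$ denotes the Hurwitz zeta-function. Since $\zeta(0,q) = 1/2 - q$, one obtains $\eta(0) = (1/2 - a) - (1/2 - (1-a)) = 1 - 2a$, a direct measure of the asymmetry of the spectrum introduced by the shift $a$.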
The definition of the eta-invariant was generalized by J.-M. Bismut and J. Cheeger in [a2], where they introduced the eta-form of a family of elliptic operators as above. It can be used to recover the eta-invariant of operators in the family.
[a1] M.F. Atiyah, V.K. Patodi, I.M. Singer, "Spectral asymmetry and Riemannian Geometry" Math. Proc. Cambridge Philos. Soc. , 77 (1975) pp. 43–69
[a2] J.-M. Bismut, J. Cheeger, "Eta invariants and their adiabatic limits" J. Amer. Math. Soc. , 2 : 1 (1989) pp. 33–77
Eta-invariant. Encyclopedia of Mathematics. URL: http://encyclopediaofmath.org/index.php?title=Eta-invariant&oldid=50253
This article was adapted from an original article by V. Nistor (originator), which appeared in Encyclopedia of Mathematics - ISBN 1402006098.
\begin{document}
\begin{abstract} Let $\operatorname{Bd}_H(\ensuremath{\mathbb R^m})$ be the hyperspace of nonempty bounded closed subsets
of Euclidean space $\ensuremath{\mathbb R^m}$ endowed with the Hausdorff metric. It is well known that $\operatorname{Bd}_H(\ensuremath{\mathbb R^m})$ is homeomorphic to
the Hilbert cube minus a point. We prove that natural dense subspaces of $\operatorname{Bd}_H(\ensuremath{\mathbb R^m})$
of all nowhere dense closed sets, of all perfect sets, of all Cantor sets
and of all Lebesgue measure zero sets are homeomorphic to
the Hilbert space $\ell_2$. For each $0 \leqslant k < m$, let $$\nu^m_k
= \{x = (x_i)_{i=1}^m \in \ensuremath{\mathbb R^m} : x_i \in \ensuremath{\mathbb R}\setminus\mathbb{Q}
\text{ except for at most $k$ many $i$}\},$$ where $\nu^{2k+1}_k$ is the $k$-dimensional N{\"o}beling space
and $\nu^m_0 = (\ensuremath{\mathbb R}\setminus\mathbb{Q})^m$. It is also proved that the spaces $\operatorname{Bd}_H(\nu^1_0)$
and $\operatorname{Bd}_H(\nu^m_k)$, $0\leqslant k<m-1$, are homeomorphic to $\ell_2$. Moreover, we investigate the hyperspace $\operatorname{Cld}_H(\ensuremath{\mathbb R})$
of all nonempty closed subsets of the real line $\ensuremath{\mathbb R}$
with the Hausdorff (infinite-valued) metric. It is shown that a nonseparable component $\mathcal H$ of $\operatorname{Cld}_H(\ensuremath{\mathbb R})$ is homeomorphic
to the Hilbert space $\ell_2(2^{\aleph_0})$ of weight $2^{\aleph_0}$
in the case where $\mathcal H \not\ni \ensuremath{\mathbb R}, [0,\infty), (-\infty,0]$. \end{abstract}
\title{Hausdorff hyperspaces of $\Rm$ and their dense subspaces}
\section*{Introduction}
In this paper,
we consider metric spaces and their hyperspaces
endowed with the Hausdorff metric. Specifically, given a metric space $X = \pair Xd$,
we shall denote by $\operatorname{Cld}(X)$ and $\operatorname{Bd}(X)$ the hyperspaces
consisting of all nonempty closed sets and
of all nonempty bounded closed sets in $X$ respectively
and by $d_H$ the Hausdorff metric,
which is infinite-valued on $\operatorname{Cld}(X)$ if $X$ is unbounded. We shall sometimes write $\operatorname{Cld}_H(X)$ or $\operatorname{Bd}_H(X)$ to emphasize
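For the reader's convenience, recall that for $A, B \in \operatorname{Cld}(X)$ the Hausdorff metric is given by
$$d_H(A,B) = \max\Bigl\{\,\sup_{a \in A} d(a,B),\ \sup_{b \in B} d(b,A)\Bigr\},$$
where $d(x,S) = \inf_{s \in S} d(x,s)$; both suprema are finite whenever $A$ and $B$ are bounded.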
the fact that we consider this space with the Hausdorff metric topology.
A theorem of Antosiewicz and Cellina \cite{AnCe} states that,
given a convex set $X$ in a normed linear space,
every continuous multivalued map $\map \varphi{Y}{\operatorname{Bd}_H(X)}$
from a closed subset $Y$ of a metric space $Z$
can be extended to a continuous map $\map{\overline \varphi}Z{\operatorname{Bd}_H(X)}$.
this theorem says that, under the above assumptions,
$\operatorname{Bd}_H(X)$ is an absolute extensor or an absolute retract
(in the class of metric spaces). In \cite{CK}, it is proved that
the above result is still valid when $X$ is replaced
by a dense subset of a convex set in a normed linear space. More generally, $\operatorname{Bd}_H(X)$ is an absolute retract,
whenever the metric on $X$ is {\em almost convex}
(see \S\ref{whfijfpiapfi} for the definition). This condition was further weakened in \cite{KuSaY},
and the weakened condition turned out to be both necessary and sufficient,
as shown by Banakh and Voytsitskyy \cite{BaVo}. In the latter paper, several equivalent conditions are given,
which are too technical to mention them here. We refer to \cite{BaVo} for the details.
It is a natural question whether $\operatorname{Bd}_H(X)$ and some of its natural subspaces
are homeomorphic to standard spaces such as the Hilbert cube or the Hilbert space. Since the Hausdorff metric topology coincides with the Vietoris topology
on the hyperspace $\exp(X)$ of nonempty compact sets,
the above question has already been answered, by applying known results,
in the case where bounded closed sets in $X$ are compact. Among the known results,
let us mention the theorem of Curtis and Schori \cite{CuScho}
(cf.\ \cite[Chapter 8]{vMill}),
saying that $\exp(X)$ is homeomorphic to ($\approx$)
the Hilbert cube $\ensuremath{\operatorname{Q}} = [-1,1]^\omega$ if and only if $X$ is a Peano continuum,
that is, it is compact, connected and locally connected. Later, Curtis \cite{Curtis} characterized non-compact metric spaces $X$
for which $\exp(X)$ is homeomorphic to
the Hilbert cube minus a point $\ensuremath{\Q\setminus 0}$ ($= \ensuremath{\operatorname{Q}}\setminus\{0\}$)
or the pseudo-interior $\ensuremath{\operatorname{s}} = (-1,1)^\omega$ of $\ensuremath{\operatorname{Q}}$.\footnote
{It is well known that $\ensuremath{\operatorname{s}}$ is homeomorphic to
the separable Hilbert space $\ell_2$.} In particular, $\operatorname{Bd}_H(\ensuremath{\mathbb R^m}) = \exp(\ensuremath{\mathbb R^m}) \approx \ensuremath{\Q\setminus 0}$. For more information concerning Vietoris hyperspaces,
we refer to the book \cite{IlNa}.
The aim of this work is to study topological types
of some of the natural subspaces of the Hausdorff hyperspace. We consider the following subspaces of $\operatorname{Bd}_H(X)$: \begin{itemize} \item
$\operatorname{Nwd}(X)$ --- all nowhere dense closed sets; \item
$\operatorname{Perf}(X)$ --- all perfect sets;\footnote
{I.e., completely metrizable closed sets which are dense in themselves.} \item
$\operatorname{Cantor}(X)$ --- all compact sets homeomorphic to the Cantor set. \end{itemize} In case $X = \ensuremath{\mathbb R^m}$ with the standard metric, we can also consider the following subspace: \begin{itemize} \item
$\mathfrak N(\ensuremath{\mathbb R^m})$ --- all closed sets of the Lebesgue measure zero. \end{itemize} We show that, in case $X=\ensuremath{\mathbb R^m}$,
the above spaces are homeomorphic to the separable Hilbert space $\ell_2$. Actually, we prove that if $\mathcal{F}$ is one of the above spaces
then the pair $\pair{\operatorname{Bd}_H(\ensuremath{\mathbb R^m})}{\mathcal{F}}$ is homeomorphic to
$\pair\ensuremath{\Q\setminus 0}\ensuremath{\pseudoint\setminus0}$.
The completion of a metric space $X = \pair Xd$
is denoted by $\pair{\tilde X}d$. Then $\operatorname{Bd}_H(X,d)$ can be identified with the subspace of $\operatorname{Bd}_H(\tilde X,d)$,
via the isometric embedding $A\mapsto \operatorname{cl}_{\tilde X}A$. Thus we shall often write $\operatorname{Bd}(X,d)\subseteq \operatorname{Bd}(\tilde X,d)$,
having this identification in mind. In this case, $\operatorname{Bd}(\tilde X,d)$ is the completion of $\operatorname{Bd}(X,d)$. For this reason, we also consider a dense subspace $D$
of a metric space $X = \pair Xd$. For each $0 \leqslant k < m$, let $$\nu^m_k
= \{x = (x_i)_{i=1}^m \in \ensuremath{\mathbb R^m} : x_i \in \ensuremath{\mathbb R}\setminus\mathbb{Q}
\text{ except for at most $k$ many $i$}\},$$
which is the universal space for completely metrizable subspaces
in $\ensuremath{\mathbb R^m}$ of $\dim \leqslant k$. In case $2k + 1 < m$,
$\nu^m_k$ is homeomorphic to the $k$-dimensional N{\"o}beling space
$\nu^{2k+1}_k$,
which is the universal space for all separable completely metrizable spaces of dimension $\leqslant k$. Note that $\nu^m_0 = (\ensuremath{\mathbb R}\setminus\mathbb{Q})^m\approx\ensuremath{\mathbb R}\setminus\mathbb{Q}$ and, for instance, $\nu^2_1 = \ensuremath{\mathbb R}^2\setminus\mathbb{Q}^2$. We show that the pairs $\pair{\operatorname{Bd}(\ensuremath{\mathbb R})}{\operatorname{Bd}(\ensuremath{\mathbb R}\setminus\mathbb{Q})}$
and $\pair{\operatorname{Bd}(\ensuremath{\mathbb R^m})}{\operatorname{Bd}(\nu^m_k)}$, $0 \leqslant k < m-1$,
are homeomorphic to $\pair\ensuremath{\Q\setminus 0}\ensuremath{\pseudoint\setminus0}$,
so we have $\operatorname{Bd}_H(\nu^m_k) \approx \ell_2$
if $\pair mk = \pair 10$ or $0 \leqslant k < m-1$.
We also study the space $\operatorname{Cld}_H(\ensuremath{\mathbb R})$. It is very different from the hyperspace $\exp(\ensuremath{\mathbb R})$. It is not hard to see that $\operatorname{Cld}_H(\ensuremath{\mathbb R})$ has $2^{\aleph_0}$ many components, that $\operatorname{Bd}(\ensuremath{\mathbb R})$ is the only separable one, and that every other component has weight $2^{\aleph_0}$. We show that a nonseparable component $\mathcal H$ of $\operatorname{Cld}_H(\ensuremath{\mathbb R})$ is homeomorphic to the Hilbert space $\ell_2(2^{\aleph_0})$ of weight $2^{\aleph_0}$ in the case where $\mathcal H \not\ni \ensuremath{\mathbb R}, [0,\infty), (-\infty,0]$. This is a partial answer (in the case $n = 1$) to Problem 4 in \cite{KuSaY}.
\section{Preliminaries}
We use standard notation concerning sets and topology. For example, we denote by $\omega$ the set of all natural numbers. Given a set $X$, we denote by $[X]^{<\omega}$ the family of all finite subsets of $X$.
Given a metric space $X = \pair Xd$ and a set $A\subseteq X$, we denote by $\operatorname{B}(A,r)$ and $\overline{\bal}(A,r)$ the open and the closed $r$-balls centered at $A$, that is, \begin{gather*} \operatorname{B}(A,r)=\setof{x\in X}{\operatorname{dist}(x,A)<r} \quad\text{and}\\
\overline{\bal}(A,r)=\setof{x\in X}{\operatorname{dist}(x,A)\leqslant r}. \end{gather*} The Hausdorff metric $d_H$ on $\operatorname{Cld}(X)$ is defined as follows: $$d_H(A,C) = \inf \setof{r>0}{A\subseteq\operatorname{B}(C,r)\text{ and }C\subseteq\operatorname{B}(A,r)}.$$ Here $d_H$ is an actual metric on $\operatorname{Bd}(X)$, but it takes infinite values on $\operatorname{Cld}(X)$ whenever $\pair Xd$ is unbounded. The spaces $\operatorname{Cld}_H(X)$ and $\operatorname{Bd}_H(X)$ are sometimes denoted by $\operatorname{Cld}_H(X,d)$ and $\operatorname{Bd}_H(X,d)$, to emphasize the fact that they are determined by the metric on $X$. Indeed, the metric $\varrho(x,y) = d(x,y)/(1 + d(x,y))$ induces the same topology on $X$ as $d$, but the Hausdorff metric $\varrho_H$ induces a different topology on $\operatorname{Cld}(X)$. On the other hand, the Hausdorff metric induced by the metric $\bar d(x,y) = \min\{d(x,y), 1\}$ is finite-valued and induces the same topology on $\operatorname{Cld}_H(X)$ as $d_H$; moreover, with respect to $\bar d$, the families $\operatorname{Cld}(X)$ and $\operatorname{Bd}(X)$ coincide as sets. Note that the subspace $\operatorname{Fin}(X)=\fin X\setminus\sn\emptyset$ of $\operatorname{Bd}_H(X)$ consisting of all nonempty finite subsets of $X$ is dense in $\operatorname{Bd}_H(X)$ if and only if every bounded set in $X = \pair Xd$ is totally bounded.
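For example, in $\ensuremath{\mathbb R}$ with the usual metric, $d_H(\{0\},[0,1])=1$: indeed, $\{0\}\subseteq\operatorname{B}([0,1],r)$ for every $r>0$, while $[0,1]\subseteq\operatorname{B}(\{0\},r)=(-r,r)$ exactly when $r>1$, so the infimum in the definition equals $1$.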
\begin{fact}\label{complete-separable} For a metric space $X = \pair Xd$, the following hold: \begin{romanenume} \item If $d$ is complete then $\pair{\operatorname{Bd}(X,d)}{d_H}$ is a complete metric space
and the space $\operatorname{Cld}_H(X)$ is completely metrizable. \item The space $\operatorname{Bd}_H(X,d)$ is separable
if and only if every bounded set in $X$ is totally bounded. \end{romanenume} \end{fact}
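For instance, every bounded set in $\ensuremath{\mathbb R^m}$ is totally bounded, so $\operatorname{Bd}_H(\ensuremath{\mathbb R^m})$ is a Polish space. On the other hand, the unit ball of the Hilbert space $\ell_2$ is bounded but not totally bounded, so $\operatorname{Bd}_H(\ell_2)$ is a complete but non-separable metric space.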
We use the standard notation $\exp(X)$ for the Vietoris hyperspace of nonempty compact sets in $X$. Note that $\exp(X)\subseteq\operatorname{Bd}(X)$ for every metric space $X = \pair Xd$ and it is well known that the Hausdorff metric induces the Vietoris topology on $\exp(X)$. However, if closed bounded sets of $X$ are not compact, then the space $\operatorname{Bd}_H(X)$ is very different from $\operatorname{Bd}_V(X)$ endowed with the Vietoris topology. We use the following notation: $$A^-=\setof{C\in \operatorname{Cld}(X)}{C\cap A\ne\emptyset} \quad\text{and}\quad A^+=\setof{C\in \operatorname{Cld}(X)}{C\subseteq A},$$ where $A\subseteq X$. When dealing with $\operatorname{Bd}(X)$ (or any other subspace of $\operatorname{Cld}(X)$), we still write $A^-$ and $A^+$ instead of $A^-\cap \operatorname{Bd}(X)$ and $A^+\cap \operatorname{Bd}(X)$ respectively.
In the rest of this section, we shall give preliminary results of infinite-dimensional topology. For the details, we refer to the book \cite{BRZ}. We abbreviate ``absolute neighborhood retract'' to ``ANR''.
Let $X = \pair Xd$ be a metric space. It is said that a map $\map fYX$ can be {\em approximated} by maps in a class $\mathcal{F}$ of maps if for every map $\map \alpha X{(0,1)}$ there exists a map $\map gYX$ which belongs to $\mathcal{F}$ and such that $d(f(y),g(y))<\alpha(f(y))$ for every $y\in Y$. A closed subset $A \subseteq X$ is a {\em \ensuremath{\operatorname{Z}}-set} in $X$ if the identity map $\operatorname{id}_X$ of $X$ can be approximated by maps $\map fXX$ such that $\img fX\cap A=\emptyset$. Strengthening the last condition to $\operatorname{cl}_X(\img fX)\cap A=\emptyset$, we define the notion of a {\em strong \ensuremath{\operatorname{Z}}-set}. In case $X$ is locally compact, every \ensuremath{\operatorname{Z}}-set in $X$ is a strong \ensuremath{\operatorname{Z}}-set. Moreover, it is well known that every \ensuremath{\operatorname{Z}}-set in an $\ell_2$-manifold is a strong \ensuremath{\operatorname{Z}}-set. A countable union of (strong) \ensuremath{\operatorname{Z}}-sets is called a ({\em strong}) {\em \ensuremath{\Z_\sigma}-set}. We call $X$ a ({\em strong}) {\em \ensuremath{\Z_\sigma}-space} if it is a (strong) \ensuremath{\Z_\sigma}-set in itself. An embedding $\map fXY$ is called a {\em \ensuremath{\operatorname{Z}}-embedding} if $\img fX$ is a \ensuremath{\operatorname{Z}}-set in $Y$.
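For example, each endface $W_n=\setof{x\in\ensuremath{\operatorname{Q}}}{x(n)=1}$ of the Hilbert cube $\ensuremath{\operatorname{Q}}=[-1,1]^\omega$ is a strong \ensuremath{\operatorname{Z}}-set in $\ensuremath{\operatorname{Q}}$: the map $\map{f_\varepsilon}{\ensuremath{\operatorname{Q}}}{\ensuremath{\operatorname{Q}}}$ multiplying the $n$-th coordinate by $1-\varepsilon$ and leaving the other coordinates unchanged is $\varepsilon$-close to $\operatorname{id}_{\ensuremath{\operatorname{Q}}}$, and every point of $\operatorname{cl}_{\ensuremath{\operatorname{Q}}}(\img{f_\varepsilon}{\ensuremath{\operatorname{Q}}})$ has $n$-th coordinate at most $1-\varepsilon$, hence $\operatorname{cl}_{\ensuremath{\operatorname{Q}}}(\img{f_\varepsilon}{\ensuremath{\operatorname{Q}}})\cap W_n=\emptyset$.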
It is said that $D\subseteq X$ is {\em homotopy dense} in $X$ if there exists a homotopy $\map h{X\times[0,1]}X$ such that $h_0 = \operatorname{id}$ and $\img{h_t}X\subseteq D$ for every $t > 0$, where $h_t(x)=h(x,t)$. The complement of a homotopy dense subset of $X$ is said to be {\em homotopy negligible}. If $A\subseteq X$ is a homotopy negligible closed set then $A$ is a \ensuremath{\operatorname{Z}}-set in $X$.
\begin{fact}\label{Z-set} For a closed set $A$ in an ANR $X$,
the following are equivalent: \begin{alphenume} \item
$A$ is a \ensuremath{\operatorname{Z}}-set in $X$; \item
each map $\map f{[0,1]^n}X$, $n\in\nat$,
can be approximated by maps into $X\setminus A$; \item
$A$ is homotopy negligible in $X$. \end{alphenume} \end{fact}
\begin{fact}\label{sedgfasf} Let $D$ be a homotopy dense subset of an ANR $X$. Then the following hold: \begin{romanenume} \item
$D$ is also an ANR. \item
A closed set $A\subseteq X$ is a \ensuremath{\operatorname{Z}}-set in $X$ if and only if $A\cap D$ is a \ensuremath{\operatorname{Z}}-set in $D$. \item
If $A\subseteq X$ is a strong \ensuremath{\operatorname{Z}}-set in $X$ then $A\cap D$ is a strong \ensuremath{\operatorname{Z}}-set in $D$. \end{romanenume} \end{fact}
\begin{prop}\label{wetafqwtrqf} Assume that $X$ is a homotopy dense subset of a \ensuremath{\operatorname{Q}}-manifold $M$. Then $X$ is an ANR and every \ensuremath{\operatorname{Z}}-set in $X$ is a strong \ensuremath{\operatorname{Z}}-set. Furthermore, $X$ is a strong \ensuremath{\Z_\sigma}-space if and only if $X$ is contained in a \ensuremath{\Z_\sigma}-set in $M$. \end{prop}
\begin{proof} We verify only the ``furthermore'' statement. Assume $X\subseteq\bigcup_{n\in\omega}Z_n$, where each $Z_n$ is a \ensuremath{\operatorname{Z}}-set in $M$. Then each $Z_n$ is a strong \ensuremath{\operatorname{Z}}-set in $M$, because $M$ is locally compact, and therefore by Fact \ref{sedgfasf} (iii), each $Z_n\cap X$ is a strong \ensuremath{\operatorname{Z}}-set in $X$. Conversely, if $X=\bigcup_{n\in\omega}X_n$, where each $X_n$ is a (strong) \ensuremath{\operatorname{Z}}-set in $X$, then by Fact \ref{sedgfasf} (ii), $Z_n=\operatorname{cl}_{M}X_n$ is a \ensuremath{\operatorname{Z}}-set in $M$. Clearly, $X\subseteq\bigcup_{n\in\omega}Z_n$. \end{proof}
Let $\mathcal{C}$ be a topological class of spaces,
that is, if $X$ is homeomorphic to some $Y \in \mathcal{C}$ then $X$ also belongs to $\mathcal{C}$. It is said that $\mathcal{C}$ is {\em open} (resp.\ {\em closed}) {\rm hereditary} if $X\in\mathcal{C}$ whenever $X$ is an open (resp.\ closed) subspace of some $Y\in\mathcal{C}$. A space $X$ is called {\em strongly $\mathcal{C}$-universal} if for every $Y\in\mathcal{C}$ and every closed subset $A \subseteq Y$, every map $\map fYX$ such that $f\restriction A$ is a \ensuremath{\operatorname{Z}}-embedding can be approximated by \ensuremath{\operatorname{Z}}-embeddings $\map gYX$ such that $g\restriction A=f\restriction A$. Similarly, one defines {\em $\mathcal{C}$-universality} by relaxing the above condition to the case $A=\emptyset$, that is, $X$ is {\em $\mathcal{C}$-universal} if every map $\map fYX$ of $Y\in\mathcal{C}$ can be approximated by \ensuremath{\operatorname{Z}}-embeddings.
\begin{fact}\label{univ-strong} Let $X$ be an ANR such that every \ensuremath{\operatorname{Z}}-set in $X$ is strong and let $\mathcal{C}$ be an open and closed hereditary topological class of spaces. If every open subspace $U\subseteq X$ is $\mathcal{C}$-universal then $X$ is strongly $\mathcal{C}$-universal. \end{fact}
Given a topological class $\mathcal{C}$ of spaces, we denote by $\sigma\mathcal{C}$ the class of all spaces of the form $X=\bigcup_{n\in\omega}X_n$, where each $X_n$ is closed in $X$ and $X_n \in\mathcal{C}$. Recall that $X$ is a {\em $\mathcal{C}$-absorbing space} if $X \in \sigma\mathcal{C}$ and $X$ is a strongly $\mathcal{C}$-universal ANR which is a strong \ensuremath{\Z_\sigma}-space. In case $\mathcal{C}$ is closed hereditary, we can write $X=\bigcup_{n\in\omega}X_n$, where each $X_n$ is a strong \ensuremath{\operatorname{Z}}-set in $X$ and $X_n \in \mathcal{C}$.
We shall denote by $\mathfrak{M}_0$ and $\mathfrak{M}_1$ the classes of all compact metrizable spaces and all Polish spaces\footnote{I.e., separable completely metrizable spaces.} respectively. Let $\ensuremath{\Sigma}=\ensuremath{\operatorname{Q}}\setminus\ensuremath{\operatorname{s}}$ denote the pseudo-boundary\footnote{In some articles (e.g.\ \cite{BRZ}), $\Sigma$ denotes the {\em radial interior} of \ensuremath{\operatorname{Q}}, i.e., $\Sigma=\setof{x\in \ensuremath{\operatorname{Q}}}{\sup_{n\in\omega}|x(n)|<1}$. However, there is an auto-homeomorphism of $\ensuremath{\operatorname{Q}}$ which maps the pseudo-boundary onto the radial interior.} of $\ensuremath{\operatorname{Q}}$.
\begin{fact}\label{3e4gwdfs} If $X$ is an $\mathfrak{M}_0$-absorbing homotopy dense subspace of\/ \ensuremath{\operatorname{Q}},
then $\pair \ensuremath{\operatorname{Q}} X\approx \pair \ensuremath{\operatorname{Q}}\ensuremath{\Sigma}$. In case $X\subseteq\ensuremath{\Q\setminus 0}$, $\pair{\ensuremath{\Q\setminus 0}}{X}\approx \pair{\ensuremath{\Q\setminus 0}}{\ensuremath{\Sigma}}$. \end{fact}
\begin{fact}\label{weosaijfpajf} Assume that $X$ is both a homotopy dense and a homotopy negligible subset of a Hilbert cube manifold $M$. If $X$ is $\sigma$-compact then it is a strong \ensuremath{\Z_\sigma}-space. \end{fact}
\begin{proof} Assume $X=\bigcup_{n\in\omega}K_n$, where each $K_n$ is compact. Then each $K_n$ is closed in $M$ and therefore it is a strong \ensuremath{\operatorname{Z}}-set by Fact \ref{sedgfasf} (iii). \end{proof}
\section{Borel classes of several Hausdorff hyperspaces}\label{borelclasses}
Let $\pair{\tilde X}d$ denote the completion of $\pair Xd$. We identify $\operatorname{Bd}(X,d)$ with the subspace of $\operatorname{Bd}(\tilde X,d)$,
via the isometric embedding $A\mapsto \operatorname{cl}_{\tilde X}A$. Then, $\pair{\operatorname{Bd}(\tilde X)}{d_H}$ is a completion of $\pair{\operatorname{Bd}(X)}{d_H}$. Moreover, it should be noticed that
$A\in \operatorname{Bd}(\tilde X)\setminus \operatorname{Bd}(X)$ if and only if
$A\ne \operatorname{cl}_{\tilde X}(A\cap X)$. Saint Raymond proved in \cite[Th\'eor\`eme 1]{S} that
if $X$ is the union of a Polish subset and a $\sigma$-compact subset
then $\operatorname{Bd}_H(X)$ is $F_{\sigma\delta}$
(hence Borel) in $\operatorname{Bd}_H(\tilde X)$.\footnote
{In \cite{S}, $X$ is assumed to be a subspace of a compact metric space,
but the proof is valid without this assumption.
Moreover, it is also proved in \cite[Th\'eor\`eme 6]{S} that
if $\operatorname{Bd}_H(X)$ is absolutely Borel (i.e., Borel in its completion)
then $X$ is the union of a Polish subset and a $\sigma$-compact subset.} In particular, we have the following:
\begin{prop}\label{rthsdpio} If $X = \pair Xd$ is $\sigma$-compact
then the space $\pair{\operatorname{Bd}(X)}{d_H}$ is $F_{\sigma\delta}$
in its completion $\pair{\operatorname{Bd}(\tilde X)}{d_H}$. \end{prop}
Moreover, the following can be easily obtained
by adjusting the proof of \cite[Th\'eor\`eme 1]{S}:\footnote
{A similar result was proved by Costantini \cite{Cost}
for the Wijsman topology.}
\begin{prop}\label{owteepgsdgfa} If $X = \pair Xd$ is Polish ($d$ is not necessarily complete) then the space $\pair{\operatorname{Bd}(X)}{d_H}$ is $G_\delta$ in its completion $\pair{\operatorname{Bd}(\tilde X)}{d_H}$. \end{prop}
For the readers' convenience, direct short proofs of the above two propositions are given
in the Appendix. Combining Fact \ref{complete-separable} and Proposition \ref{owteepgsdgfa},
we have the following:
\begin{wn}\label{ppojnkjiu} If $X = \pair Xd$ is Polish in which every bounded set is totally bounded, then the space $\operatorname{Bd}_H(X)$ is also Polish. \end{wn}
Concerning the spaces $\operatorname{Nwd}(X)$ and $\operatorname{Perf}(X)$,
we prove here the following:
\begin{prop}\label{wetgivwet} For every separable metric space $X$, the space $\operatorname{Nwd}(X)$ is $G_\delta$ in $\operatorname{Bd}_H(X)$. \end{prop}
\begin{proof} Let $\ciag U$ be a countable open base for $X$. For each $n\in\omega$, let $$\mathcal{F}_n=\setof{A\in\operatorname{Bd}(X)}{U_n\subseteq A}.$$ Then each $\mathcal{F}_n$ is closed in $\operatorname{Bd}_H(X)$ and $\bigcup_{n\in\omega}\mathcal{F}_n=\operatorname{Bd}(X)\setminus\operatorname{Nwd}(X)$. \end{proof}
\begin{prop}\label{oeihgwef} If $X$ is locally compact then $\operatorname{Perf}(X)$ is $G_\delta$ in $\operatorname{Bd}_H(X)$. \end{prop}
\begin{proof} Let $\ciag U$ enumerate an open base of $X$ such that $\operatorname{cl} U_n$ is compact for every $n\in\omega$. Note that, by compactness, $(\operatorname{cl} U_n)^-$ is closed in $\operatorname{Bd}_H(X,d)$. For each $n,m\in\nat$ define $$\Phi(n,m)=\setof{\pair kl\in\nat^2}{U_k\cap U_l=\emptyset,\ U_k\cup U_l\subseteq \operatorname{B}(U_n,1/m)}.$$ We claim that $$\operatorname{Bd}(X,d)\setminus\operatorname{Perf}(X)=\bigcup_{n,m\in\nat}\bigcap_{\pair kl\in\Phi(n,m)} \Bigl( (\operatorname{cl} U_n)^-\setminus(U_k^-\cap U_l^-)\Bigr).$$ The set on the right-hand side is $F_{\sigma}$, so this will finish the proof.
Note that a closed set in a Polish space is perfect if and only if it has no isolated points. If $A\in \operatorname{Bd}(X,d)\setminus\operatorname{Perf}(X)$ then there is $y\in A$ which is isolated in $A$. We can find $n,m\in\nat$ such that $y\in U_n$ and $\operatorname{B}(U_n,1/m)\cap A=\sn y$. Then $A\in (\operatorname{cl} U_n)^-$ and $A\notin U_k^-\cap U_l^-$ whenever $\pair kl\in\Phi(n,m)$.
Conversely, assume that there are $n,m\in\nat$ such that $A\in(\operatorname{cl} U_n)^-$ and $A\notin U_k^-\cap U_l^-$ for every $\pair kl\in\Phi(n,m)$. Then $A\cap \operatorname{B}(U_n,1/m)\ne\emptyset$ and the second condition says that $A\cap \operatorname{B}(U_n,1/m)$ does not contain two points, so it is a singleton. Thus $A\notin\operatorname{Perf}(X)$. \end{proof}
Replacing $(\operatorname{cl} U_n)^-$ by $\operatorname{B}(U_n,1/m)^-$ in the formula from the proof above, we obtain the following:
\begin{wn} The space $\operatorname{Perf}(X)$ is $F_{\sigma\delta}$ in\/ $\operatorname{Bd}_H(X)$ if $X$ is Polish. \end{wn}
Since $\operatorname{Cantor}(\ensuremath{\mathbb R}^m) = \operatorname{Perf}(\ensuremath{\mathbb R}^m) \cap \operatorname{Nwd}(\ensuremath{\mathbb R}^m)$, the following is a combination of Propositions \ref{wetgivwet} and \ref{oeihgwef}:
\begin{wn} The space $\operatorname{Cantor}(\ensuremath{\mathbb R}^m)$ is $G_\delta$ in\/ $\operatorname{Bd}_H(\ensuremath{\mathbb R}^m)$. \end{wn}
Now, we shall prove the following:
\begin{prop} The space $\mathfrak N(\ensuremath{\mathbb R}^m)$ is Polish. \end{prop}
\begin{proof} Let $\ciag I$ enumerate all open rational cubes (i.e. products of rational intervals) in $\ensuremath{\mathbb R^m}$. Given $k\in\nat$, we define
$$S_k= \Bigl\{s\in\fin\nat : \sum_{n\in s}|I_n|<1/k\Bigr\},$$
where $|I|$ denotes the volume of the cube $I\subseteq \ensuremath{\mathbb R^m}$. We claim that $$\mathfrak N(\ensuremath{\mathbb R}^m)=\bigcap_{k\in\nat}\bigcup_{s\in S_k}\Bigl(\bigcup_{n\in s}I_n\Bigr)^+.$$
Clearly, if $A$ belongs to the right-hand side then for each $k\in\nat$ there is $s\in S_k$ such that $A\subseteq\bigcup_{n\in s}I_n$ and $\sum_{n\in s}|I_n|<1/k$; therefore $A$ has Lebesgue measure zero.
Assume now that $A$ has Lebesgue measure zero and fix $k\in\nat$. Then $A\subseteq\bigcup_{n\in\omega}J_n$, where each $J_n$ is an open rational cube and $\sum_{n\in\omega}|J_n|<1/k$. By compactness, $A\subseteq J_0\cup\dots\cup J_{l-1}$ for some $l\in\nat$, and $\{J_0,\dots,J_{l-1}\}=\setof{I_n}{n\in s}$ for some $s\in S_k$. Thus $A\in\bigcup_{s\in S_k}(\bigcup_{n\in s}I_n)^+$. \end{proof}
\section{Almost convex metric spaces}\label{whfijfpiapfi}
Recall that a metric $d$ on $X$ is {\em almost convex} if for every $\alpha>0$, $\beta>0$ and for every $x,y\in X$ such that $d(x,y)<\alpha+\beta$, there exists $z\in X$ with $d(x,z)<\alpha$ and $d(z,y)<\beta$.
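For example, the set $\mathbb{Q}$ of rational numbers with the usual metric is almost convex (though not convex): if $x,y\in\mathbb{Q}$ and $|x-y|<\alpha+\beta$, then $|x-w|<\alpha$ and $|w-y|<\beta$ for $w=x+\frac{\alpha}{\alpha+\beta}(y-x)$, and these strict inequalities persist for any rational $z$ sufficiently close to $w$. The same argument shows that every dense subset of a convex set in a normed linear space is almost convex.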
Fix a dense set $X$ in a separable Banach space $E$. Let $d$ denote the metric on $X$ induced by the norm of $E$. Then $\pair Xd$ is an almost convex metric space and therefore by a result of \cite{CK} the space $\operatorname{Bd}(X,d)$ is an absolute retract. In case where $X$ is $G_\delta$, the space $\operatorname{Bd}(X,d)$ is completely metrizable by Proposition \ref{owteepgsdgfa}. If additionally $E$ is finite-dimensional then $\operatorname{Bd}(X,d)$ is Polish by Corollary \ref{ppojnkjiu}. In case where $X$ is $\sigma$-compact, by Proposition \ref{rthsdpio}, $\operatorname{Bd}(X,d)$ is absolutely $F_{\sigma\delta}$. It is natural to ask whether these spaces or their subspaces, discussed in \S\ref{borelclasses}, are homeomorphic to some standard spaces. Such standard spaces appear as homotopy dense subspaces of the Hilbert cube \ensuremath{\operatorname{Q}}.
Let $\operatorname{UNb}(X,d)$ denote the family of all sets of the form $\overline{\bal}(C,t)$, the closed $t$-neighborhood of $C\in\operatorname{Bd}(X,d)$, where $t>0$.
\begin{prop}\label{sdgeriphgpwo} If $\pair Xd$ is an almost convex metric space then the subspace $\operatorname{UNb}(X,d)$ is homotopy dense in $\operatorname{Bd}(X,d)$. \end{prop}
\begin{proof} Define a homotopy $\map h{\operatorname{Bd}(X,d)\times[0,1]}{\operatorname{Bd}(X,d)}$ by the formula: $$h(A,t)=\overline{\bal}(A,t).$$
It suffices to verify the continuity of $h$ with respect to the Hausdorff metric topology. It has been checked in \cite{CK} that $d_H(\overline{\bal}(A,t),\overline{\bal}(A,s))\leqslant|t-s|$. Thus we have \begin{align*} d_H(h(A,t),h(B,s))&\leqslant d_H(h(A,t),h(A,s))+d_H(h(A,s),h(B,s))\\
&\leqslant |t-s|+d_H(h(A,s),h(B,s)). \end{align*} It remains to check that $d_H(\overline{\bal}(A,s),\overline{\bal}(B,s))\leqslant d_H(A,B)$.
To complete the proof, we show the following: $$r>d_H(A,B),\ \varepsilon>0 \Longrightarrow
r+\varepsilon\geqslant d_H(\overline{\bal}(A,s),\overline{\bal}(B,s)).$$ To this end, it suffices to check that $\overline{\bal}(A,s)\subseteq \operatorname{B}(\overline{\bal}(B,s),r+\varepsilon)$; then by symmetry we shall also get $\overline{\bal}(B,s)\subseteq \operatorname{B}(\overline{\bal}(A,s),r+\varepsilon)$.
For each $x\in \overline{\bal}(A,s)$, choose $a\in A$ such that $d(x,a)<s+\varepsilon$. There is $b\in B$ such that $d(a,b)<r$. Then we have $d(x,b)<s+r+\varepsilon$. Using the almost convexity of $d$, we can find $y$ such that $d(b,y)<s$ and $d(y,x)<r+\varepsilon$. Then $y\in\operatorname{B}(B,s)$ and hence $x\in\operatorname{B}(y,r+\varepsilon)\subseteq\operatorname{B}(\overline{\bal}(B,s),r+\varepsilon)$. \end{proof}
Denote by $\operatorname{Reg}(X,d)$ the hyperspace of all nonempty bounded regularly closed subsets of a metric space $\pair Xd$. Clearly, $\operatorname{UNb}(X,d)\subseteq\operatorname{Reg}(X,d)$.
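Recall that a closed set $A$ is {\em regularly closed} if $A=\operatorname{cl}(\operatorname{int} A)$. For example, every closed ball $\overline{\bal}(x,r)$ in $\ensuremath{\mathbb R^m}$ with $r>0$ is regularly closed, whereas no nonempty finite subset of $\ensuremath{\mathbb R^m}$ is, since its interior is empty.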
\begin{wn}\label{owehfafafs} Let $\pair Xd$ be an almost convex metric space and $D\subseteq X$ a dense set. Then the spaces $\operatorname{Reg}(X,d)$ and $\operatorname{Bd}(D,d)$ are homotopy dense in $\operatorname{Bd}(X,d)$. \end{wn}
\begin{proof} Regarding $\operatorname{Bd}(D,d)\subseteq\operatorname{Bd}(X,d)$ via the embedding $A\mapsto \operatorname{cl}_X A$, we have $\operatorname{Reg}(X,d)\subseteq\operatorname{Bd}(D,d)$. This follows from the fact that $\operatorname{cl}(D\cap U)=\operatorname{cl} U$ for every open set $U\subseteq X$. Since $\operatorname{UNb}(X,d)$ is homotopy dense in $\operatorname{Bd}(X,d)$ by Proposition \ref{sdgeriphgpwo} and $\operatorname{UNb}(X,d)\subseteq\operatorname{Reg}(X,d)$, we have the result. \end{proof}
\section{Strict deformations}
Assume we are looking at certain homotopy dense subspaces of the Hilbert cube \ensuremath{\operatorname{Q}}. Let $X \supseteq X_0$ be such spaces. If $X_0 \approx \ensuremath{\Sigma}$ then, in order to conclude that $\pair\ensuremath{\operatorname{Q}} X\approx \pair\ensuremath{\operatorname{Q}}\ensuremath{\Sigma}$, it suffices to check that $X$ is a \ensuremath{\Z_\sigma}-set in \ensuremath{\operatorname{Q}}, by applying \cite[Theorem 6.6]{Chapman}. However, to see that $X_0 \approx \ensuremath{\Sigma}$, we have to check that $X_0$ is strongly $\mathfrak{M}_0$-universal. Below is a tool which simplifies this step. To formulate it, we need some extra notions concerning homotopies.
A homotopy $\map\varphi{X\times [0,1]}X$ is called a {\em strict deformation} if $\varphi_0=\operatorname{id}$ and $$\varphi(x,t)=\varphi(x',t')\land t>0\land t'>0\implies x=x'.$$ It is said that $\varphi$ {\em omits} $A\subseteq X$ if $\img\varphi{X\times(0,1]}\cap A=\emptyset$. Finally, we say that a space $X$ is {\em strictly homotopy dense} in $Y$ if $X\subseteq Y$ and there exists a strict deformation which omits $Y\setminus X$ (so in particular $X$ is homotopy dense in $Y$).
\begin{lm}\label{mbysld} For every $\ensuremath{\operatorname{Z}}$-set $A$ in a $\ensuremath{\operatorname{Q}}$-manifold $M$, there exists a strict deformation of $M$ which omits $A$. \end{lm}
\begin{proof} Find a \ensuremath{\operatorname{Z}}-embedding $\map{f_0}MM$
which is properly $2^{-2}$-homotopic to the identity
and so that $\img{f_0}M\cap A=\emptyset$. Further, find a \ensuremath{\operatorname{Z}}-embedding $\map{f_1}MM$
which is properly $2^{-3}$-homotopic to the identity
and $\img{f_1}M\cap(\img{f_0}M\cup A)=\emptyset$. Continuing this way,
we find \ensuremath{\operatorname{Z}}-embeddings $\map{f_n}MM$, $n\in\omega$,
such that $f_n$ is properly $2^{-n-2}$-homotopic to the identity and $$\img{f_n}M\cap (\img{f_{n-1}}M\cup \dots\cup \img{f_0}M\cup A)=\emptyset.$$ Then, we have proper $2^{-(n+1)}$-homotopies $\map{g^n}{M\times[0,1]}M$,
$n\in\omega$, such that $g^n_0 = f_n$ and $g^n_1 = f_{n+1}$. We can define a homotopy $\map{g}{M\times[0,1]}M$ by
$g(x,0) = x$ and $$g(x,t) = g^n(x,2 - 2^{n+1}t)
\;\text{ for $2^{-(n+1)} \leqslant t \leqslant 2^{-n}$, $n\in\omega$.}$$ Note that $g_{2^{-n}} = f_n$ for each $n\in\omega$,
each $g\restriction M\times[2^{-n-1},2^{-n}]$ is proper
and $2^{-n-1}$-close to the projection $\map{\operatorname{pr}_M}{M\times(0,2^{-n}]}M$. The continuity of $g$ at $(x,0)$ is guaranteed by the last fact. Using the strong $\mathfrak{M}_0$-universality of $M$
(see \cite[Theorem 1.1.26]{BRZ}),
we can inductively obtain $\map{h_n}{M\times[0,1]}M$, $n\in\omega$, such that \begin{enumerate} \item
$h_n\restriction M\times[2^{-n-1},1]$ is a \ensuremath{\operatorname{Z}}-embedding, \item
$h_n\restriction M\times[2^{-n},1] = h_{n-1}\restriction M\times[2^{-n},1]$, \item
$h_n\restriction M\times[0,2^{-n-1}] = g\restriction M\times[0,2^{-n-1}]$, \item
$h_n\restriction M\times[2^{-n-1},2^{-n}]$ is $2^{-n-1}$-close
to $g\restriction M\times[2^{-n-1},2^{-n}]$,
hence it is $2^{-n}$-close to $\map{\operatorname{pr}_M}{M\times[2^{-n-1},2^{-n}]}M$, \item
$\img{h_n}{M\times[2^{-n-1},1]}$ is disjoint from $A$. \end{enumerate} Finally, the limit $h = \lim_{n\to\infty} h_n$ is the desired strict deformation. \end{proof}
\begin{tw}\label{ejgrpio} Assume that $X$ is a \ensuremath{\Z_\sigma}-subset of a $\ensuremath{\operatorname{Q}}$-manifold $M$ which is strictly homotopy dense in $M$. Then $X$ is an $\mathfrak{M}_0$-absorbing space. In particular, if $M\approx\ensuremath{\operatorname{Q}}$ then $\pair MX\approx\pair\ensuremath{\operatorname{Q}}\ensuremath{\Sigma}$ and if $M\approx\ensuremath{\Q\setminus 0}$ then $\pair MX\approx \pair{\ensuremath{\Q\setminus 0}}{\ensuremath{\Sigma}}$. \end{tw}
\begin{proof} The assumption says in particular that $X$ is homotopy dense in $M$, so it follows from Proposition \ref{wetafqwtrqf} that $X$ is an ANR and a strong \ensuremath{\Z_\sigma}-space. It remains to check that $X$ is strongly $\mathfrak{M}_0$-universal. For the additional statement, we can just apply Fact \ref{3e4gwdfs}.
Fix a map $\map fAX$ of a compact metric space such that $f\restriction B$ is a \ensuremath{\operatorname{Z}}-embedding, where $B\subseteq A$ is closed. Note that every compact subset of $X$ is a \ensuremath{\operatorname{Z}}-set in $M$, hence it is a \ensuremath{\operatorname{Z}}-set in $X$ by Fact \ref{sedgfasf} (ii), so we just have to preserve $f\restriction B$, not worrying about \ensuremath{\operatorname{Z}}-sets. We assume that $A$ is endowed with the metric such that $\operatorname{diam}(A)\loe1$. Fix $\varepsilon>0$. Using the strong $\mathfrak{M}_0$-universality of $M$ (see \cite[Theorem 1.1.26]{BRZ}), we can find a \ensuremath{\operatorname{Z}}-embedding $\map gAM$ which is $\varepsilon/2$-close to $f$ and such that $\img g{A\setminus B}\cap X=\emptyset$ (here we use the fact that $X$ is a $\ensuremath{\operatorname{Z}}_{\sigma}$-set in $M$~and also that $\img fB$ is a \ensuremath{\operatorname{Z}}-set in $M$).
By Lemma \ref{mbysld}, we have a strict deformation $\map \varphi{M\times[0,1]}M$ which omits $\img fB$. Fix a metric $d$ for $M$ and choose a map $\map\gamma A{[0,1]}$ so that $\gamma^{-1}(0)=B$ and $$d(g(a),\varphi(g(a),\gamma(a)))<\varepsilon/4 \;\text{ for every $a\in A$.}$$ On the other hand, by the assumption, there is a strict deformation $\map \psi{M\times[0,1]}M$ which omits $M\setminus X$. Define $\map hAX$ by setting $$h(a)=\psi(\varphi(g(a),\gamma(a)),\delta(a)),$$ where $\map\delta A{[0,1]}$ is a map chosen so that $B=\delta^{-1}(0)$ and $$d(h(a),\varphi(g(a),\gamma(a)))<\min\{\varepsilon/4,\ \operatorname{dist}(\varphi(g(a),\gamma(a)),\img fB)\}.$$ This ensures that $h$ is $\varepsilon/2$-close to $g$ and that $h(a)\notin \img fB$ whenever $a\in A\setminus B$. Then $h$ is a map which is $\varepsilon$-close to $f$ and $\img hA\subseteq X$. Furthermore, $h\restriction B=g\restriction B=f\restriction B$. It remains to check that $h$ is one-to-one (then it is a \ensuremath{\operatorname{Z}}-embedding, since every compact set in $X$ is a \ensuremath{\operatorname{Z}}-set).
Suppose $h(a)=h(a')$. If $a,a'\in B$ then $g(a)=g(a')$ and consequently $a=a'$. When $a,a'\in A\setminus B$, since $\psi$ and $\varphi$ are strict deformations, $g(a)=g(a')$ and hence $a=a'$. In case $a\in B$ and $a'\notin B$, we have $h(a)=g(a)=f(a)\in\img fB$ but $h(a')\notin \img fB$ because $\varphi$ omits $\img fB$. Thus, this case does not occur. \end{proof}
\section{Pseudo-interiors of $\operatorname{Bd}_H(\ensuremath{\mathbb R^m})$}
Throughout this section,
$m>0$ is a fixed natural number. A particular case of a well-known theorem of Curtis \cite{Curtis} says
that $\operatorname{Bd}_H(\ensuremath{\mathbb R^m})=\exp(\ensuremath{\mathbb R^m})$ is homeomorphic to $\ensuremath{\Q\setminus 0}$. We shall consider the standard (convex) Euclidean metric $d$ on $\ensuremath{\mathbb R^m}$. In this section, we investigate various $G_\delta$ subspaces of $\operatorname{Bd}_H(\ensuremath{\mathbb R^m})$. The main result of this section is the following:
\begin{tw}\label{pint-hyperspace} Let $\mathcal{F} \subseteq \operatorname{Bd}_H(\ensuremath{\mathbb R^m})$ be one of the subspaces below: $$\operatorname{Nwd}(\ensuremath{\mathbb R^m}),\ \operatorname{Perf}(\ensuremath{\mathbb R^m}),\ \operatorname{Cantor}(\ensuremath{\mathbb R^m}),\ \mathfrak N(\ensuremath{\mathbb R^m}),\ \operatorname{Bd}(D),$$ where $D$ is a dense $G_\delta$ set in $\ensuremath{\mathbb R^m}$ such that\/
$\ensuremath{\mathbb R^m} \setminus D$ is also dense in $\ensuremath{\mathbb R^m}$ and
in case $m > 1$ it is assumed that $D = \img pD \times \ensuremath{\mathbb R}$,
where $p : \ensuremath{\mathbb R^m} \to \ensuremath{\mathbb R}^{m-1}$ is the projection onto the first $m-1$ coordinates. Then the pair\/ $\pair{\operatorname{Bd}(\ensuremath{\mathbb R^m})}{\mathcal{F}}$ is homeomorphic to
$\pair\ensuremath{\Q\setminus 0}\ensuremath{\pseudoint\setminus0}$. \end{tw}
Applying Theorem \ref{pint-hyperspace} above, we have
\begin{wn} Suppose $\pair mk = \pair 10$ or $0 \leqslant k < m - 1$. Then, $$\pair{\operatorname{Bd}(\ensuremath{\mathbb R^m})}{\operatorname{Bd}(\nu^m_k)} \approx \pair\ensuremath{\Q\setminus 0}\ensuremath{\pseudoint\setminus0}.$$ Consequently, $\operatorname{Bd}_H(\nu^m_k) \approx \ell_2$. \end{wn}
\begin{proof} As a direct consequence of Theorem \ref{pint-hyperspace}, we have $$\pair{\operatorname{Bd}(\ensuremath{\mathbb R})}{\operatorname{Bd}(\nu^1_0)} = \pair{\operatorname{Bd}(\ensuremath{\mathbb R})}{\operatorname{Bd}(\ensuremath{\mathbb R}\setminus\mathbb{Q})}
\approx \pair\ensuremath{\Q\setminus 0}\ensuremath{\pseudoint\setminus0}.$$ For each $0 \leqslant k < m - 1$,
observe that
$\ensuremath{\mathbb R^m} \setminus (\nu^{m-1}_k\times\ensuremath{\mathbb R})
= (\ensuremath{\mathbb R}^{m-1} \setminus \nu^{m-1}_k)\times\ensuremath{\mathbb R}
\subseteq \ensuremath{\mathbb R^m} \setminus \nu^m_k$. Thus, it follows that $$\operatorname{Bd}(\ensuremath{\mathbb R^m}) \setminus \operatorname{Bd}(\nu^{m-1}_k\times\ensuremath{\mathbb R})
\subseteq \operatorname{Bd}(\ensuremath{\mathbb R^m}) \setminus \operatorname{Bd}(\nu^m_k).$$ By Proposition \ref{owteepgsdgfa} and Corollary \ref{owehfafafs},
$\operatorname{Bd}(\nu^m_k)$ is a homotopy dense $G_\delta$ set in $\operatorname{Bd}_H(\ensuremath{\mathbb R^m})$,
which implies that $\operatorname{Bd}(\ensuremath{\mathbb R^m}) \setminus \operatorname{Bd}(\nu^m_k)$
is a \ensuremath{\Z_\sigma}-set in $\operatorname{Bd}(\ensuremath{\mathbb R^m})$. On the other hand,
we can apply Theorem \ref{pint-hyperspace} to obtain $$\pair{\operatorname{Bd}(\ensuremath{\mathbb R^m})}{\operatorname{Bd}(\ensuremath{\mathbb R^m})\setminus\operatorname{Bd}(\nu^{m-1}_k\times\ensuremath{\mathbb R})}
\approx \pair{\ensuremath{\Q\setminus 0}}{\ensuremath{\Sigma}}.$$ Then, it follows from Theorem 6.6 in \cite{Chapman} that $$\pair{\operatorname{Bd}(\ensuremath{\mathbb R^m})}{\operatorname{Bd}(\ensuremath{\mathbb R^m})\setminus\operatorname{Bd}(\nu^m_k)}
\approx \pair\ensuremath{\Q\setminus 0}\ensuremath{\Sigma}.$$ Thus, we have the result. \end{proof}
The conclusion of Theorem \ref{pint-hyperspace} is equivalent to $$\pair{\operatorname{Bd}_H(\ensuremath{\mathbb R^m})}{\operatorname{Bd}_H(\ensuremath{\mathbb R^m})\setminus\mathcal{F}} \approx \pair\ensuremath{\Q\setminus 0}\ensuremath{\Sigma}.$$ We saw in \S\ref{borelclasses} that
the subspace $\mathcal{F} \subseteq \operatorname{Bd}(\ensuremath{\mathbb R^m})$ in Theorem \ref{pint-hyperspace}
is $G_\delta$, that is,
$\operatorname{Bd}_H(\ensuremath{\mathbb R^m})\setminus\mathcal{F}$ is $F_\sigma$ in $\operatorname{Bd}_H(\ensuremath{\mathbb R^m})$. If $\mathcal{F}$ contains a homotopy dense subset of $\operatorname{Bd}_H(\ensuremath{\mathbb R^m})$
then the complement $\operatorname{Bd}_H(\ensuremath{\mathbb R^m})\setminus\mathcal{F}$ is a \ensuremath{\Z_\sigma}-set. Thus, in order to apply Theorem \ref{ejgrpio} to obtain the result,
it suffices to show that
$\mathcal{F}$ contains a homotopy dense subset of $\operatorname{Bd}_H(\ensuremath{\mathbb R^m})$
and the complement $\operatorname{Bd}_H(\ensuremath{\mathbb R^m})\setminus\mathcal{F}$ contains
a strictly homotopy dense subset of $\operatorname{Bd}_H(\ensuremath{\mathbb R^m})$. Observe that $$\operatorname{Fin}(\ensuremath{\mathbb R^m}) \subseteq \mathfrak N(\ensuremath{\mathbb R^m}) \subseteq \operatorname{Nwd}(\ensuremath{\mathbb R^m}) \quad\text{and}\quad
\operatorname{Cantor}(\ensuremath{\mathbb R^m})\subseteq\operatorname{Perf}(\ensuremath{\mathbb R^m}).$$
As a special case of a well known result
due to Curtis and Nguyen To Nhu \cite{CuNhu},
we have $$\pair{\operatorname{Bd}_H(\ensuremath{\mathbb R^m})}{\operatorname{Fin}(\ensuremath{\mathbb R^m})} = \pair{\exp(\ensuremath{\mathbb R^m})}{\operatorname{Fin}(\ensuremath{\mathbb R^m})}
\approx\pair\ensuremath{\Q\setminus 0}{\ensuremath{\operatorname{Q}}_f\setminus0},$$ where $\ensuremath{\operatorname{Q}}_f$ denotes the subspace of $\ensuremath{\operatorname{Q}}$
consisting of all eventually zero sequences,
which is homotopy dense in $\ensuremath{\operatorname{Q}}$. This fact implies the following:
\begin{lm}\label{fin-h-dense} The subspace $\operatorname{Fin}(\ensuremath{\mathbb R^m})$ is homotopy dense in $\operatorname{Bd}_H(\ensuremath{\mathbb R^m})$. \end{lm}
Using Lemma \ref{fin-h-dense} above, we can easily show the following:
\begin{lm} The space $\operatorname{Cantor}(\ensuremath{\mathbb R^m})$ is homotopy dense in $\operatorname{Bd}_H(\ensuremath{\mathbb R^m})$. \end{lm}
\begin{proof} Let $h$ be a homotopy of $\operatorname{Bd}_H(\ensuremath{\mathbb R^m})$ which witnesses
that $\operatorname{Fin}(\ensuremath{\mathbb R^m})$ is homotopy dense,
i.e., $h(A,t)$ is a finite set for every $t > 0$. Choose a Cantor set $C \subseteq [0,1]^m$ with $0 \in C$
and define a homotopy $\map\varphi{\operatorname{Bd}_H(\ensuremath{\mathbb R^m})\times[0,1]}{\operatorname{Bd}_H(\ensuremath{\mathbb R^m})}$ by $$\varphi(A,t) = h(A,t) + tC.$$ Then $\varphi_0 = \operatorname{id}$ and $\varphi(A,t)\in\operatorname{Cantor}(\ensuremath{\mathbb R^m})$ for every $t>0$
because a finite union of Cantor sets is a Cantor set. \end{proof}
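The continuity of $\varphi$ at $t = 0$ in the proof above follows from a direct estimate; since $0 \in C \subseteq [0,1]^m$ and the Hausdorff metric is translation-invariant, a routine check gives:

```latex
% 0 \in C implies h(A,t) \subseteq h(A,t) + tC, and every point of
% h(A,t) + tC lies within t*diam([0,1]^m) = t*sqrt(m) of h(A,t), so
\[
  d_H\bigl(\varphi(A,t),\, h(A,t)\bigr)
  = d_H\bigl(h(A,t) + tC,\, h(A,t)\bigr)
  \leqslant t \sup_{c \in C} \|c\|
  \leqslant t\sqrt{m}
  \ \xrightarrow[t \to 0]{}\ 0 .
\]
```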
Concerning the space $\operatorname{Bd}(D)$ in Theorem \ref{pint-hyperspace},
we have shown in Corollary \ref{owehfafafs} that
it is homotopy dense in $\operatorname{Bd}_H(\ensuremath{\mathbb R^m})$. Thus, to complete the proof of Theorem \ref{pint-hyperspace},
it remains to show the following:
\begin{lm}\label{strict-homotopy} Under the same assumption as Theorem \ref{pint-hyperspace},
each of the following spaces
is strictly homotopy dense in $\operatorname{Bd}_H(\ensuremath{\mathbb R^m})$: $$\operatorname{Bd}(\ensuremath{\mathbb R^m}) \setminus \operatorname{Nwd}(\ensuremath{\mathbb R^m}),\ \operatorname{Bd}(\ensuremath{\mathbb R^m}) \setminus \operatorname{Perf}(\ensuremath{\mathbb R^m}),\
\operatorname{Bd}(\ensuremath{\mathbb R^m}) \setminus \operatorname{Bd}(D).$$ \end{lm}
First, we show the following lemma,
which also gives a direct proof of Lemma \ref{fin-h-dense}:
\begin{lm}\label{sdegfaqqfas} For $D \subseteq \ensuremath{\mathbb R^m}$,
if $\ensuremath{\mathbb R^m} \setminus D$ is dense in $\ensuremath{\mathbb R^m}$
then $\operatorname{Fin}(\ensuremath{\mathbb R^m}) \setminus \operatorname{Bd}(D)$ is homotopy dense in $\operatorname{Bd}_H(\ensuremath{\mathbb R^m})$. \end{lm}
\begin{proof} Let $\mathcal H = \operatorname{Fin}(\ensuremath{\mathbb R^m}) \setminus \operatorname{Bd}(D)$, that is,
$\mathcal H$ consists of all nonempty finite sets $A \subseteq \ensuremath{\mathbb R^m}$
such that $A \setminus D \ne\emptyset$. Then $\mathcal H$ is dense in $\operatorname{Bd}_H(\ensuremath{\mathbb R^m})$. Moreover, $\mathcal H$ is closed under finite unions,
i.e., $A \cup B \in \mathcal H$ whenever $A, B \in\mathcal H$. Recall that $\pair{\operatorname{Bd}_H(\ensuremath{\mathbb R^m})}\cup$ is a Lawson semilattice
(see \cite{Lawson}), that is,
the union operator $\pair AB \mapsto A \cup B$ is continuous
and $\operatorname{Bd}_H(\ensuremath{\mathbb R^m})$ has an open base consisting of subsemilattices;
namely, every open ball with respect to the Hausdorff metric
is a subsemilattice of $\pair{\operatorname{Bd}_H(\ensuremath{\mathbb R^m})}\cup$. By virtue of \cite[Theorem 5.1]{KSY},
it suffices to show that $\mathcal H$ is relatively $LC^0$ in $\operatorname{Bd}_H(\ensuremath{\mathbb R^m})$. Recall that a subspace $Y$ of a space $X$ is {\em relatively $LC^0$ in} $X$
if every neighborhood $U$ of each $x \in X$ contains a neighborhood
$V$ of $x$ in $X$ such that
every $a,b \in V \cap Y$ can be joined by a path in $U \cap Y$.
Fix $A \in \operatorname{Bd}_H(\ensuremath{\mathbb R^m})$ and $\varepsilon > 0$. For each $A_0,A_1 \in \operatorname{B}_{d_H}(A,\varepsilon/2) \cap \mathcal H$,
we describe how to construct a path in $\operatorname{B}_{d_H}(A,\varepsilon)\cap \mathcal H$
which joins $A_0$ to $A_0 \cup A_1$. Let $A_1 = \{p_0,\dots,p_{n-1}\}$. For each $i < n$, find $q_i\in A_0$ such that $\norm{p_i-q_i} < \varepsilon/2$,
and define $$h(t) = A_0 \cup \setof{(1-t)q_i + tp_i}{i<n}
\quad\text{for each $t \in [0,1]$.}$$ Then $h(t) \in \mathcal H$ because $A_0 \subseteq h(t) \in \operatorname{Fin}(\ensuremath{\mathbb R^m})$. Further, $d_H(A_0,h(t)) < \varepsilon/2$, so by the triangle inequality $h(t)\in \operatorname{B}_{d_H}(A,\varepsilon)$. Finally, $h(0) = A_0$ and $h(1) = A_0\cup A_1$. By the same argument,
we can construct a path in $\operatorname{B}_{d_H}(A,\varepsilon) \cap \mathcal H$
which joins $A_0 \cup A_1$ to $A_1$. \end{proof}
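For completeness, the estimate $d_H(A_0,h(t)) < \varepsilon/2$ used in the proof above can be written out as follows; recall that $A_0 \subseteq h(t)$ and that the $q_i$ were chosen with $\norm{p_i-q_i} < \varepsilon/2$:

```latex
% A_0 is contained in h(t), so d_H(A_0,h(t)) is realized among the
% added points (1-t)q_i + t p_i, each of which is close to q_i in A_0:
\begin{align*}
  d_H(A_0, h(t))
  &\leqslant \max_{i<n}\, \operatorname{dist}\bigl((1-t)q_i + t p_i,\ A_0\bigr)\\
  &\leqslant \max_{i<n}\, \bigl\|(1-t)q_i + t p_i - q_i\bigr\|
   = \max_{i<n}\, t\,\|p_i - q_i\| < \varepsilon/2 .
\end{align*}
```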
\begin{proof}[Proof of Lemma \ref{strict-homotopy}] First, we show the case $m=1$. It suffices to construct a strict deformation
$\map\varphi{\operatorname{Bd}_H(\ensuremath{\mathbb R})\times[0,1]}{\operatorname{Bd}_H(\ensuremath{\mathbb R})}$
which omits $\operatorname{Nwd}(\ensuremath{\mathbb R}) \cup \operatorname{Perf}(\ensuremath{\mathbb R}) \cup \operatorname{Bd}(D)$. Let $h$ be a homotopy of $\operatorname{Bd}(\ensuremath{\mathbb R})$ which witnesses
that $\operatorname{Fin}(\ensuremath{\mathbb R})\setminus\operatorname{Bd}(D)$ is homotopy dense (Lemma \ref{sdegfaqqfas}). Since $\operatorname{Bd}_H([1,2]) \approx \ensuremath{\operatorname{Q}}$,
we have an embedding $g : \operatorname{Bd}_H(\ensuremath{\mathbb R}) \to \operatorname{Bd}_H([1,2])$. The desired $\varphi$ can be defined as follows: $$\varphi(A,t) = h(A,t) \cup \{\max h(A,t) + [t,2t],\
\min h(A,t) - tg(A)\}.$$
For each $t > 0$,
it is clear that $\varphi(A,t) \notin \operatorname{Nwd}(\ensuremath{\mathbb R}) \cup \operatorname{Perf}(\ensuremath{\mathbb R})$. Since $h(A,t)$ contains an isolated point from $\ensuremath{\mathbb R} \setminus D$
which remains isolated in $\varphi(A,t)$,
we see that $\varphi(A,t) \notin \operatorname{Bd}(D)$. Given $\varphi(A,t)$ for $t>0$,
we can reconstruct $t$ as the length of
the interval $J \subseteq \varphi(A,t)$ with $\max J = \max\varphi(A,t)$. Consequently, $g(A)$ can be reconstructed from $\varphi(A,t)$. Thus, $\varphi$ is a strict deformation.
Next, we show the case $m > 1$. To see that $\operatorname{Bd}(\ensuremath{\mathbb R^m}) \setminus \operatorname{Perf}(\ensuremath{\mathbb R^m})$ and
$\operatorname{Bd}(\ensuremath{\mathbb R^m}) \setminus \operatorname{Bd}(D)$ are strictly homotopy dense in $\operatorname{Bd}_H(\ensuremath{\mathbb R^m})$,
we shall construct a strict deformation
$\map\varphi{\operatorname{Bd}_H(\ensuremath{\mathbb R^m})\times[0,1]}{\operatorname{Bd}_H(\ensuremath{\mathbb R^m})}$
which omits $\operatorname{Perf}(\ensuremath{\mathbb R^m}) \cup \operatorname{Bd}(D)$. Recall $p : \ensuremath{\mathbb R^m} \to \ensuremath{\mathbb R}^{m-1}$ is the projection
onto the first $m - 1$ coordinates. Note that $\img pD$ is a dense $G_\delta$ set in $\ensuremath{\mathbb R}^{m-1}$
and $\ensuremath{\mathbb R}^{m-1} \setminus \img pD$ is also dense in $\ensuremath{\mathbb R}^{m-1}$. Let $e_m=\seq{0,0,\dots,0,1}\in\ensuremath{\mathbb R^m}$.
Since $\ensuremath{\mathbb R^m} \setminus (\img pD \times \ensuremath{\mathbb R})$ is dense in $\ensuremath{\mathbb R^m}$,
it follows from Lemma \ref{sdegfaqqfas}
that $\operatorname{Fin}(\ensuremath{\mathbb R^m}) \setminus \operatorname{Bd}(\img pD \times \ensuremath{\mathbb R})$
is homotopy dense in $\operatorname{Bd}_H(\ensuremath{\mathbb R^m})$. Let $h$ be a homotopy of $\operatorname{Bd}(\ensuremath{\mathbb R^m})$ which witnesses this,
i.e., for $t > 0$, $h(A,t)$ is finite and $p[h(A,t)] \not\subseteq \img pD$. Since $\operatorname{Bd}_H([3/5,2/3]) \approx \ensuremath{\operatorname{Q}}$,
we have an embedding $g : \operatorname{Bd}_H(\ensuremath{\mathbb R^m}) \to \operatorname{Bd}_H([3/5,2/3])$. The desired $\varphi$ can be defined as follows: $$\varphi(A,t) = h(A,t) + t\left(\bigcup_{i\in\omega}
2^{-i}(g(A) \cup [3/4,1])e_m \cup \{2e_m\}\right).$$
For each $t > 0$,
$\varphi(A,t)$ has an isolated point because
$\max \operatorname{pr}_m[\varphi(A,t)]$ is attained by an isolated point of $\varphi(A,t)$,
where $\operatorname{pr}_m$ denotes the projection onto the $m$-th coordinate. Hence, $\varphi(A,t) \not\in \operatorname{Perf}(\ensuremath{\mathbb R^m})$. Since $\img p{\varphi(A,t)} = \img p{h(A,t)}$ is finite
and contains a point of $\ensuremath{\mathbb R}^{m-1} \setminus \img pD$,
it follows that
$\operatorname{cl}(\varphi(A,t) \cap (\img pD \times \ensuremath{\mathbb R})) \not= \varphi(A,t)$,
which means $\varphi(A,t) \not\in \operatorname{Bd}(\img pD \times \ensuremath{\mathbb R})$.
Given $\varphi(A,t)$ for $t > 0$,
we can find $t$ as the distance from
$\max\operatorname{pr}_m[\varphi(A,t)]$ to the interior of $\operatorname{pr}_m[\varphi(A,t)]$. Let $a_0 \in \varphi(A,t)$ be such that $$\operatorname{pr}_m(a_0) = \min\operatorname{pr}_m[\varphi(A,t)] = \min\operatorname{pr}_m[h(A,t)].$$ Then, for sufficiently large $i$, $$(a_0 + 2^{-i}t(g(A) \cup [3/4,1])e_m) \cap h(A,t) = \emptyset.$$ Thus, we can reconstruct $2^{-i}tg(A)$ and
consequently also $g(A)$ from $\varphi(A,t)$. This shows that $\varphi$ is a strict deformation.
For $\operatorname{Bd}(\ensuremath{\mathbb R^m}) \setminus \operatorname{Nwd}(\ensuremath{\mathbb R^m})$,
we define a homotopy
$\map\psi{\operatorname{Bd}_H(\ensuremath{\mathbb R^m})\times[0,1]}{\operatorname{Bd}_H(\ensuremath{\mathbb R^m})}$ as follows: $$\psi(A,t) = h(A,t) + t\left(\bigcup_{i\in\omega}
2^{-i}(g(A) \cup [3/4,1])e_m \cup \overline{\bal}(2e_m,1/2)\right).$$ In other words,
replacing the points $a + 2te_m \in \varphi(A,t)$, $a \in h(A,t)$,
by the closed balls $$a + t\overline{\bal}(2e_m,1/2) = \overline{\bal}(a + 2te_m,t/2),\ a \in h(A,t),$$
we can obtain $\psi(A,t)$ from $\varphi(A,t)$. Evidently $\psi$ omits $\operatorname{Nwd}(\ensuremath{\mathbb R^m})$. Given $\psi(A,t)$ for $t > 0$,
let $a_0 \in \psi(A,t)$ be such that $$\operatorname{pr}_m(a_0) = \min\operatorname{pr}_m[\psi(A,t)] = \min\operatorname{pr}_m[h(A,t)].$$ Then we can get $t$ as the diameter of
the ball $\overline{\bal}(a_0 + 2te_m,t/2)$
(which is equal to $2/3$ of the distance from $a_0$ to this ball). Now, by the same arguments as for $\varphi$,
we can reconstruct $g(A)$ from $\psi(A,t)$. Thus, $\psi$ is a strict deformation. \end{proof}
Let us note that the subspace $\operatorname{UNb}(\ensuremath{\mathbb R})\cup\operatorname{Fin}(\ensuremath{\mathbb R})$ is actually equal to the space $\operatorname{Pol}(\ensuremath{\mathbb R})$ consisting of all compact polyhedra in $\ensuremath{\mathbb R}$. It follows from the result of \cite{Sakai91} that the pair $\pair{\exp(\ensuremath{\mathbb R})}{\operatorname{Pol}(\ensuremath{\mathbb R})}$ is homeomorphic to $\pair\ensuremath{\operatorname{Q}}{\ensuremath{\operatorname{Q}}_f}$.
\section{Nonseparable components of $\operatorname{Cld}_H(\ensuremath{\mathbb R})$}
In this section,
we consider the space $\operatorname{Cld}_H(\ensuremath{\mathbb R})$ of all nonempty closed subsets of $\ensuremath{\mathbb R}$. We shall also consider its natural subspaces,
using the same notation as before,
but having in mind the new setting. For example,
$\operatorname{Perf}(\ensuremath{\mathbb R})$ and $\operatorname{Nwd}(\ensuremath{\mathbb R})$ will denote the subspaces of $\operatorname{Cld}(\ensuremath{\mathbb R})$
consisting of all perfect closed subsets of $\ensuremath{\mathbb R}$
and all closed sets with no interior points, respectively. Now $\operatorname{Perf}(\ensuremath{\mathbb R}) \cap \operatorname{Nwd}(\ensuremath{\mathbb R})$ consists of all nonempty closed
(possibly unbounded) subsets of $\ensuremath{\mathbb R}$
which have neither isolated points nor interior points. In the new setting,
we have $$\operatorname{Cantor}(\ensuremath{\mathbb R}) = \operatorname{Perf}(\ensuremath{\mathbb R}) \cap \operatorname{Nwd}(\ensuremath{\mathbb R}) \cap \operatorname{Bd}(\ensuremath{\mathbb R}).$$
As shown in \cite[Proposition 7.2]{KuSaY},
$\operatorname{Cld}_H(\ensuremath{\mathbb R})$ has $2^{\aleph_0}$ many components,
$\operatorname{Bd}(\ensuremath{\mathbb R})$ is the only separable one and
any other component has weight $2^{\aleph_0}$. The following is the main theorem in this section:
\begin{tw}\label{non-sep-compon} Let\/ $\mathcal H$ be a nonseparable component of\/ $\operatorname{Cld}_H(\ensuremath{\mathbb R})$
which does not contain\/ $\ensuremath{\mathbb R}$, $[0,+\infty)$, $(-\infty,0]$. Then $\mathcal H \approx \ell_2(2^{\aleph_0})$. \end{tw}
We shall say that a set $A\subseteq\ensuremath{\mathbb R}$ {\em has infinite uniform gaps}
if there are $\delta>0$ and pairwise disjoint open intervals $I_0,I_1,\dots$
such that $\operatorname{diam} I_n \geqslant \delta$, $A \cap I_n = \emptyset$
and $\operatorname{bd} I_n \subseteq A$ for every $n\in\omega$. Define $$\mathcal{V} = \setof{A\in\operatorname{Cld}(\ensuremath{\mathbb R})}{A\text{ has infinite uniform gaps }}.$$ Clearly, $\mathcal{V}$ is open in $\operatorname{Cld}_H(\ensuremath{\mathbb R})$
and $\mathcal{V} \cap \operatorname{Bd}(\ensuremath{\mathbb R}) = \emptyset$. For each $A \in \operatorname{Cld}(\ensuremath{\mathbb R}) \setminus \operatorname{Bd}(\ensuremath{\mathbb R})$ and $\varepsilon > 0$,
let $D \subseteq A$ be a maximal $\varepsilon$-discrete subset. Then $D \in \mathcal{V}$ and $d_H(A,D) \leqslant \varepsilon$
because $D \subseteq A \subseteq \operatorname{B}(D,\varepsilon)$. Thus, $\mathcal{V}$ is dense in $\operatorname{Cld}_H(\ensuremath{\mathbb R})\setminus\operatorname{Bd}(\ensuremath{\mathbb R})$.
If $\mathcal H$ is a nonseparable component of $\operatorname{Cld}_H(\ensuremath{\mathbb R})$
and $\ensuremath{\mathbb R},[0,+\infty),(-\infty,0]\notin \mathcal H$
then $\mathcal H \subseteq \mathcal{V}$. Indeed, each $A \in \mathcal H$ is unbounded and
every component of $\ensuremath{\mathbb R} \setminus A$ is an open interval. Let $\mathcal J$ be the set of all bounded components of $\ensuremath{\mathbb R} \setminus A$. Assume that $\setof{\operatorname{diam} I}{I \in \mathcal J}$ is bounded. When $A$ is bounded below (or bounded above),
$d_H(A, [0,\infty)) < \infty$ (or $d_H(A, (-\infty,0]) < \infty$),
which implies $[0,+\infty) \in \mathcal H$ (or $(-\infty,0] \in \mathcal H$). When $A$ is bounded neither below nor above,
$d_H(A,\ensuremath{\mathbb R}) < \infty$,
which implies $\ensuremath{\mathbb R} \in \mathcal H$. Therefore, $\setof{\operatorname{diam} I}{I \in \mathcal J}$ is unbounded. In particular, $A$ has infinite uniform gaps.
Due to Theorem A in \cite{KuSaY},
every component of $\operatorname{Cld}_H(\ensuremath{\mathbb R})$ is an AR,
hence it is contractible. Since a contractible $\ell_2(2^{\aleph_0})$-manifold
is homeomorphic to $\ell_2(2^{\aleph_0})$,
Theorem \ref{non-sep-compon} above follows from the following theorem:
\begin{tw} The open dense subset $\mathcal{V}$ of\/ $\operatorname{Cld}_H(\ensuremath{\mathbb R})$
is an $\ell_2(2^{\aleph_0})$-manifold. \end{tw}
\begin{proof} It suffices to show that
each $A_0 \in \mathcal{V}$ has an open neighborhood $\mathcal{U}\subseteq\mathcal{V}$
which is an $\ell_2(2^{\aleph_0})$-manifold. In this case,
$\mathcal{U}$ is a completely metrizable ANR
because it is an open set in a completely metrizable ANR $\operatorname{Cld}_H(\ensuremath{\mathbb R})$. Due to Toru{\'n}czyk characterization of $\ell_2(2^{\aleph_0})$-manifold
\cite{To81} (cf.\ \cite{To85}),
we have to show that $\mathcal{U}$ has the following two properties: \begin{romanenume} \item
For any maps $f : [0,1]^n \times 2^\omega \to \mathcal{U}$
and $\alpha : \mathcal{U} \to (0,1)$,
there exists a map $g : [0,1]^n \times 2^\omega \to \mathcal{U}$ such that
$d_H(g(z),f(z)) < \alpha(f(z))$ for each $z \in [0,1]^n \times 2^\omega$
and $\{g[[0,1]^n \times \{x\}] : x \in 2^\omega\}$ is discrete in $\mathcal{U}$; \item
For any finite-dimensional simplicial complexes $K_n$, $n \in \omega$,
with $\operatorname{card} K_n \leqslant 2^{\aleph_0}$,
for all maps $f : \bigoplus_{n\in\omega} |K_n| \to \mathcal{U}$
and $\alpha : \mathcal{U} \to (0,1)$,
there exists a map $g : \bigoplus_{n\in\omega} |K_n| \to \mathcal{U}$ such that
$d_H(g(z),f(z))$ $ < \alpha(f(z))$ for each $z \in \bigoplus_{n\in\omega} |K_n|$
and $\{g[|K_n|] : n \in \omega\}$ is discrete in $\mathcal{U}$. \end{romanenume} In the above, $2^\omega$ is the discrete space of all functions
from $\omega$ to $2 = \{0,1\}$. To this end,
it suffices to prove the following: \begin{itemize} \item
For each map $\alpha : \mathcal{U} \to (0,1)$,
there exist maps $f_x : \mathcal{U} \to \mathcal{U}$, $x \in 2^\omega$,
such that $d_H(f_x(A),A) < \alpha(A)$ for every $A \in \mathcal{U}$
and $\{f_x[\mathcal{U}] : x \in 2^\omega\}$ is discrete. \end{itemize}
Fix $A_0 \in \mathcal{V}$ and choose $\delta > 0$ and pairwise disjoint open intervals $I_0,I_1,\dots$
such that $\operatorname{diam} I_n \geqslant \delta$, $A_0 \cap I_n = \emptyset$
and $\operatorname{bd} I_n \subseteq A_0$ (i.e., $\inf I_n,\ \sup I_n \in A_0$)
for every $n\in\omega$. Taking a subsequence if necessary,
we may assume that either $\sup I_n < \inf I_{n+1}$ for every $n\in\omega$
or $\inf I_n > \sup I_{n+1}$ for every $n\in\omega$. By symmetry,
we may assume that the first case occurs.
Choose intervals $[a_n,b_n] \subseteq I_n$, $n\in\omega$,
so that $b_n - a_n > \delta/4$, \begin{gather*} \inf_{n\in\omega}\operatorname{dist}(a_n,\ensuremath{\mathbb R} \setminus I_n)
= \inf_{n\in\omega}(a_n - \inf I_n) > \delta/4 \text{ and}\\ \inf_{n\in\omega}\operatorname{dist}(b_n,\ensuremath{\mathbb R} \setminus I_n)
= \inf_{n\in\omega}(\sup I_n - b_n) > \delta/4. \end{gather*}
\noindent Observe that if $A \in \operatorname{Cld}_H(\ensuremath{\mathbb R})$ and $d_H(A,A_0) < \delta/4$
then $A \cap (b_{n-1},a_n) \not= \emptyset$
for every $n \in \omega$,
where $b_{-1} = -\infty$. For each $A \in \operatorname{Cld}_H(\ensuremath{\mathbb R})$ with $d_H(A,A_0) < \delta/4$,
we can define $$r_n(A) = \max(A \cap (b_{n-1},a_n)),\ n\in\omega.$$ For each $A, A' \in \operatorname{Cld}_H(\ensuremath{\mathbb R})$
with $d_H(A,A_0), d_H(A',A_0) < \delta/4$,
we have
$$|r_n(A) - r_n(A')| \leqslant d_H(A,A').$$ Indeed, without loss of generality, we may assume that $r_n(A) < r_n(A')$. Then, the open interval $(r_n(A),b_n)$ contains no points of $A$
and $r_n(A') \in (r_n(A),b_n)$. Since $b_n - r_n(A') > \delta/2$ and $$r_n(A') - r_n(A)
\leqslant |r_n(A') - r_n(A_0)| + |r_n(A) - r_n(A_0)| < \delta/2,$$
the point of $A$ nearest to $r_n(A')$ lies in $(b_{n-1},r_n(A)]$, so $|r_n(A') - r_n(A)| \leqslant \operatorname{dist}(r_n(A'),A) \leqslant d_H(A,A')$. Then, it follows that \begin{align*} \inf_{n\in\omega}(a_n - r_n(A)) - d_H(A,A')
&\leqslant \inf_{n\in\omega}(a_n - r_n(A')) \\
&\leqslant \inf_{n\in\omega}(a_n - r_n(A)) + d_H(A,A'). \end{align*} This means that $A \mapsto \inf_{n\in\omega}(a_n - r_n(A))$ is continuous. Since $r_n(A_0) = \inf I_n$,
we have $\inf_{n\in\omega}(a_n - r_n(A_0)) > \delta/4$. Thus, $A_0$ has the following open neighborhood: $$\mathcal{U} = \setof{A\in\operatorname{Cld}_H(\ensuremath{\mathbb R})}{d_H(A,A_0) < \delta/4,\
\inf_{n\in\omega}(a_n - r_n(A)) > \delta/4} \subseteq \mathcal{V}.$$
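That $\mathcal{U}$ is indeed open can be made explicit as follows; here $\Phi$ is ad hoc notation introduced only for this remark:

```latex
% Phi is defined and continuous on the open ball {A : d_H(A,A_0) < delta/4},
% by the Lipschitz estimate |r_n(A) - r_n(A')| <= d_H(A,A') shown above:
\[
  \Phi(A) = \Bigl(d_H(A,A_0),\ \inf_{n\in\omega}\bigl(a_n - r_n(A)\bigr)\Bigr),
  \qquad
  \mathcal{U} = \Phi^{-1}\bigl((-\infty,\delta/4)\times(\delta/4,+\infty)\bigr),
\]
% so U is the preimage of an open subset of R^2 under a continuous map.
```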
Now, for each map $\map \alpha\mathcal{U}{(0,1)}$,
we define a map $\beta : \mathcal{U} \to (0,1)$ as follows: $$\beta(A) = \min\big\{\tfrac12\alpha(A),\ \tfrac14\delta - d_H(A,A_0),\
\inf_{n\in\omega}(a_n - r_n(A)) - \tfrac14\delta\big\}.$$ Given a sequence $x = (x(n))_{n\in\omega} \in 2^\omega$, let $$f_x(A) = A \cup \bigcup_{n\in\omega}\big(r_n(A) +
\big([0,\tfrac12\beta(A)] \cup \{\beta(A)\cdot x(n)\}\big)\big).$$
\noindent This defines a map $\map{f_x}\mathcal{U}\mathcal{U}$ which is $\alpha$-close to $\operatorname{id}$. We claim that
if $x \not= y \in 2^\omega$
then $$d_H(f_x(A),f_y(A')) \geqslant \min\big\{\tfrac14\beta(A),\tfrac14\beta(A')\big\}
\text{ for every $A, A' \in \mathcal{U}$.}$$ Indeed, since $x \neq y$, we may assume that $x(n) = 1$ and $y(n) = 0$ for some $n \in \omega$,
and let $s = \min\{\tfrac14\beta(A),\tfrac14\beta(A')\}$. Then \begin{enumerate} \item
$\max(f_x(A) \cap (b_{n-1},a_n)) = r_n(A)+\beta(A)$; \item
$f_x(A)$ has no points in the open interval
$(r_n(A)+\tfrac12\beta(A), r_n(A)+\beta(A))$; \item
$\max(f_y(A') \cap (b_{n-1},a_n)) = r_n(A')+\tfrac12\beta(A')$; \item
$[r_n(A'),r_n(A')+\beta(A')/2] \subseteq f_y(A')$. \end{enumerate} In case $r_n(A')+\tfrac12\beta(A') \geqslant r_n(A)+\beta(A)+s$ or
$r_n(A')+\tfrac12\beta(A') \leqslant r_n(A)+\beta(A)-s$,
we have $$d_H(f_x(A) \cap (b_{n-1},a_n),f_y(A') \cap (b_{n-1},a_n)) \geqslant s.$$ In case $r_n(A)+\beta(A)-s < r_n(A')+\tfrac12\beta(A') \leqslant r_n(A)+\beta(A)+s$,
since $2s \leqslant \tfrac12\beta(A')$,
we have $r_n(A') < r_n(A)+\beta(A)-s$,
hence $r_n(A)+\beta(A)-s \in f_y(A')$. Thus, it follows that $$d_H(f_x(A) \cap (b_{n-1},a_n),f_y(A') \cap (b_{n-1},a_n)) \geqslant s.$$
Finally, we show that
$\{f_x[\mathcal{U}] : x \in 2^\omega\}$ is a discrete collection in $\mathcal{U}$. If not, there exist $A$, $A_i \in \mathcal{U}$ and $x_i \in 2^\omega$, $i \in \omega$,
such that $x_i \not= x_j$ if $i \not= j$,
and $f_{x_i}(A_i) \to A$ ($i \to \infty$). Let $c = \inf_{i\in\omega}\beta(A_i)$. We claim that $c = 0$. Indeed, otherwise we could find $i < j$ such that
$$d_H(f_{x_i}(A_i),A),\ d_H(f_{x_j}(A_j),A) < c/10$$ and
$\beta(A_i),\ \beta(A_j) > 4c/5$. It follows that $d_H(f_{x_i}(A_i),f_{x_j}(A_j)) < c/5$, but $$d_H(f_{x_i}(A_i),f_{x_j}(A_j)) \geqslant \min\{\beta(A_i)/4,\beta(A_j)/4\}
> c/5,$$
which is a contradiction. Thus, $\inf_{i\in\omega}\beta(A_i) = 0$. Taking a subsequence,
we may assume that $\lim_{i\to\infty}\beta(A_i) = 0$. Then $A_i \to A$ ($i \to \infty$)
because $d_H(f_{x_i}(A_i),A_i) \leqslant \beta(A_i)$. It follows that $\beta(A) = 0$,
which is a contradiction. This completes the proof. \end{proof}
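The $\alpha$-closeness of each $f_x$ asserted in the proof amounts to a one-line estimate; since $A \subseteq f_x(A)$ and every added point has the form $r_n(A)+s$ with $r_n(A)\in A$ and $0 \leqslant s \leqslant \beta(A)$, a routine check gives:

```latex
% A is contained in f_x(A), and each added point is within beta(A)
% of the point r_n(A) of A, so by the definition of beta:
\[
  d_H\bigl(f_x(A),\, A\bigr)
  \leqslant \beta(A)
  \leqslant \tfrac12\alpha(A)
  < \alpha(A).
\]
```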
Let $\mathcal{D}(X)$ be the subspace of $\operatorname{Cld}_H(X)$ consisting of
all discrete sets in $X$. It follows from the result of \cite{BaVo} that
$\mathcal{D}(X)$ is homotopy dense in $\operatorname{Cld}_H(X)$
for every almost convex metric space $X$. By the same proof,
Lemma \ref{sdegfaqqfas} can be extended to $\operatorname{Cld}_H(\ensuremath{\mathbb R^m})$.
\begin{prop}\label{h-dense-D} Assume $D\subseteq \ensuremath{\mathbb R^m}$ is such that $\ensuremath{\mathbb R^m}\setminus D$ is dense. Then $\mathcal{D}(\ensuremath{\mathbb R^m})\setminus\operatorname{Cld}(D)$ is homotopy dense in $\operatorname{Cld}_H(\ensuremath{\mathbb R^m})$. \end{prop}
Now, we consider the subspaces $\mathfrak N(\ensuremath{\mathbb R})$, $\operatorname{Nwd}(\ensuremath{\mathbb R})$, $\operatorname{Perf}(\ensuremath{\mathbb R})$
and $\operatorname{Cld}(\ensuremath{\mathbb R}\setminus\mathbb{Q})$ of $\operatorname{Cld}_H(\ensuremath{\mathbb R})$. Similarly to $\operatorname{Bd}_H(\ensuremath{\mathbb R})$,
the following can be shown:
\begin{prop}\label{non-sep-Z_sig} The sets $\operatorname{Cld}(\ensuremath{\mathbb R}) \setminus \mathfrak N(\ensuremath{\mathbb R})$, $\operatorname{Cld}(\ensuremath{\mathbb R}) \setminus \operatorname{Nwd}(\ensuremath{\mathbb R})$,
$\operatorname{Cld}(\ensuremath{\mathbb R}) \setminus \operatorname{Perf}(\ensuremath{\mathbb R})$ and $\operatorname{Cld}(\ensuremath{\mathbb R}) \setminus \operatorname{Cld}(\ensuremath{\mathbb R}\setminus\mathbb{Q})$
are \ensuremath{\Z_\sigma}-sets in the space $\operatorname{Cld}_H(\ensuremath{\mathbb R})$. \end{prop}
Due to the Negligibility Theorem (\cite{AHW}, \cite{Cut}),
if $M$ is an $\ell_2(2^{\aleph_0})$-manifold
and $A$ is a \ensuremath{\Z_\sigma}-set in $M$ then $M \setminus A \approx M$. Thus, combining Proposition \ref{non-sep-Z_sig}
and Theorem \ref{non-sep-compon},
we have the following:
\begin{wn} Let\/ $\mathcal H$ be a nonseparable component of\/ $\operatorname{Cld}_H(\ensuremath{\mathbb R})$
which does not contain\/ $\ensuremath{\mathbb R}$, $[0,+\infty)$, $(-\infty,0]$. Then $\mathcal H \cap \mathfrak N(\ensuremath{\mathbb R})$, $\mathcal H \cap \operatorname{Nwd}(\ensuremath{\mathbb R})$,
$\mathcal H \cap \operatorname{Perf}(\ensuremath{\mathbb R})$ and\/ $\mathcal H \cap \operatorname{Cld}(\ensuremath{\mathbb R}\setminus\mathbb{Q})$
are homeomorphic to $\ell_2(2^{\aleph_0})$. \end{wn}
\section{Open problems}
The following questions are left open.
\begin{question} In case $m > 1$,
under the only assumption that $D \subseteq \ensuremath{\mathbb R^m}$ is a dense $G_\delta$ set
and $\ensuremath{\mathbb R^m} \setminus D$ is also dense in $\ensuremath{\mathbb R^m}$,
is the pair $\pair{\operatorname{Bd}(\ensuremath{\mathbb R}^m)}{\operatorname{Bd}(D)}$
homeomorphic to $\pair\ensuremath{\Q\setminus 0}\ensuremath{\pseudoint\setminus0}$? In particular,
is the pair $\pair{\operatorname{Bd}(\ensuremath{\mathbb R}^m)}{\operatorname{Bd}(\nu^m_{m-1})}$
homeomorphic to $\pair\ensuremath{\Q\setminus 0}\ensuremath{\pseudoint\setminus0}$? \end{question}
\begin{question} Does Theorem \ref{non-sep-compon} hold
even if $\mathcal H$ contains $\ensuremath{\mathbb R}$, $[0,\infty)$ or $(-\infty,0]$? \end{question}
\begin{question} For $m > 1$,
is $\operatorname{Cld}_H(\ensuremath{\mathbb R^m}) \setminus \operatorname{Bd}(\ensuremath{\mathbb R^m})$ an $\ell_2(2^{\aleph_0})$-manifold? \end{question}
\section{Appendix}
For the convenience of readers, we give short and straightforward proofs of Propositions \ref{rthsdpio} and \ref{owteepgsdgfa}.
\begin{prop}[\ref{rthsdpio}] If $\pair Xd$ is $\sigma$-compact
then the space $\pair{\operatorname{Bd}(X)}{d_H}$ is $F_{\sigma\delta}$
in its completion $\pair{\operatorname{Bd}(\tilde X)}{d_H}$. \end{prop}
\begin{proof} Fix a countable open base $\setof{U_n}{n\in\omega}$ for $\tilde X$. Since $U_n\cap X$ is $F_\sigma$, we have $U_n\cap X=\bigcup_{k\in\nat}K_k^n$, where each $K_k^n$ is compact. Observe that, by compactness, the sets $(\tilde X\setminus K_k^n)^+$ are open in the Hausdorff metric topology. We claim that $$\operatorname{Bd}(\tilde X)\setminus \operatorname{Bd}(X)=\bigcup_{n\in\omega}\Bigl(U_n^-\cap\bigcap_{k\in\nat}(\tilde X\setminus K_k^n)^+\Bigr),$$ which shows that $\operatorname{Bd}(\tilde X)\setminus \operatorname{Bd}(X)$ is a countable union of $G_\delta$ sets, and hence that $\operatorname{Bd}(X)$ is $F_{\sigma\delta}$ in $\operatorname{Bd}(\tilde X)$. It remains to prove the claim.
Assume $A\in \operatorname{Bd}(\tilde X)\setminus \operatorname{Bd}(X)$, that is, $A\ne \operatorname{cl}_{\tilde X}(A\cap X)$. Then there is $n\in\omega$ such that $U_n\cap A\ne\emptyset$ and $U_n\cap A\cap X=\emptyset$, which means that $A\in U_n^-$ and $A\in (\tilde X\setminus K_k^n)^+$ for every $k\in \nat$. Conversely, if $A\in U_n^-\cap \bigcap_{k\in\nat}(\tilde X\setminus K_k^n)^+$ then $U_n\cap A\ne\emptyset$ and $U_n\cap A\cap X=\emptyset$, so $A\ne\operatorname{cl}_{\tilde X}(A\cap X)$. \end{proof}
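The Borel-class bookkeeping behind the proof above is worth recording; writing $\mathcal G_n = U_n^-\cap\bigcap_{k\in\nat}(\tilde X\setminus K_k^n)^+$ (ad hoc notation for this remark), we have:

```latex
% Each U_n^- and each (tilde X \setminus K_k^n)^+ is open in Bd_H(tilde X):
\[
  \mathcal G_n \text{ is } G_\delta,
  \qquad
  \operatorname{Bd}(\tilde X)\setminus\operatorname{Bd}(X)
  = \bigcup_{n\in\omega}\mathcal G_n \text{ is } G_{\delta\sigma},
\]
% whence the complement Bd(X) is F_{sigma delta} in Bd(tilde X).
```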
\begin{prop}[\ref{owteepgsdgfa}] If $\pair Xd$ is Polish then the space $\pair{\operatorname{Bd}(X)}{d_H}$ is $G_\delta$ in its completion $\pair{\operatorname{Bd}(\tilde X)}{d_H}$. \end{prop}
\begin{proof} Let $\setof{W_n}{n\in\omega}$ be a family of open subsets of $\tilde X$ such that $X=\bigcap_{n\in\omega}W_n$. Fix a countable open base $\setof{V_n}{n\in\omega}$ for $\tilde X$. We claim that \begin{equation} \operatorname{Bd}(\tilde X)\setminus \operatorname{Bd}(X)=\bigcup_{n\in\omega}\bigcup_{k\in\nat}\Bigl(V_n^-\setminus(V_n\cap W_k)^-\Bigr).\tag{$*$} \end{equation} As $V^-$ is open in the metric space $\pair{\operatorname{Bd}(\tilde X,d)}{d_H}$ whenever $V\subseteq\tilde X$ is open, it follows that $V_n^-$ is $F_\sigma$ and therefore the set on the right-hand side of ($*$) is $F_\sigma$ in $\operatorname{Bd}_H(\tilde X)$. It remains to prove ($*$).
If $A\in V_n^-\setminus(V_n\cap W_k)^-$ then there is a point $x\in V_n\cap A$. Since $A\cap V_n\cap W_k=\emptyset$ and $X\subseteq W_k$, we have $V_n\cap (A\cap X)=\emptyset$, and it follows that $x\notin \operatorname{cl}_{\tilde X}(A\cap X)$. Thus $A\notin \operatorname{Bd}(X)$. Now assume $A\in \operatorname{Bd}(\tilde X)\setminus \operatorname{Bd}(X)$, that is, $A\ne\operatorname{cl}_{\tilde X}(A\cap X)$. Then there exists an open set $U\subseteq \tilde X$ such that $U\cap A\ne\emptyset$ and $U\cap A\cap X=\emptyset$. Since $X=\bigcap_{k\in\nat}W_k$, we obtain $\bigcap_{k\in\nat}A\cap U\cap W_k=\emptyset$. Note that $A\cap U$ is a Baire space because of the completeness of $\pair{\tilde X}d$. Thus, by the Baire Category Theorem, there exists $k\in\nat$ such that $A\cap U\cap W_k$ is not dense in $A\cap U$. Find a basic open set $V_n\subseteq U$ such that $V_n\cap A\ne\emptyset$ and $V_n\cap A\cap W_k=\emptyset$. Then $A\in V_n^-\setminus(V_n\cap W_k)^-$. \end{proof}
Let $\mathfrak B(X)$ denote the Borel field on a topological space $X$. Given $\mathfrak H \subseteq \operatorname{Cld}(X)$,
the {\em Effros $\sigma$-algebra\/} $\mathfrak E(\mathfrak H)$ is
the $\sigma$-algebra generated by $$\setof{U^-\cap\mathfrak H}{\text{$U$ is open in $X$}}.$$ It is well known that $\mathfrak E(\exp(X)) = \mathfrak B(\exp(X))$
for every separable metric space $X$
(see \cite[Theorem 6.5.15]{Beer}).\footnote
{$\mathfrak E(\operatorname{Cld}(X)) = \mathfrak B(\operatorname{Cld}_H(X))$
for every totally bounded separable metric space $X$
(cf.\ \cite[Hess' Theorem 6.5.14 with Theorem 3.2.3]{Beer}).} Whenever $X$ is a separable metric space
in which every bounded set is totally bounded,
we can regard $\operatorname{Bd}_H(X) \subseteq \exp(\tilde{X})$
by the identification as in \S\ref{borelclasses},
where $\tilde{X}$ is the completion of $X$. Then, we have not only $\mathfrak E(\operatorname{Bd}(X)) = \mathfrak B(\operatorname{Bd}_H(X))$
but also $\mathfrak E(\mathfrak H) = \mathfrak B(\mathfrak H)$
for $\mathfrak H \subseteq \operatorname{Bd}_H(X)$. This implies that $\mathfrak E(\mathfrak H)$ is standard
if $\mathfrak H$ is absolutely Borel (cf.\ \cite[12.B]{Ke}). The results in \S\ref{borelclasses} provide such hyperspaces $\mathfrak H$.
In relation to the results above,
we can prove the following:
\begin{prop}\label{weptjwpf} Let $X=\pair Xd$ be an analytic metric space
in which bounded sets are totally bounded. Then, the space $\operatorname{Bd}_H(X)$ is analytic. \end{prop}
\begin{proof}
The completion $\pair{\tilde X}d$ of $\pair Xd$ is a Polish space in which closed bounded sets are compact. Then $\operatorname{Bd}_H(\tilde X,d)=\exp(\tilde X)$ is Polish. Fix a countable open base $\ciag U$ for $\tilde X$. Since $X$ is analytic, there exists a tree $\setof{X_s}{s\in\nat^{<\nat}}$ of closed subsets of $\tilde X$ such that $X=\bigcup_{f\in\nat^\nat}\bigcap_{n\in\omega}X_{f\restriction n}$, which is the result of the Suslin operation on the family $\setof{X_s}{s\in\nat^{<\nat}}$ (e.g.\ see \cite[Lemma 11.7]{Jech}). We may assume that $X_s\supseteq X_t$ whenever $s\subseteq t$. Let $W_s=\operatorname{B}(X_s,2^{-|s|})$, where $|s|$ denotes the length of the sequence $s$. Then $\operatorname{cl} W_s\supseteq \operatorname{cl} W_t$ whenever $s\subseteq t$. Moreover, $\bigcap_{n\in\omega}X_{f\restriction n}=\bigcap_{n\in\omega}\operatorname{cl} W_{f\restriction n}$ for each $f\in\nat^\nat$. We claim that \begin{equation} \operatorname{Bd}(X,d)=\bigcap_{k\in\nat}\bigcup_{f\in\nat^\nat}\bigcap_{n\in\omega}\Bigl((\operatorname{Bd}(\tilde X,d)\setminus U_k^-)\cup(U_k\cap W_{f\restriction n})^-\Bigr),\tag{$\sharp$} \end{equation} where, as usual, we regard $\operatorname{Bd}(X,d)\subseteq\operatorname{Bd}(\tilde X,d)$, via the embedding $A\mapsto \operatorname{cl}_{\tilde X}A$. The above formula ($\sharp$) shows that $\operatorname{Bd}(X,d)$ can be obtained from $\operatorname{Bd}(\tilde X,d)$ by using the Suslin operation and countable intersection, which shows that it is analytic. It remains to prove ($\sharp$).
Fix $A\in\operatorname{Bd}(\tilde X,d)\setminus \operatorname{Bd}(X,d)$. Then $A\ne\operatorname{cl}(A\cap X)$ and hence there exists $k\in\nat$ such that $A\in U_k^-$ and $\operatorname{cl} U_k\cap A\cap X=\emptyset$. Then $A \notin \operatorname{Bd}(\tilde X,d)\setminus U_k^-$. For each $f\in\nat^\nat$, we have $$A\cap\operatorname{cl} U_k\cap\bigcap_{n\in\omega}\operatorname{cl} W_{f\restriction n} =A\cap\operatorname{cl} U_k\cap\bigcap_{n\in\omega}X_{f\restriction n}=\emptyset.$$ By compactness, there is $n\in\omega$ such that $A\cap\operatorname{cl} U_k\cap \operatorname{cl} W_{f\restriction n}=\emptyset$, hence $A\notin(U_k\cap W_{f\restriction n})^-$.
Now assume that $A\in\operatorname{Bd}(\tilde X,d)$ does not belong to the right-hand side of ($\sharp$), that is, there exists $k\in\nat$ such that $A\in U_k^-$ and for every $f\in\nat^\nat$ there is $n\in\omega$ with $A\notin(U_k\cap W_{f\restriction n})^-$. In particular, $A\cap U_k\cap\bigcap_{n\in\omega}X_{f\restriction n}=\emptyset$ for every $f\in\nat^\nat$ and consequently $U_k\cap A\cap X=\emptyset$. On the other hand, $A\cap U_k\ne\emptyset$. Thus it follows that $A\ne \operatorname{cl}_{\tilde X}(A\cap X)$, which means $A\notin\operatorname{Bd}(X,d)$. \end{proof}
\end{document}
Special and General Relativity
How does parallel transport relate to Riemannian manifolds?
Thread starter TimeRip496
Tags: einstein field equation, general relativity, relativity, riemann tensor, riemannian geometry
TimeRip496
Basically, the video talks about how moving a vector from A around to A' (which is basically A) in an anticlockwise manner will, in curved space, give a vector that is different from the vector originally at A.
$$[(v_C-v_D)-(v_B-v_A)]$$ will equal zero in flat space.
$$[(v_C-v_D)-(v_B-v_A)]-[(v_C-v_D)-(v_B-v'_A)]=v_A-v'_A$$
$$v_A-v'_A=dv$$
That is the difference; that is going to be the change in the vector as it is parallel transported around that parallelogram. In flat space, dv will be zero, as the vector will be parallel transported and come back with the same magnitude and direction. But in curved space there will be a difference, not in length but in the angle of the vector.
$$v_C-v_D=\frac{\partial v}{\partial x^\mu}\,dx^\mu \;\Rightarrow\; \nabla_\mu\, dx^\mu\, v$$
The change in the vector between C and D is the gradient multiplied by the distance, times the value of the vector. We need to use covariant derivatives, so instead of the partial derivative we use $\nabla$.
$$(v_C-v_D)-(v_B-v'_A)=\nabla_\mu\, dx^\mu\, \nabla_\nu\, dx^\nu\, v$$
I don't really understand this part: can we really change the vector twice just by taking derivatives like this?
$$v_A-v'_A=dv=dx^\mu dx^\nu\,[\nabla_\nu,\nabla_\mu]\,v$$
And the commutator of the two covariant derivatives gives the Riemann curvature tensor (the Ricci tensor is obtained from it by contraction).
So how does parallel transport relate to Riemannian manifolds? Do Riemannian manifolds exhibit the above properties? Do I need to learn about differential geometry in order to understand Riemannian manifolds?
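The curvature that shows up in the commutator can be checked symbolically. The sketch below (an illustration added here, not part of the thread) takes the unit 2-sphere in coordinates ##(\theta,\phi)##, computes the Christoffel symbols from the metric, and evaluates the curvature component ##R^\theta{}_{\phi\theta\phi}##:

```python
import sympy as sp

th, ph = sp.symbols('theta phi')
x = [th, ph]
g = sp.Matrix([[1, 0], [0, sp.sin(th)**2]])   # unit-sphere metric
ginv = g.inv()
n = 2

def Gamma(r, m, nu):
    """Christoffel symbol Gamma^r_{m nu} computed from the metric."""
    return sp.simplify(sum(
        ginv[r, l]*(sp.diff(g[l, m], x[nu]) + sp.diff(g[l, nu], x[m])
                    - sp.diff(g[m, nu], x[l]))
        for l in range(n))/2)

def Riemann(r, s, m, nu):
    """R^r_{s m nu}: derivative terms plus Gamma*Gamma terms."""
    expr = sp.diff(Gamma(r, nu, s), x[m]) - sp.diff(Gamma(r, m, s), x[nu])
    expr += sum(Gamma(r, m, l)*Gamma(l, nu, s)
                - Gamma(r, nu, l)*Gamma(l, m, s) for l in range(n))
    return sp.simplify(expr)

print(Riemann(0, 1, 0, 1))   # R^theta_{phi theta phi} = sin(theta)**2
```

A nonzero result here is exactly what makes the loop in the video fail to return the original vector.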
Ibix
Think of the Earth. Stand where the Greenwich meridian crosses the equator and hold a vector pointing North.
Walk up the meridian to the north pole, parallel transporting your vector. That means always keeping it pointing the same way it was a pace ago. When you get to the pole you should find that it's pointing straight down the 180° meridian.
Start again at the equator but walk along the equator to 90° east. Your vector should still be pointing north. Now head up the meridian until you reach the pole. Your vector will be pointing along the 90° west meridian. It is not parallel to what you got from the first procedure.
This is a general feature of curved spaces (it's not an effect of meeting at the pole or anything). The notion of "the same direction at two different points" is path dependent.
The covariant derivative measures the change in a vector along an infinitesimal displacement - it parallel translates the vector. Using it four times around an infinitesimal parallelogram is the "at one point" formalisation of my triangular trip round a sphere. The resulting information is encoded in the Riemann tensor, which describes the curvature of the space (or spacetime, in GR) at that point.
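The triangular trip can also be carried out numerically. The sketch below (added as an illustration; the near-pole colatitude EPS and the step counts are arbitrary choices) integrates the parallel-transport equation ##dv^a/d\lambda = -\Gamma^a_{bc}\dot x^b v^c## around a closed loop on the unit sphere, equator then meridian then a small circle near the pole and back, and recovers a net rotation equal to the enclosed solid angle, about 90°:

```python
import math

EPS = 0.05   # colatitude at which the loop skirts the pole (arbitrary)

# Parallel-transport ODE on the unit sphere, coordinates (theta, phi):
# Gamma^th_{ph ph} = -sin th cos th,  Gamma^ph_{th ph} = cot th
def rhs(theta, dth, dph, v):
    vth, vph = v
    cot = math.cos(theta) / math.sin(theta)
    return (math.sin(theta) * math.cos(theta) * dph * vph,
            -cot * (dth * vph + dph * vth))

def transport(v, p0, p1, steps=2000):
    """RK4 transport of v along the coordinate segment p0 -> p1."""
    (th0, _), (th1, ph1) = p0, p1
    dth, dph = th1 - th0, ph1 - p0[1]
    h = 1.0 / steps
    for i in range(steps):
        lam = i * h
        f = lambda dl, w: rhs(th0 + (lam + dl) * dth, dth, dph, w)
        k1 = f(0.0, v)
        k2 = f(h / 2, (v[0] + h / 2 * k1[0], v[1] + h / 2 * k1[1]))
        k3 = f(h / 2, (v[0] + h / 2 * k2[0], v[1] + h / 2 * k2[1]))
        k4 = f(h, (v[0] + h * k3[0], v[1] + h * k3[1]))
        v = (v[0] + h / 6 * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0]),
             v[1] + h / 6 * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1]))
    return v

A = (math.pi / 2, 0.0)            # equator, Greenwich meridian
B = (math.pi / 2, math.pi / 2)    # equator, 90 degrees east
C = (EPS, math.pi / 2)            # near the pole, 90E meridian
D = (EPS, 0.0)                    # near the pole, Greenwich meridian

v = (-1.0, 0.0)                   # unit vector pointing north (theta decreasing)
for p0, p1 in [(A, B), (B, C), (C, D), (D, A)]:
    v = transport(v, p0, p1)

# Back at A the metric is diag(1, 1), so components are orthonormal there.
norm = math.hypot(v[0], v[1])
angle = math.atan2(-v[1], -v[0])  # rotation relative to the initial (-1, 0)
print(round(math.degrees(angle), 2), round(norm, 4))  # about 90 degrees, length 1
```

The rotation magnitude comes out as the area enclosed by the loop, ##(\pi/2)\cos(\mathrm{EPS})##, while the length of the vector is preserved, which is the path dependence described above.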
pervect
Ibix said:
There's an issue here, I think. When we regard the surface of the Earth as curved, we are talking about not a 3-dimensional manifold, but a 2-dimensional manifold whose points are the points on the surface of the Earth.
So if you move in a great circle starting at a point on the equator, a vector pointing north on the equator will continue to point north as we move towards the north pole on the 2d manifold - at least until we reach the north pole, where we run into a coordinate singularity and the whole description of north/south/east/west becomes ambiguous. (Every direction from the North pole is "south"). If we stop just short of the north pole, though, a vector that starts out pointing north will continue to point north as we parallel transport it.
PeterDonis
pervect said:
If we stop just short of the north pole, though, a vector that starts out pointing north will continue to point north as we parallel transport it.
No, it won't. Ibix was careful to specify that the second parallel transport was along the equator, which is a geodesic. For this special case, yes, the vector will continue to point north.
However, if you start at a point on the Greenwich meridian just a little short of the north pole, and parallel transport a vector that is tangent to that meridian along a geodesic, i.e., a great circle passing through the point you start from, you will not be parallel transporting along a latitude line, because except for the equator, no latitude lines are geodesics. Of course there are an infinite number of possible geodesics through any point, but the usual requirement when looking for curvature is to pick the geodesic that is orthogonal to the one you are on, i.e., to the Greenwich meridian. That geodesic will be a great circle that curves southward--it will be just a little bit offset from the 90-east/90-west meridian. Parallel transporting your vector along this geodesic will cause it to start pointing east of north. (If you parallel transport it all the way back down to the equator, it will be pointing just a little bit north of due east.)
I think you've been misled by the coordinate representation here. Think what our intrepid explorers would see as they approached a flag marking the pole: an approximately Euclidean plane with the two of them approaching from perpendicular directions. Each one is carrying an arrow, initially pointing at the flag. However they move, if they keep their arrows pointing in the same direction, their arrows will be perpendicular when they meet. If one moves along a line of constant latitude he describes a circle centred on the flag, but would be cheating outrageously to keep his arrow pointing at the flag through that maneuver - which is what you are suggesting, I think.
I'm cheating a bit by treating the area as truly Euclidean, but not by much because of local flatness. In fact the choice of route for the final few meters until they meet will affect the outcome slightly, even if they do remain close to the pole - just not by much.
The usual way to describe the process, as keeping the vector pointing in the same direction (as many books do), can be confusing when first encountered: if it's always pointing the same way, then by definition it won't change.
Because the space/spacetime is curved and the notion of "the same direction" is a bit like the equivalence principle, I think. It's an intuition that breaks down if examined precisely enough, but has a well-defined mathematical expression. In this case, the notion of covariant derivatives being zero along certain paths.
Oh, I misread "When you get to the pole you should find that it's pointing straight down the 180° meridian." as "pointing straight down".
Let's try the following description and see if people a) agree and b) like it. If you parallel transport a vector along a geodesic, it maintains a constant angle to the geodesic in question.
So, for instance, if a vector is initially pointing directly along a geodesic, and is of unit length, we say the vector is a tangent vector to the geodesic. One might also invert this description if one is already familiar with tangent vectors - a tangent vector can be regarded as a unit vector that points "along the curve", serving as a formal definition of what "pointing along the curve" means.
A geodesic parallel transports its own tangent vector by definition. So if a vector points along the geodesic, it will continue to do so as you parallel transport it along the geodesic.
If the vector is initially at some angle to the geodesic, the angle is a constant as one parallel transports the vector along the geodesic.
This description as written doesn't tell us how to parallel transport vectors along a non-geodesics, but hopefully it's good enough to work out the example in which the only parallel transport occurs along geodesics, and see that the vector is in fact rotated.
If you parallel transport a vector along a geodesic, it maintains a constant angle to the geodesic in question.
More precisely, the inner product of the tangent vector to the geodesic and the vector in question remains constant. Yes, this is true; parallel transport preserves inner products.
Misunderstandings all round, then. Sorry.
I do agree with your description. However, I think that it might be introducing things that are a bit beyond someone like the OP, who asks questions like "Do I need to learn about differential geometry in order to understand Riemannian manifold?" Perhaps I could revise my description to start the journey at the pole, avoiding the directional ambiguities at the pole when they meet up. I could also explicitly make clear, for those in the know, that I am aware that I'm cheating (in a sense) by picking a case where intuitive notions of "in the same direction" match up in both coordinate and local representations, and the fact that I'm taking advantage of our intuition of how ##S^2## embedded in ##R^3## works.
I should also belatedly reference Carroll's lecture notes, which is where I first saw the example. I lost my first version of the post and apparently forgot to put the reference into version 2.
B. Parent • AE23815 Heat Transfer
2017 Heat Transfer Midterm Exam
Friday April 21st 2017
NO NOTES OR BOOKS;
USE HEAT TRANSFER TABLES THAT WERE DISTRIBUTED;
ANSWER ALL 4 QUESTIONS; ALL QUESTIONS HAVE EQUAL VALUE.
Question #1
The temperature distribution in a certain plane wall is $$ \frac{T-T_1}{T_2-T_1}=C_1+C_2 x^2 + C_3 x^3$$ where $T_1$ and $T_2$ are the temperatures on each side of the wall. If the thermal conductivity of the wall is constant and the wall thickness is $L$, derive an expression for the heat generation per unit volume as a function of $x$, the distance from the plane where $T=T_1$. Let the heat generation be $S_0$ at $x=0$.
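Answer 1 below can be verified symbolically. This sketch (an added check, assuming the boundary conditions $T(0)=T_1$ and $T(L)=T_2$ implicit in the problem statement) recovers the heat generation from the temperature profile via the steady conduction equation $k\,T''+S=0$:

```python
import sympy as sp

x, L, k, S0, T1, T2 = sp.symbols('x L k S_0 T_1 T_2')
C1, C2, C3 = sp.symbols('C_1 C_2 C_3')

T = T1 + (T2 - T1)*(C1 + C2*x**2 + C3*x**3)
S = -k*sp.diff(T, x, 2)              # steady 1-D conduction: k T'' + S = 0

sol = sp.solve([T.subs(x, 0) - T1,   # T(0) = T1  ->  C1 = 0
                T.subs(x, L) - T2,   # T(L) = T2
                S.subs(x, 0) - S0],  # S(0) = S0
               [C1, C2, C3], dict=True)[0]
S_of_x = sp.expand(S.subs(sol))
target = S0 - 3*x*S0/L - 6*x*k*(T2 - T1)/L**3
print(sp.simplify(S_of_x - target))  # -> 0, matching answer 1
```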
Question #2

You are working for KAI (Korea Aerospace Industries) and are in charge of the design of the cooling system of the altimeter installed in the cockpit of the A50 fighter jet. The altimeter requires 50 Watts of power to operate and has dimensions of 10 cm$~\times~$10 cm$~\times~$10 cm. The design of the cooling system should be such that it keeps the back surface of the altimeter below 60$^\circ$C while minimizing additional weight. Recalling the theory learned in your Heat Transfer course that you took several years ago at PNU, you decide to cool the altimeter by installing on its backside 10 aluminum fins with a thickness of 2 mm. The fins are rectangular, have a width equal to the one of the altimeter, and are long enough that the tips can be considered insulated. Knowing that the air behind the instrument panel is at a temperature of $20^\circ$C with an associated convective heat transfer coefficient of $h=12~$W/m$^2\cdot^\circ$C, find the value of the fin length that matches the design constraints. Take into consideration the fact that the convective heat transfer coefficient is not known accurately and may vary by as much as 30%.
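A short script reproduces the 7.9 cm of answer 2. The conductivity is an assumption ($k\approx 200$ W/m$\cdot$K, a typical table value for aluminum alloys); the 30% uncertainty in $h$ is handled by designing for the worst case $h=0.7\times 12$ W/m$^2\cdot^\circ$C, and convection from the exposed base area between the fins is neglected, as the answer key appears to do:

```python
import math

k = 200.0                      # W/(m K), assumed aluminum conductivity
h = 0.7 * 12.0                 # worst-case convective coefficient (30% low)
n_fins, W, t = 10, 0.10, 0.002
Tb, Tinf, q_total = 60.0, 20.0, 50.0

P, A = 2*(W + t), W*t          # fin perimeter and cross-section
theta = Tb - Tinf
m = math.sqrt(h*P/(k*A))
M = math.sqrt(h*P*k*A)*theta   # heat rate of an infinitely long fin
q_fin = q_total/n_fins         # 5 W must leave through each fin
L = math.atanh(q_fin/M)/m      # insulated tip: q = M tanh(m L)
print(round(100*L, 1), "cm")   # about 7.9 cm
```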
Question #3

Consider the following fin made of copper joining two objects each with the temperature $T_0$:
For $W=1$ m, $t=0.01$ m, $L=3$ m, and given the copper properties $c=393~{\rm J/kgK}$, $k=386~{\rm W/mK}$, $\rho=9000~{\rm kg/m^3}$, and knowing that $T_0=300^\circ$C and $T_\infty=20^\circ$C, and that the conductive heat transfer between the fin and the objects cooled corresponds to: $$ (q_x)_{x=0}=-(q_x)_{x=L}=1700~{\rm W} $$ do the following:
(a) Find $h$, the convective heat transfer coefficient over all exposed surfaces of the fin.
(b) Find the fin efficiency $\eta_{\rm fin}$.
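For question 3, symmetry makes the midpoint at $L/2$ adiabatic, so each half behaves as an insulated-tip fin of length 1.5 m. The sketch below (added here; plain bisection, no root-finding library) solves for $h$ and then computes the fin efficiency, matching answers 3 up to rounding:

```python
import math

k, W, t, Lfin = 386.0, 1.0, 0.01, 3.0   # copper fin, geometry in meters
T0, Tinf, q0 = 300.0, 20.0, 1700.0      # 1700 W enters at each end
P, A = 2*(W + t), W*t                   # perimeter, cross-section
theta, Lh = T0 - Tinf, Lfin/2           # adiabatic midpoint by symmetry

def q_end(h):
    """Heat entering one half-fin (insulated-tip formula)."""
    m = math.sqrt(h*P/(k*A))
    return math.sqrt(h*P*k*A)*theta*math.tanh(m*Lh)

lo, hi = 0.1, 50.0                      # bracket for h; q_end grows with h
for _ in range(60):                     # bisection
    mid = (lo + hi)/2
    lo, hi = (mid, hi) if q_end(mid) < q0 else (lo, mid)
h = (lo + hi)/2
eta = q0/(h*P*Lh*theta)                 # actual / (entire surface at T0)
print(round(h, 1), round(eta, 2))       # about 4.9 and 0.41 (key: 5 and 0.4)
```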
Question #4

Consider a 0.0245 m-radius sphere made of yellow-pine wood, initially at a temperature of $200^\circ$C. The sphere is cooled with cold air at a temperature of $T_\infty=20^\circ$C and a convective heat transfer coefficient $h=3~{\rm W/m^2 K}$. Knowing that after a time $\Delta t$, the sphere loses 13.114 kJ to the environment, do the following:
(a) Find the time elapsed, $\Delta t$, in seconds.
(b) At a time of $t=\Delta t$, find the center temperature of the sphere in Celsius.
(c) At a time of $t=\Delta t$, find the temperature on the surface of the sphere in Celsius.
1. $S_0-\frac{3x S_0}{L}-\frac{6 x k}{L^3}(T_2-T_1)$
2. 7.9 cm.
3. 5 W/m$^2$K, 0.4.
4. 5853 s, 92$^\circ$C, 76$^\circ$C.
Universal geometric algebra
In mathematics, a universal geometric algebra is a type of geometric algebra generated by real vector spaces endowed with an indefinite quadratic form. Some authors restrict this to the infinite-dimensional case.
The universal geometric algebra ${\mathcal {G}}(n,n)$ of order $2^{2n}$ is defined as the Clifford algebra of $2n$-dimensional pseudo-Euclidean space $\mathbf{R}^{n,n}$.[1] This algebra is also called the "mother algebra". It has a nondegenerate signature. The vectors in this space generate the algebra through the geometric product. This product makes the manipulation of vectors more similar to the familiar algebraic rules, although non-commutative.
When n = ∞, i.e. there are countably many dimensions, then ${\mathcal {G}}(\infty ,\infty )$ is called simply the universal geometric algebra (UGA), which contains vector spaces such as $\mathbf{R}^{p,q}$ and their respective geometric algebras ${\mathcal {G}}(p,q)$.
UGA contains all finite-dimensional geometric algebras (GA).
The elements of UGA are called multivectors. Every multivector can be written as the sum of several r-vectors. Some r-vectors are scalars (r = 0), vectors (r = 1) and bivectors (r = 2).
One may generate a finite-dimensional GA by choosing a unit pseudoscalar (I). The set of all vectors that satisfy
$a\wedge I=0$
is a vector space. The geometric product of the vectors in this vector space then defines the GA, of which I is a member. Since every finite-dimensional GA has a unique I (up to a sign), one can define or characterize the GA by it. A pseudoscalar can be interpreted as an n-plane segment of unit area in an n-dimensional vector space.
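The membership test $a\wedge I=0$ is easy to experiment with in code. The following is a minimal sketch (my own illustration, not from the article) of a geometric algebra with diagonal metric, using the standard bitmask encoding of basis blades; the choice of ${\mathcal G}(2,2)$ and the names are illustrative:

```python
def reorder_sign(a, b):
    """Sign from bringing the product of basis blades a, b to canonical order."""
    a >>= 1
    swaps = 0
    while a:
        swaps += bin(a & b).count("1")
        a >>= 1
    return -1 if swaps % 2 else 1

def gp(A, B, sig):
    """Geometric product of multivectors (dicts: bitmask -> coefficient)."""
    out = {}
    for a, ca in A.items():
        for b, cb in B.items():
            s = reorder_sign(a, b)
            for i in range(len(sig)):       # contract shared basis vectors
                if (a & b) >> i & 1:
                    s *= sig[i]
            m = a ^ b
            out[m] = out.get(m, 0) + s*ca*cb
    return {m: c for m, c in out.items() if c}

def wedge(A, B):
    """Outer product: only terms with disjoint sets of basis vectors survive."""
    out = {}
    for a, ca in A.items():
        for b, cb in B.items():
            if a & b == 0:
                m = a | b
                out[m] = out.get(m, 0) + reorder_sign(a, b)*ca*cb
    return {m: c for m, c in out.items() if c}

# G(2,2): e1, e2 square to +1 and f1, f2 to -1.
sig = [1, 1, -1, -1]
e1, e2, f1 = {0b0001: 1}, {0b0010: 1}, {0b0100: 1}
I = wedge(e1, e2)          # pseudoscalar of the Euclidean e1-e2 plane
print(gp(e1, e1, sig))     # {0: 1}
print(gp(f1, f1, sig))     # {0: -1}
print(wedge(e1, I))        # {}       e1 satisfies a ^ I = 0
print(wedge(f1, I))        # nonzero: f1 is not in the plane of I
```

Here $e_1\wedge I=0$ because $e_1$ lies in the plane whose pseudoscalar is $I$, while $f_1\wedge I\neq 0$, which is the characterization of the finite-dimensional GA described above.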
Vector manifolds
A vector manifold is a special set of vectors in the UGA.[2] These vectors generate a set of linear spaces tangent to the vector manifold. Vector manifolds were introduced to do calculus on manifolds so one can define (differentiable) manifolds as a set isomorphic to a vector manifold. The difference lies in that a vector manifold is algebraically rich while a manifold is not. Since this is the primary motivation for vector manifolds the following interpretation is rewarding.
Consider a vector manifold as a special set of "points". These points are members of an algebra and so can be added and multiplied. These points generate a tangent space of definite dimension "at" each point. This tangent space generates a (unit) pseudoscalar which is a function of the points of the vector manifold. A vector manifold is characterized by its pseudoscalar. The pseudoscalar can be interpreted as a tangent oriented n-plane segment of unit area. Bearing this in mind, a manifold looks locally like Rn at every point.
Although a vector manifold can be treated as a completely abstract object, a geometric algebra is created so that every element of the algebra represents a geometric object and algebraic operations such as adding and multiplying correspond to geometric transformations.
Consider a set of vectors $\{x\} = M^n$ in UGA. If this set of vectors generates a set of "tangent" simple $(n+1)$-vectors, which is to say
$\forall x\in M^{n}:\exists I_{n}(x)=x\wedge A(x)\mid I_{n}(x)\lor M_{n}=x$
then $M^n$ is a vector manifold, the value of $A$ is that of a simple $n$-vector. If one interprets these vectors as points then $I_n(x)$ is the pseudoscalar of an algebra tangent to $M^n$ at $x$. $I_n(x)$ can be interpreted as a unit area at an oriented $n$-plane: this is why it is labeled with $n$. The function $I_n$ gives a distribution of these tangent $n$-planes over $M^n$.
A vector manifold is defined similarly to how a particular GA can be defined, by its unit pseudoscalar. The set $\{x\}$ is not closed under addition and multiplication by scalars. This set is not a vector space. At every point the vectors generate a tangent space of definite dimension. The vectors in this tangent space are different from the vectors of the vector manifold. In comparison to the original set they are bivectors, but since they span a linear space—the tangent space—they are also referred to as vectors. Notice that the dimension of this space is the dimension of the manifold. This linear space generates an algebra and its unit pseudoscalar characterizes the vector manifold. This is the manner in which the set of abstract vectors $\{x\}$ defines the vector manifold. Once the set of "points" generates the "tangent space" the "tangent algebra" and its "pseudoscalar" follow immediately.
The unit pseudoscalar of the vector manifold is a (pseudoscalar-valued) function of the points on the vector manifold. If this function is smooth, then one says that the vector manifold is smooth.[3] A manifold can be defined as a set isomorphic to a vector manifold. The points of a manifold do not have any algebraic structure and pertain only to the set itself. This is the main difference between a vector manifold and the manifold isomorphic to it. A vector manifold is always a subset of Universal Geometric Algebra by definition and the elements can be manipulated algebraically. In contrast, a manifold is not a subset of any set other than itself, and its elements have no algebraic relation among them.
The differential geometry of a manifold[3] can be carried out in a vector manifold. All quantities relevant to differential geometry can be calculated from In(x) if it is a differentiable function. This is the original motivation behind its definition. Vector manifolds allow an approach to the differential geometry of manifolds alternative to the "build-up" approach where structures such as metrics, connections and fiber bundles are introduced as needed.[4] The relevant structure of a vector manifold is its tangent algebra. The use of geometric calculus along with the definition of vector manifold allow the study of geometric properties of manifolds without using coordinates.
See also
• Conformal geometric algebra
References
1. Pozo, José María; Sobczyk, Garret. Geometric Algebra in Linear Algebra and Geometry
2. Chapter 1 of: [D. Hestenes & G. Sobczyk] From Clifford Algebra to Geometric Calculus
3. Chapter 4 of: [D. Hestenes & G. Sobczyk] From Clifford Algebra to Geometric Calculus
4. Chapter 5 of: [D. Hestenes & G. Sobczyk] From Clifford Algebra to Geometric Calculus
• D. Hestenes, G. Sobczyk (1987-08-31). Clifford Algebra to Geometric Calculus: a Unified Language for mathematics and Physics. Springer. ISBN 902-772-561-6.
• C. Doran, A. Lasenby (2003-05-29). "6.5 Embedded Surfaces and Vector Manifolds". Geometric Algebra for Physicists. Cambridge University Press. ISBN 0-521-715-954.
• L. Dorst, J. Lasenby (2011). "19". Guide to Geometric Algebra in Practice. Springer. ISBN 0-857-298-100.
• Hongbo Li (2008). Invariant Algebras And Geometric Reasoning. World Scientific. ISBN 981-270-808-1.
Pseudonyms in Mathematics
In: 2020, Joseph Malkevitch
Why would there be pseudonyms for the publication of research papers in mathematics? One possible motive might be… that a group of people collaborated on a piece of work and wanted to simplify the multiple authorship to a single name. Another reason appears to be playfulness or whimsy.
Joseph Malkevitch
York College (CUNY)
Are you familiar with the novels of Currer Bell or Ellis Bell? Many people who have enjoyed reading Jane Eyre (1847) and Wuthering Heights (1848) are unaware that the authors of these books at the time they were published were known as Currer Bell and Ellis Bell.
In the 19th century there were several examples of ultimately famous women writing under a male pseudonym. Among the most notable were George Eliot as the name for Mary Ann Evans and George Sand as the name for Amantine Lucile Aurore Dupin. (Eliot attended courses on mathematics and physics in London and Geneva and incorporated mathematical themes in her novels.)
Figure 1 (George Eliot, aka Mary Ann Evans, as painted by Alexandre-Louis-François d'Albert-Durade. Image courtesy of Wikipedia.)
Few who have not studied English literature will recognize the names of Currer Bell, Ellis Bell and Acton Bell as the names under which Charlotte (1816-1855), Emily (1818-1848) and Anne (1820-1849) Brontë published their work in their own times! Another interesting example, this time of a man writing under a pseudonym in the 19th century, was Lewis Carroll, whose true name was C.L. Dodgson.
Figure 2 (C.L. Dodgson, otherwise known as Lewis Carroll, courtesy of Wikipedia.)
Dodgson was a distinguished mathematician but not as distinguished as he was to become by being an author! Few people know Dodgson for his mathematics but many have read the book or seen the movie version of Alice in Wonderland! In recent times, Dodgson's mathematical fame has been revived in association with his appealing method of using ranked ballots to decide elections.
While not a pseudonym, the family name that James Joseph Sylvester (1814-1897) was born into was "Joseph," not Sylvester. The Sylvester family practiced Judaism and Sylvester's older brother decided to change the family name from Joseph to Sylvester to avoid some of the anti-semitism that plagued British Jews at the time.
Figure 3 (James Joseph Sylvester, courtesy of Wikipedia)
Sylvester was a student at Cambridge, but when it came time to graduate and get his Master's degree, regulations would have required him to sign the "Articles," pledging allegiance to the Church of England, something he refused to do. It was his good fortune that Trinity College in Dublin was willing to grant him his degree based on his work at Cambridge because they were more open-minded about religion! Ireland was a country dominated by Catholicism at the time, while Britain had an established church because Henry VIII had broken with Catholicism and set in motion years of religious turmoil by bringing a Protestant reform movement to England. However, a big part of Henry's motive was that the Vatican would not allow him to divorce his first wife, Catherine of Aragon, who was Catholic, so that he could marry Anne Boleyn, who gave birth to the woman who would become Queen Elizabeth I!
Women who wanted to pursue mathematics, or more generally an education, had limited opportunities in the 19th century. It was not until 1869 that Girton College at Cambridge admitted women and, thus, offered a path for women who wanted to pursue mathematics to study the subject in a university setting. Winifred Edgerton Merrill became the first American woman to earn a PhD in mathematics, which she earned in 1886 from Columbia University. But rather amazingly, the granting of degrees to women by Cambridge was not allowed until 1948. The situation for women and Jews had similarities. Despite his growing reputation as a mathematician, Sylvester was not able to get an appointment at Oxford because of his religion. His famous stays in America were attempts to have his talents recognized by academic institutions but for complex reasons, some of which seem to have been related to his personality, he was unable to get a stable long term position in the United States. Ironically, he returned from America to Britain because he was offered the Savilian Professorship in Geometry at Oxford University, because the rules forbidding Jews to have that position had changed. (For more about Sylvester's career and the careers of prominent women mathematicians such as Emmy Noether and Sofya Kovalevskaya, the first woman to earn a doctorate in mathematics, see the April 2015 Feature Column on Mathematical Careers.)
Figure 4 (Sofya Kovalevskaya, courtesy of Wikipedia.)
Why would there be pseudonyms for the publication of research papers in mathematics? One possible motive might be that a paper could seem less impressive to a community used to a particular person's "impressive" work, and one might want to protect one's reputation by publishing under a pseudonym. Another reason might be that a group of people collaborated on a piece of work and wanted to simplify the multiple authorship to a single name. Yet another reason appears to be playfulness or whimsy. While there are many pseudonyms that were used for groups of mathematicians, the best known is Bourbaki. Bourbaki represented a group of mathematicians, initially operating primarily out of France but eventually having members from many countries, whose aim was to put mathematics on a "sound" foundation rather than to look at its separate pieces as the total edifice. Bourbaki stood for the idea of showing with "hindsight" how to structure the vast river of mathematical knowledge in a way that built organically from a well-structured foundation. Many of the members of Bourbaki had distinguished careers as individuals, but I will not attempt to survey the members of Bourbaki here, nor the group's history and mathematical accomplishments, vast though they are. Instead, my goal is to mention two relatively recent groups of mathematicians who published under a single anonymous name, in mathematical areas with a good track record of producing mathematics that can be understood with a relatively limited background. I will say some things about why they might have done this and call attention to some of the research work that these people accomplished as individuals.
So I will look in turn at the work of Blanche Descartes and G.W. Peck. Blanche started "her" work earlier than "G.W.," so I will start with her.
Blanche Descartes
What associations do you have with the name Blanche Descartes? Perhaps Blanche Descartes was Rene Descartes's (1596-1650) wife or daughter? In fact Descartes never married but he did have a daughter Francine who died at the age of 5. But seemingly the name Blanche Descartes has no direct connection with Descartes, the great French mathematician and philosopher. Blanche Descartes was a pseudonym initially for joint work by Rowland Brooks, Cedric Smith, Arthur Stone and William Tutte. The four of them met as undergraduate students at Cambridge University in 1936 and were members of the Trinity Mathematical Society. The photo below shows them attending a meeting of the Trinity Mathematical Society, which, as you notice, has no female members. Female students were not allowed to study at Trinity College until much later, the first women arriving in 1976!
Figure 5 (The Trinity Mathematical Society, photo courtesy of Wikipedia.)
Trinity College at Cambridge, where Isaac Newton had been an undergraduate and later a Fellow, was especially associated with mathematics. The four decided to use the name Blanche Descartes to publish some initial joint work they did. It has been argued that "Blanche" derives from letters involved in their names: using the first letters of Bill (William), Leonard (Brooks's middle name), Arthur and Cedric, one gets BLAC, and with some "padding" this becomes BLAnChe. Perhaps Descartes involves a play on the phrase 'carte blanche', and, since they all came to mathematics for different reasons, perhaps a reference to Rene Descartes, who also dabbled in things other than mathematics.
One early question that the four looked at has to do with a problem which has come to be known as "squaring the square." The problem involves taking a square and decomposing (tiling) it into smaller squares of different side lengths. One can also consider the more general question of when a rectangle can be tiled with squares of different side lengths. Figure 6 shows an example, discovered by A. Duijvestijn.
Figure 6 (A square decomposed into squares of all different sizes. This one has the smallest number of possible squares in any such decomposition. Image courtesy of Wikipedia.)
This tantalizing question attracted much interest over the years and has been shown to have unexpected connections to mathematical ideas that seem to have no connection with tilings. Some interest in this problem occurred in the "recreational" mathematics community. It is not exactly clear why Brooks, Smith, Stone and Tutte chose to use a pseudonym, but perhaps it had to do with their concern that this question was of less mathematical "importance" than the topics their teachers at Cambridge considered worth investigating. Mathematics is full of examples that were considered recreational at one time but have been shown to be rooted in deeper mathematical ideas. The squared-square problem is an example of such a problem.
For each of the members of Blanche Descartes I will indicate some information about them and try to provide an example of some mathematics associated with them. All of them were to achieve some measure of fame and acclaim under their "real" names, sometimes in limited venues but in Tutte's case on a big stage.
William Tutte (1917-2002)
Figure 7 (Photo of William Tutte. Courtesy of Wikipedia.)
Although Tutte was born in England, his academic career was spent largely in Canada at the University of Waterloo. Mathematics at this university has an unusually complex institutional footprint:
Department of Applied Mathematics.
Department of Combinatorics and Optimization.
David R. Cheriton School of Computer Science.
Department of Pure Mathematics.
Department of Statistics and Actuarial Science.
When Tutte first went to Canada, he did so at the invitation of H.S.M. Coxeter to join the faculty at the University of Toronto. Not long after the University of Waterloo was founded, Tutte moved there and was associated with the Department of Combinatorics and Optimization, which at the present time has about 30 members. His work in England before taking up residency at Waterloo is less well known because he was part of the team assembled at Bletchley Park; even after the war, the people who worked at Bletchley Park were not allowed to speak of their work there until quite recently. This team included Alan Turing, who is justly famous for his work on breaking the Enigma machine cipher. However, even after the Enigma ciphers were being read, a code known as Fish, used by the German army for messages sent by leaders in Berlin to field commanders, had not been broken. Tutte was one of a team that managed to read Fish even though, unlike the case for Enigma, there was no physical machine in the possession of the code breakers to help them with breaking the code. In essence, Tutte figured out how the machine used to send Fish messages worked, reconstructing its exact operation without ever having seen it! When this was accomplished, it became possible to decipher Fish messages for the remainder of the war without the Germans realizing that their codes were being read. This work was done before Tutte completed his doctorate.
When WWII ended, Tutte returned as a graduate student to Cambridge in 1945 to work on his doctorate. During this period he produced an example related to the famous four-color theorem which is known today as the Tutte graph in his honor. In this context, a graph is a collection of points (called vertices) and edges, which can be curved or straight, that join up vertices. There are various definitions of graphs but the one preferred by Tutte allows vertices to be joined to themselves (these edges are called loops) and allows pairs of distinct vertices to be joined by more than one edge. We see in Figure 8 on the right the Tutte graph, put together using three copies of the configuration on the left, known as a Tutte triangle. This graph has three edges at every vertex, that is, it is 3-valent or degree 3, is planar (can be drawn in the plane so that edges meet only at vertices) and is 3-connected. When a graph has been drawn in the plane so that edges meet only at vertices, the graph is called a plane graph. 3-connected means that for any two vertices $u$ and $v$ in the graph one can find at least 3 paths from $u$ to $v$ that have only the vertices $u$ and $v$ in common. What is special about the Tutte graph is that there is no simple tour of its vertices that visits each vertex once and only once in a simple circuit, a tour usually known as a Hamilton circuit. If it were true that every 3-valent 3-connected graph in the plane had a Hamilton circuit, there would be a simple proof of the four-color theorem (every graph drawn in the plane can be face colored with 4 or fewer colors). (The coloring rule is that any two faces that share an edge would have to get different colors.)
Figure 8 (A Tutte triangle on the left and three Tutte triangles assembled to form a plane, 3-valent, 3-connected graph with no Hamilton circuit. Images courtesy of Wikipedia.)
While the Tutte graph is not the smallest planar, degree 3, 3-connected graph which has no simple closed curve tour of all of its vertices (such a tour is called a Hamilton circuit or HC), it is relatively easy to see why it can't have an HC. This follows from the fact that one can show that if an HC visits the vertices of a Tutte triangle, then the HC must enter using one of the two edges shown at the bottom and exit the Tutte triangle using the edge at the top. For those interested in applications of mathematics, the problem known as the Traveling Salesperson Problem (TSP) requires finding a minimum cost Hamilton circuit in a graph which has non-zero weights on its edges. Finding routes for school buses, group taxis, or when you run errands involves problems of this kind. Finding solutions to the TSP for large graphs seems to require rapidly growing amounts of computation as the number of sites to be visited (the vertices of the graph) grows.
In addition to his famous example of a graph not having a Hamilton circuit, Tutte also showed a very appealing theorem about when a graph must have a Hamilton circuit:
Theorem (W. T. Tutte).
Every 4-connected plane graph has a Hamiltonian circuit.
The 4-connected condition means that given any pair of vertices $u$ and $v$ in the graph there must be at least 4 paths from $u$ to $v$ that have only $u$ and $v$ in common. In particular, this means that the graph must have at least 4 edges at each vertex. Figures 9 and 10 show a very regular 4-connected plane graph, the graph of the Platonic solid known as the icosahedron, and a much more irregular graph. Can you find a Hamilton circuit in each of these graphs? While there are algorithms (procedures) for finding an HC in a plane 4-connected graph that run relatively fast, the problem of finding an HC in an arbitrary graph is known to be computationally hard.
Figure 9 (A 4-connected planar graph. This is the graph of the regular icosahedron. Image courtesy of Wikipedia.)
Figure 10 (A planar 4-connected graph which is not very symmetric.)
Cedric A. B. Smith (1917-2002)
While Smith studied mathematics at Cambridge and published some research work in mathematics, his career path did not involve becoming an academic mathematician associated with a mathematics department. Rather he pursued a career as a statistician and worked at the Galton Laboratory of University College, University of London. Some of Smith's work was related to genomics, helping to locate where particular genes were on a chromosome.
Like Tutte, Smith made an easy-to-state contribution to the theory of Hamilton circuits. Clearly, having conditions guaranteeing that a family of graphs must have an HC, or providing examples of graphs where no HC exists, is intriguing. But it is also natural to ask whether there might be graphs with exactly one HC. Smith was able to provide an answer to this question for the important class of graphs where every vertex has exactly three edges.
Theorem.
Every 3-valent (cubic) graph has an even number of circuits that pass through each vertex once and only once (Hamilton circuits).
Because this graph has one Hamilton circuit tour…
Figure 11 (The dark edges show a Hamilton circuit in a 3-valent (cubic) graph.)
…by Smith's Theorem it must have another!
Rowland Leonard Brooks (1916-1993)
Rowland Brooks is best known for a theorem about coloring graphs that is named for him. Let me begin with an application and show how the theorem of Brooks makes it possible to get insight into such situations.
Suppose we have a collection of committees, say, committees of students and faculty at a college. The collection of individuals who are on some committee will be denoted by S. The committee names are a, b, …, h and a table (Figure 12) indicates with an X when a pair of committees has one or more members of S on both committees. It would be nice to be able to schedule the 8 committees into as few hourly time slots as possible so that no two committees that have a common member meet at the same time. Committees with no common members could meet at the same time. A person can't be at the meeting of two committees the person serves on if those committees meet at the same time.
Figure 12 (A table where when two committees have one or more people on both committees these committees should not meet at the same time. A conflict is indicated with an X.)
We can display the conflict information geometrically using a graph by having a dot for each committee and joining two vertices representing committees with an edge if these committees can't meet at the same time because they have members in common.
Figure 13 (A conflict graph for 8 committees. Committees joined by an edge should have their meetings at different times. The graph can be vertex colored with 4 colors.)
The vertex coloring problem for graphs assigns a label to each vertex, called a color, with the requirement that two vertices joined by an edge get different colors. We could assign a different color to each dot (vertex) in Figure 13, which would mean that we would use 8 time slots for the committee meetings. The minimum number of colors needed to color the vertices of a graph is called the chromatic number of the graph. Brooks was able to find a way to get what is often a good estimate for the chromatic number of a graph.
Brooks' Theorem:
If G is a graph which is not a complete graph or a circuit of odd length, then the chromatic number of G is at most (less than or equal to) the largest degree (valence) that occurs in G.
For a general graph it is a difficult computational problem to determine the chromatic number, but Brooks' Theorem often allows one to get a good estimate quickly. In particular, if a graph has a million vertices, no matter how complex the graph, if it has maximum valence 5, it will not require more than 5 colors (though the chromatic number may be less).
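Brooks' bound is easy to experiment with. The sketch below (my own illustration, not Brooks's proof) runs the standard greedy coloring algorithm, which never uses more than (maximum degree + 1) colors; Brooks' Theorem says the "+1" can be dropped except for complete graphs and odd circuits.

```python
# Greedy vertex coloring: scan the vertices in some order and give each
# the smallest color not used by an already-colored neighbor. This uses
# at most (max degree + 1) colors on any graph.

def greedy_coloring(adj):
    """adj maps each vertex to the set of its neighbors."""
    color = {}
    for v in adj:  # any fixed vertex order gives the (max degree + 1) bound
        used = {color[u] for u in adj[v] if u in color}
        c = 0
        while c in used:
            c += 1
        color[v] = c
    return color

# An odd circuit (a 5-cycle) is one of the exceptions in Brooks' Theorem:
# its maximum degree is 2, yet it needs 3 colors.
cycle5 = {i: {(i - 1) % 5, (i + 1) % 5} for i in range(5)}
coloring = greedy_coloring(cycle5)
assert all(coloring[u] != coloring[v] for u in cycle5 for v in cycle5[u])
print(max(coloring.values()) + 1)  # number of colors used: 3
```

On the committee graph of Figure 13 the same procedure would give a quick upper bound on the number of time slots needed.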
Can you find a coloring of the vertices in the graph in Figure 13 that improves over the estimate given by Brooks' Theorem?
Arthur Stone (1916-2000)
Arthur Stone eventually made his way to the United States and taught for many years at the University of Rochester. His wife Dorothy (Dorothy Maharam) was also a mathematician and his son David (deceased 2014) and daughter Ellen also became mathematicians.
While Stone contributed to mathematics in a variety of theoretical ways perhaps he is best known for his invention of what are known as flexagons. The hexaflexagon shown in Figure 14 can be folded from the triangles shown in Figure 15.
Figure 14 (An example of a hexaflexagon, image courtesy of Wikipedia.)
Figure 15 (A template for a hexaflexagon, image courtesy of Wikipedia.)
While the members of Blanche Descartes published several articles, most of these publications were in fact the work of William Tutte. A more recent example of a group of mathematicians who published together under one name will now be treated.
G.W. Peck
G. W. Peck's publications can be found here. Unlike Blanche Descartes, whose members are now all deceased, three of the six mathematicians who have sometimes written under the pseudonym G.W. Peck are still alive (2020). What associations do you have with this name? Perhaps you might think of the distinguished actor Gregory Peck, but Eldred Gregory Peck (April 5, 1916 – June 12, 2003), the actor, did not write under the name G.W. Peck. The individuals involved in G.W. Peck's publications were Ronald Graham, Douglas West, George Purdy, Paul Erdős, Fan Chung, and Daniel Kleitman.
Let me make a few brief comments about each of the members of this remarkable collaboration of distinguished mathematicians in turn. As individuals they illustrate the remarkably varied ways that individuals have mathematical careers.
Ron (Ronald) Graham (1935-2020)
Figure 16 (A photo of Ronald Graham, courtesy of Wikipedia.)
Some people perhaps know Ron Graham best for his juggling and studying mathematical problems associated with juggling. However, before his recent death he was one of America's best known research mathematicians. Part of Graham's fame beyond the mathematics community itself came from the fact that Martin Gardner, a prolific popularizer of mathematics, often wrote about ideas which were called to his attention by Graham. But somewhat unusually for mathematicians known for their theoretical work, Ron Graham also had a career as a distinguished applied mathematician. He had a long career as a mathematician at Bell Laboratories where he worked at times under the direction of Henry Pollak, who in addition to his work at Bell Laboratories was also a President of NCTM and MAA. During his career Graham was President of the American Mathematical Society and of the MAA. He also was head of the Mathematics Section (the position no longer exists) of the New York Academy of Sciences. During his long stay at Bell Laboratories he championed examples of discrete mathematical modeling. This included popularizing the use of graphs and digraphs to solve problems involved in scheduling machines in an efficient manner. He also called attention to the way that problems about packing bins with weights have connections to machine scheduling problems.
Graham published on a vast array of topics ranging from very technical to expository articles. His many contributions to mathematics can be sampled here.
Douglas West (1953- )
Figure 17 (A photo of Douglas West, courtesy of Wikipedia.)
In addition to his many research papers, West has published several comprehensive books in the areas of discrete mathematics and combinatorics. He is also noteworthy for maintaining and collecting lists of unsolved problems in the general area of discrete mathematics. In recent years he has helped edit the Problems Section of the American Mathematical Monthly.
George Purdy (1944-2017)
Purdy made many contributions to discrete geometry. He was particularly interested in questions related to arrangements of lines and the Sylvester-Gallai Theorem.
Paul Erdős (1913-1996)
The best known author of this group was Paul Erdős (1913-1996) whose career was marked by not having a specific educational or industrial job. For much of his life Erdős traveled between one location and another to work privately with individuals, and/or give public lectures where he often promoted easy-to-state but typically hard-to-solve problems in geometry, combinatorics and number theory.
Figure 18 (A photo of Paul Erdős, courtesy of Wikipedia.)
Here is a sample problem in discrete geometry that Paul Erdős called attention to, whose investigation over the years has sprouted many new techniques and ideas. It has also resulted in new questions as some aspects of the initial problem got treated. Suppose that $P$ is a finite set of $n$ distinct points in the plane with the distance between the points being computed using Euclidean distance. One must be careful to specify what distance is to be used because, for example, one could instead compute the result using another distance function, say, taxicab distance. Suppose $D(n)$ denotes the set of numbers one gets for the distances that occur between pairs of points in $P$. Paul Erdős raised questions such as:
(a) What is the largest possible number of elements in $D(n)$?
(b) What is the smallest possible number of elements in $D(n)$?
(c) Can all of the numbers in $D(n)$ be integers?
(d) Can all of the integers in $D(n)$ be perfect squares?
Such problems have come to be known as distinct distance questions. Question (a) is quite easy. However, question (b) is not fully understood even today. Initially Erdős was concerned with the behavior of the smallest number of values that could occur in $D(n)$ as $n$ got larger and larger. As tools for getting insight into such problems have grown, along with interesting examples with particular properties, many new questions have come to be asked. For example, if the points of the original set lie in equal numbers on two disjoint circles, what is the smallest number of distinct distances that can occur?
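For readers who want to experiment, here is a small Python sketch (my own, purely illustrative) that computes the set $D(n)$ for a finite point set. Even the 3-by-3 integer grid shows how few distinct distances can occur.

```python
from itertools import combinations
from math import dist

def distinct_distances(points):
    """The set of distinct Euclidean distances among the given points."""
    return {dist(p, q) for p, q in combinations(points, 2)}

# The 3-by-3 integer grid: 9 points and 36 pairs of points, but only
# 5 distinct distances (1, sqrt(2), 2, sqrt(5), 2*sqrt(2)).
grid = [(x, y) for x in range(3) for y in range(3)]
print(len(distinct_distances(grid)))  # 5
```

Swapping in a taxicab distance function would let you compare the two metrics mentioned above.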
Here is a problem in this spirit to contemplate. In each of Figures 19 and 20 we have a configuration of 20 points, together with some of the line segments that join up these points, when we think of the diagrams as graphs. Figure 19 shows a polyiamond, a configuration of equilateral triangles that meet edge to edge while Figure 20 shows a polyomino, congruent squares which meet edge to edge.
Figure 19 (A polyiamond with 20 vertices.)
Figure 20 (A polyomino with 20 vertices)
Question: Investigate the number of distinct distances and integer distances that can occur in each of the configurations in Figures 19 and 20. Can you formulate interesting questions to ask about distances based on these two configurations and your thinking about them?
Fan Chung (1949- )
Figure 21 (A photo of Ron Graham, Paul Erdős and Fan Chung, photo courtesy of Wikipedia.)
Fan Chung is an extremely distinguished mathematician. She was born in Taiwan but did her graduate studies in America, and Herbert Wilf was her doctoral thesis advisor. She was the wife of Ronald Graham, with whom she wrote many joint papers. Like her husband, she worked in both the industrial world and in academia. She worked at Bell Laboratories and at Bellcore after the breakup of the Bell System. Her research covers an unusually broad range of topics, but includes especially important work in number theory, discrete mathematics, and combinatorics. Fan Chung and Ron Graham both made important contributions to Ramsey Theory. Fan Chung still teaches at the University of California, San Diego.
Daniel Kleitman (1934- )
Figure 22 (A photo of Daniel Kleitman, courtesy of Wikipedia.)
The person who perhaps has most often published under the G.W. Peck moniker has been Daniel Kleitman, whose long career has been associated with MIT. Many of his publications are in the area of combinatorics and discrete mathematics, in particular results about partially ordered sets.
To the extent that there are individuals who can't study or publish mathematics because of discrimination, hopefully we can all work to tear down these barriers. Mathematical progress requires all the help it can get and to deny people the fulfillment that many who get pleasure from doing mathematics have is most unfortunate. While in the past pseudonyms were sometimes chosen to avoid prejudicial treatment, we can hope this will seem less and less necessary in the future.
Those who can access JSTOR can find some of the papers mentioned above there. For those with access, the American Mathematical Society's MathSciNet can be used to get additional bibliographic information and reviews of some of these materials. Some of the items above can be found via the ACM Digital Library, which also provides bibliographic services.
Appel, K. and W. Haken, Every Planar Map is Four Colorable. I: Discharging, Illinois J. Math. 21 (1977) 429-490.
Appel, K., W. Haken, and J. Koch, Every Planar Map is Four Colorable. II: Reducibility, Illinois J. Math. 21 (1977) 491-567.
Ball, Derek Gordon. Mathematics in George Eliot's Novels. PhD thesis, University of Leicester, 2016.
Brooks, R., and C. Smith, A. Stone, W. Tutte, The Dissection of Rectangles into Squares, Duke Mathematical Journal, 7 (1940) 312-40.
Chiba, N. and T. Nishizeki, "The hamiltonian cycle problem is linear-time solvable for 4-connected planar graphs." Journal of Algorithms 10.2 (1989): 187-211.
Chung, Fan RK, Lectures on spectral graph theory, CBMS Lectures, Fresno, 92 (1996): 17-21.
Chung, Fan, and R. Graham, Sparse quasi-random graphs, Combinatorica 22 (2002): 217-244.
Descartes, Blanche, Network colorings, Math. Gaz. 32 (1948) 67-69.
Erdős, P., and A. H. Stone, On the structure of linear graphs, Bull. Amer. Math. Soc. 52 (1946) 1087-1091.
Erdős, Paul, and George Purdy. "Some extremal problems in geometry." Discrete Math. 1974.
Graham, R., The Combinatorial Mathematics of Scheduling, Scientific American 238 (1978), 124-132.
Graham, R., Combinatorial Scheduling Theory, in Mathematics Today: Twelve Informal Essays, ed. L. Steen, Springer-Verlag, N.Y. (1978), 183-211.
Henle, M. and B. Hopkins, eds. Martin Gardner in the twenty-first century. Vol. 75. American Mathematical Soc., Providence, 2012.
Kleitman, D., On an extremal property of antichains in partial orders. The LYM property and some of its implications and applications, Combinatorics 2 (1974) 77-90.
Kleitman, Daniel J., and Douglas B. West. "Spanning trees with many leaves." SIAM Journal on Discrete Mathematics 4.1 (1991): 99-106.
Purdy, George. "Two results about points, lines and planes." Discrete mathematics 60 (1986): 215-218.
Robertson, N. and D. Sanders, P. Seymour, R. Thomas, New Proof of the Four Colour Theorem. Electronic Research Announcement, Amer. Math. Soc. 2, (1996) 17-25.
Solymosi, J. and C. Tóth, Distinct distances in the plane, Discrete & Computational Geometry 25 (2001) 629-634.
Smith, C. and S. Abbott, The story of Blanche Descartes, Mathematical Gazette (2003) 23-33.
Stone, A. H., Paracompactness and product spaces, Bulletin of the American Mathematical Society 54 (1948): 977-982.
Székely, L., Crossing numbers and hard Erdős problems in discrete geometry, Combinatorics, Probability and Computing 6 (1997) 353-358.
Tutte, W. T., Squaring the Square, Mathematical Games column, ed. M. Gardner, Scientific American, Nov. 1958.
West, D., Introduction to Graph Theory, Vol. 2, Prentice Hall, Upper Saddle River, NJ, 1996.
In: 2020, Tony Phillips
The prehistory of the quadratic formula
Tony Phillips
Quadratic equations have been considered and solved since Old Babylonian times (c. 1800 BC), but the quadratic formula students memorize today is an 18th century AD development. What did people do in the meantime?
The difficulty with the general quadratic equation ($ax^2+bx+c=0$ as we write it today) is that, unlike a linear equation, it cannot be solved by arithmetic manipulation of the terms themselves: a creative intervention, the addition and subtraction of a new term, is required to change the form of the equation into one which is arithmetically solvable. We call this maneuver completing the square.
Here is how students learn about the maneuver today:
The coefficient $a$ is non-zero (or else the equation is linear) so you can divide through by $a$, giving $x^2 +\frac{b}{a}x + \frac{c}{a} =0$. Now comes the maneuver: you add and subtract $\frac{b^2}{4a^2}$, giving $x^2+\frac{b}{a}x +\frac{b^2}{4a^2} -\frac{b^2}{4a^2}+ \frac{c}{a} =0.$ Now the first three terms are a perfect square: $x^2+\frac{b}{a}x +\frac{b^2}{4a^2} = (x+\frac{b}{2a})^2$: you have completed the first two terms to a square. Instead of having both the unknown $x$ and its square $x^2$ in the equation, you have only the second power $(x+\frac{b}{2a})^2$ of a new unknown. Now arithmetic can do the rest. The equation gets rearranged as $(x+\frac{b}{2a})^2 = \frac{b^2}{4a^2} - \frac{c}{a}$ so $x+\frac{b}{2a} = \pm\sqrt{\frac{b^2}{4a^2} - \frac{c}{a}} = \pm\sqrt{\frac{b^2-4ac}{4a^2}}=\pm\frac{\sqrt{b^2-4ac}}{2a}$, giving the solution $x=-\frac{b}{2a}\pm\frac{\sqrt{b^2-4ac}}{2a} = \frac{-b\pm\sqrt{b^2-4ac}}{2a}$. This expression of the solution to the equation as an algebraic function of the coefficients is known as the quadratic formula.
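As a quick numerical check of the formula just derived, here is a short Python sketch (illustrative only, real-root case) that evaluates the quadratic formula.

```python
from math import sqrt

def quadratic_roots(a, b, c):
    """Roots of a*x^2 + b*x + c = 0 by the quadratic formula
    (real case only: b^2 - 4*a*c >= 0 is assumed)."""
    r = sqrt(b * b - 4 * a * c)
    return (-b + r) / (2 * a), (-b - r) / (2 * a)

# x^2 - 7x - 60 = 0 has discriminant 49 + 240 = 289 = 17^2,
# so the two roots come out exactly.
print(quadratic_roots(1, -7, -60))  # (12.0, -5.0)
```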
So completing the square is the essential ingredient in the generation of our handy quadratic formula. And for all the years before the formula existed, that was how quadratic equations were solved. The maneuver was originally explicitly geometric: numbers were identified with areas, and a certain L-shaped polygon was completed to a geometric square. With the development of algebra the job could be done by manipulation of symbols, but for about a millennium the symbolic solution was always buttressed by a geometric argument, as if the algebra alone could not be trusted. Meanwhile our conception of the numbers that the symbols represented evolved, but very slowly. Negative solutions were considered "false" even by Descartes (1596-1650), but once they were accepted, the identification of equations with actual geometric problems eventually evaporated: in Euler's Algebra (1770) it is gone without a trace.
In this column I will explore some episodes in this history, starting with the Old Babylonians and ending with an occurrence of square-completion in an important calculation in 20th century physics.
In Old Babylonian Mathematics
The connection to Plimpton 322
In Islamic mathematics
Aryabhata and Brahmagupta
In Fibonacci's Liber Abaci
In Simon Stevin's L'Arithmétique
Descartes' La Géometrie
In Maria Agnesi's Instituzioni Analitiche …
In Leonhard Euler's Vollständige Anleitung zur Algebra
In 20th century physics
The Yale Babylonian Collection's tablet YBC 6967, as transcribed in Neugebauer and Sachs, Mathematical Cuneiform Texts, American Oriental Society, New Haven, 1986. Size 4.5 $\times$ 6.5cm.
The Old Babylonian tablet YBC 6967 (about 1900 BC) contains a problem and its solution. Here is a line-by-line literal translation from Jöran Friberg, A Remarkable Collection of Babylonian Mathematical Texts, (Springer, New York, 2007).
The igi.bi over the igi 7 is beyond.
The igi and the igi.bi are what?
7 the the igi.bi over the igi is beyond
to two break, then 3 30.
3 30 with 3 30 let them eat each other, then 12 15
To 12 15 that came up for you
1, the field, add, then 1 12 15.
The equalside of 1 12 15 is what? 8 30.
8 30 and 8 30, its equal, lay down, then
3 30, the holder,
from one tear out,
to one add.
one is 12, the second 5.
12 is the igi.bi, 5 the igi.
The Old Babylonians used a base-60 floating-point notation for numbers, so that the symbol corresponding to 1 can represent for example 60 or 1 or 1/60. In the context of YBC 6967, the reciprocal numbers, the igi and the igi.bi, have product 1 0. Their difference is given as 7.
Here is a diagram of the solution to the YBC 6967 problem, adapted from Eleanor Robson's "Words and Pictures: New Light on Plimpton 322" (MAA Monthly, February 2002, 105-120). Robson uses a semi-colon to separate the whole and the fractional part of a number, but this is a modern insertion for our convenience. The two unknown reciprocals are conceptualized as the sides of a rectangle of area (yellow) 1 0 [or 60 in decimal notation]. A rectangle with one side 3;30 [$= 3\frac{1}{2}$] is moved from the side of the figure to the top, creating an L-shaped figure of area 1 0 which can be completed to a square by adding a small square of area 3;30 $\times$ 3;30 = 12;15 [$= 12\frac{1}{4}$]. The area of the large square is 1 0 + 12;15 = 1 12;15 [$ = 72\frac{1}{4}$] with square root 8;30 [$=8\frac{1}{2}$]. It follows that our unknown reciprocals must be 8;30 + 3;30 = 12 and 8;30 − 3;30 = 5 respectively.
In modern notation, the YBC 6967 problem would be $xy = 60, x-y = 7$, or $x^2-7x-60=0$. In this case the term to be added in completing the square is $\frac{b^2}{4a^2}=\frac{49}{4}=12\frac{1}{4}$, corresponding exactly to the area of the small square in the diagram.
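The scribe's recipe translates directly into a few lines of arithmetic. The Python sketch below (my own paraphrase of the tablet's steps, carried out in decimal) mirrors each operation.

```python
# The scribe's steps for x*y = 60, x - y = 7, in decimal arithmetic.
half = 7 / 2                     # "to two break": 3;30
square = half * half             # "let them eat each other": 12;15
area = square + 60               # "1, the field, add": 1 12;15
root = area ** 0.5               # "the equalside of 1 12;15": 8;30
x, y = root + half, root - half  # "to one add" / "from one tear out"
print(x, y)                      # 12.0 5.0 -- the igi.bi and the igi
```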
This tablet, and the several related ones from the same period that exist in various collections (they are cataloged in Friberg's book mentioned above), are significant because they hold a piece of real mathematics: a calculation that goes well beyond tallying to a creative attack on a problem. It should also be noted that none of these tablets contains a figure, even though Old Babylonian tablets often have diagrams. It is as if those mathematicians thought of "breaking," "laying down" and "tearing out" as purely abstract operations on quantities, despite the geometrical/physical language and the clear (to us) geometrical conceptualization.
The Old Babylonian tablet Plimpton 322 (approx. $13 \times 9$ cm) is in the Columbia University Library. Image courtesy of Bill Casselman. For scholarly discussions of its content see Friberg's book or (available online) Eleanor Robson's article referenced above.
There have been many attempts at understanding why Plimpton 322 was made and also how that particular set of numbers was generated. It has been described as a list of Pythagorean triples and as an exact sexagesimal trigonometry table. An interpretation mentioned by R. Creighton Buck and attributed to D. L. Voils (in a paper that was never published) is "that the Plimpton tablet has nothing to do with Pythagorean triplets or trigonometry but, instead, is a pedagogical tool intended to help a mathematics teacher of the period make up a large number of igi-igibi quadratic equation exercises having known solutions and intermediate solution steps that are easily checked" (igi, igi.bi problems involve a number and its reciprocal; the one on YBC 6967 is exactly of this type).
In the solution of the problem of YBC 6967, we squared 3;30 to get 12;15, added 1 to get $1\, 12;15$ and took the square root to get 8;30. Then the solutions were 8;30 + 3;30 and 8;30 $-$ 3;30. So to set up an igi, igi.bi problem which will work out evenly we need a square like 12;15 which, when a 1 is placed in the next base-60 place to the left, becomes another square.
Now the first column of Plimpton 322 contains exactly numbers of this type. For example, in row 5, the first undamaged row, the column I entry is $48\, 54\, 01\, 40$ [=10562500] with square root $54\, 10$ [=3250]. Adding 1 gives $1\, 48\, 54\, 01\, 40$ [=23522500] with square root $80\, 50$ [=4850]. The corresponding igi, igi.bi problem would ask for two reciprocals differing by two times $54\, 10$, i.e. $1\, 48\, 20$; the answer would be $80\, 50 + 54\, 10 = 2\, 15\, 00$ and $80\, 50 - 54\, 10 = 26\, 40$.
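These base-60 claims are easy to verify by machine. The sketch below (my own illustration) converts the digit lists to integers and checks that both numbers are perfect squares.

```python
from math import isqrt

def from_base60(digits):
    """Read a list of base-60 digits as an integer."""
    n = 0
    for d in digits:
        n = 60 * n + d
    return n

col1 = from_base60([48, 54, 1, 40])       # row 5, column I: 10562500
assert isqrt(col1) ** 2 == col1           # a perfect square: 3250^2
bigger = from_base60([1, 48, 54, 1, 40])  # a 1 in the next place to the left
assert isqrt(bigger) ** 2 == bigger       # again a perfect square: 4850^2
print(isqrt(col1), isqrt(bigger))         # 3250 4850
```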
Unfortunately, neither the problem on YBC 6967 nor any of the five igi, igi.bi problems recorded by Friberg from tablet MS 3971 correspond to parameters on Plimpton 322. It is possible that lines on the bottom and reverse of the tablet mean that it was supposed to be extended by an additional 20 or so rows, where those missing parameters would appear. In fact, none of the proposed explanations is completely satisfactory. As Robson remarked, "The Mystery of the Cuneiform Tablet has not yet been fully solved."
Solving quadratic equations by completing the square was treated by Diophantus (c.200-c.285 AD) in his Arithmetica, but the explanations are in the six lost books of that work. Here we'll look at the topic as covered by Muhammad ibn Musa, born in Khwarazm (Khiva in present-day Uzbekistan) and known as al-Khwarizmi, in his Compendium on Calculation by Completion and Reduction dating to c. 820 AD. (I'm using the translation published by Frederic Rosen in 1831). Negative numbers were still unavailable, so al-Khwarizmi, to solve a general quadratic, has to consider three cases. In each case he supposes a preliminary division has been done so the coefficient of the squares is equal to one ("Whenever you meet with a multiple or sub-multiple of a square, reduce it to the entire square").
"roots and squares are equal to numbers" [$x^2 + bx = a$]
"squares and numbers are equal to roots" [$x^2 +a = bx$]
"roots and numbers are equal to squares" [$x^2=bx+a$]
Case 1. al-Khwarizmi works out a specific numerical example, which can serve as a template for any other equation of this form: "what must be the square which, when increased by ten of its roots, amounts to thirty-nine."
"The solution is this: you halve the number of the roots, which in the present case equals five. This you multiply by itself; the product is twenty-five. Add this to thirty-nine, the sum is sixty-four. Now take the root of this, which is eight, and subtract from it half the number of the roots, which is five; the remainder is three. This is the root of the square which you sought for; the square itself is nine."
Note that this is exactly the Old Babylonian recipe, updated from $x(x+7)=60$ to $x^2 +10x = 39$, and that the figure Eleanor Robson uses for her explanation is essentially identical to the one al-Khwarizmi gives for his second demonstration, reproduced here:
"We proceed from the quadrate AB, which represents the square. It is our next business to add to it the ten roots of the same. We halve for this purpose the ten, so it becomes five, and construct two quadrangles on two sides of the quadrate AB, namely, G and D, the length of each of them being five, as the moiety of the ten roots, whilst the breadth of each is equal to a side of the quadrate AB. Then a quadrate remains opposite the corner of the quadrate AB. This is equal to five multiplied by five: this five being half of the number of roots which we have added to each side of the first quadrate. Thus we know that the first quadrate, which is the square, and the two quadrangles on its sides, which are the ten roots, make together thirty-nine. In order to complete the great quadrate, there wants only a square of five multiplied by five, or twenty-five. This we add to the thirty-nine, in order to complete the great square SH. The sum is sixty-four. We extract its root, eight, which is one of the sides of the great quadrangle. By subtracting from this the same quantity which we have before added, namely five, we obtain three as the remainder. This is the side of the quadrangle AB, which represents the square; it is the root of this square, and the square itself is nine."
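Al-Khwarizmi's recipe for this case is already an algorithm. A minimal Python transcription (the function name is ours) follows his steps one by one:

```python
from math import sqrt

def alkhwarizmi_case1(b, a):
    """Solve x^2 + b*x = a by al-Khwarizmi's first-case recipe."""
    half = b / 2            # "halve the number of the roots"        -> 5
    square = half * half    # "multiply by itself"                   -> 25
    total = square + a      # "add this to thirty-nine"              -> 64
    root = sqrt(total)      # "take the root of this"                -> 8
    return root - half      # "subtract ... half the number of roots" -> 3

x = alkhwarizmi_case1(10, 39)
assert x == 3.0 and x * x + 10 * x == 39.0   # "the square itself is nine"
```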
Case 2. "for instance, 'a square and twenty-one in numbers are equal to ten roots of the same square.'"
Solution: "Halve the number of the roots; the moiety is five. Multiply this by itself; the product is twenty-five. Subtract from this the twenty-one which are connected with the square; the remainder is four. Extract its root; it is two. Subtract this from the moiety of the roots, which is five; the remainder is three. This is the root of the square you required, and the square is nine."
Here is a summary of al-Khwarizmi's demonstration. The last of the four figures appears (minus the modern embellishments) in Rosen, p. 18.
The problem set up geometrically. I have labeled the unknown root $x$ for modern convenience. The square ABCD has area $x^2$, the rectangle CHND has area $10x$, the rectangle AHNB has area 21, so $x^2+21=10x$.
The side CH is divided in half at G, so the segment AG measures $5-x$. The segment TG parallel to DC is extended by GK with length also $5-x$. Al-Khwarizmi says this is done "in order to complete the square."
The segment TK then measures 5, so the figure KMNT, obtained by drawing KM parallel to GH and adding MH, is a square with area 25.
Measuring off KL equal to KG, and drawing LR parallel to KG leads to a square KLRG. Since HR has length $5-(5-x)=x$ the rectangles LMHR and AGTB have the same area, so the area of the region formed by adding LMHR to GHNT is the same as that of the rectangle formed by adding AGTB to GHNT, i.e. 21. And since that region together with the square KLRG makes up the square KMNT of area 25, it follows that the area of KLRG is $25-21=4$, and that its side-length $5-x$ is equal to 2. Hence $x=3$, and the sought-for square is 9.
Al-Khwarizmi remarks that if you add that 2 to the length of CG then "the sum is seven, represented by the line CR, which is the root to a larger square," and that this square is also a solution to the problem.
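The second case yields both positive roots, exactly as al-Khwarizmi's remark suggests: subtracting the root of the remainder gives one solution and adding it gives the other. A short sketch (function name ours):

```python
from math import sqrt

def alkhwarizmi_case2(b, a):
    """Both positive roots of x^2 + a = b*x (assumes (b/2)^2 >= a)."""
    half = b / 2
    gap = sqrt(half * half - a)    # the root of the remainder, here 2
    return half - gap, half + gap  # subtracting gives 3, adding gives 7

roots = alkhwarizmi_case2(10, 21)
assert roots == (3.0, 7.0)
assert all(x * x + 21 == 10 * x for x in roots)
```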
Case 3. Example: "Three roots and four simple numbers are equal to a square."
Solution: "Halve the roots; the moiety is one and a half. Multiply this by itself; the product is two and a quarter. Add this to the four; the sum is six and a quarter. Extract its root; it is two and a half. Add this to the moiety of the roots, which was one and a half; the sum is four. This is the root of the square, and the square is sixteen."
As above, we summarize al-Khwarizmi's demonstration. The last figure minus decoration appears on Rosen, p. 20.
We represent the unknown square as ABDC, with side-length $x$. We cut off the rectangle HRDC with side-lengths 3 and $x$. Since $x^2 = 3x + 4$ the remaining rectangle ABRH has area 4.
Halve the side HC at the point G, and construct the square HKTG. Since HG has length $1\frac{1}{2}$, the square HKTG has area $2\frac{1}{4}$.
Extend CT by a segment TL equal to AH. Then the segments GL and AG have the same length, so drawing LM parallel to AG gives a square AGLM. Now TL = AH = MN and NL = HG = GC = BM, so the rectangles MBRN and KNLT have equal area, and so the region formed by AMNH and KNLT has the same area as ABRH, namely 4. It follows that the square AGLM has area $4+2\frac{1}{4}=6\frac{1}{4}$ and consequently side-length AG = $2\frac{1}{2}$. Since GC = $1\frac{1}{2}$, it follows that $x = 2\frac{1}{2} + 1\frac{1}{2} = 4$.
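The third case is the same recipe with the half added back at the end rather than subtracted. A one-function sketch (name ours) checking the worked example:

```python
from math import sqrt

def alkhwarizmi_case3(b, a):
    """Root of x^2 = b*x + a: halve, square, add, take the root, add back."""
    half = b / 2                       # "the moiety is one and a half"
    return half + sqrt(half * half + a)

x = alkhwarizmi_case3(3, 4)
assert x == 4.0 and x * x == 3 * x + 4   # "the square is sixteen"
```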
The study of quadratic equations in India dates back to Aryabhata (476-550) and Brahmagupta (598-c.665). Aryabhata's work on the topic was referenced at the time but is now lost; Brahmagupta's has been preserved. He gives a solution algorithm in words (in verse!) which turns out to be equivalent to part of the quadratic formula—it only gives the root involving $+$ the radical. Here's Brahmagupta's rule with a translation, from Brahmagupta as an Algebraist (a chapter of Brahmasphutasiddhanta, Vol. 1):
"The quadratic: the absolute quantities multiplied by four times the coefficient of the square of the unknown are increased by the square of the coefficient of the middle (i.e. unknown); the square-root of the result being diminished by the coefficient of the middle and divided by twice the coefficient of the square of the unknown, is (the value of) the middle."
There is only the rule, and no indication of how it was derived.
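Rendered in modern symbols, Brahmagupta's verbal rule computes one root of $ax^2+bx=c$: the absolute quantity $c$ times $4a$, increased by $b^2$; the square root, diminished by $b$, divided by $2a$. A sketch of that reading (the test equation is ours, borrowed from al-Khwarizmi's first case):

```python
from math import sqrt

def brahmagupta(a, b, c):
    """One root of a*x^2 + b*x = c, following Brahmagupta's rule:
    (sqrt(4*a*c + b^2) - b) / (2*a), i.e. only the + branch of the radical."""
    return (sqrt(4 * a * c + b * b) - b) / (2 * a)

# x^2 + 10x = 39 gives the same root, 3, as al-Khwarizmi's recipe
assert brahmagupta(1, 10, 39) == 3.0
```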
The third section of chapter 15 of the Liber Abaci, written by Fibonacci (Leonardo of Pisa, c. 1170-c. 1245) in 1202, concerns "Problems of Algebra and Almuchabala," referring directly to the Arabic words for "Completion and Reduction" in the title of al-Khwarizmi's compendium. I am using the translation of Liber Abaci by L. E. Sigler, Springer, 2002. In that section, a short introduction presenting the methods is followed by a collection of over a hundred problems.
Fibonacci follows al-Khwarizmi in distinguishing three "modes" of compound problems involving a square ("census") and its roots (his second mode corresponds to al-Khwarizmi's case 3 and vice-versa).
"the first mode is when census plus roots are equal to a number." The example Fibonacci gives, "the census plus ten roots is equal to 39" is inherited directly from al-Khwarizmi. But the demonstration he gives is different: he starts with a square $abcd$ of side length greater than 5, marks off lengths of 5 in two directions from one corner and thus partitions $abcd$ into a square of area 25, two rectangles and another square, which he identifies with the unknown census. Then the two rectangles add up to 10 times the root, but since "the census plus 10 roots is equal to 39 denari" the area of $abcd$ must be $25+39=64$, so its side length is 8, our root is $8-5=3$ and the census is 9. The figure and the calculation are essentially the same as al-Khwarizmi's, but the "completion" narrative has disappeared.
"let the census be equal to 10 roots plus 39 denari." Here the example is new.
(Figure from Sigler, p. 557). Fibonacci starts by representing the census as a square $abcd$ of side length larger than 10. He draws a segment $fe$ parallel to $ab$ so that the segments $fd$ and $ec$ each have length 10 and assigns a midpoint $g$ to $ec$. So the ten roots will be equal to the area of $fecd$, and the area of the rectangle $abfe$, which is also $fe\times eb$, is 39. But $fe=bc$, therefore $be \times bc = 39$. "If to this is added the square of the line segment $eg$, then 64 results for the square of the line segment $bg$." [$bg^2 = be^2 + 2be\times eg +eg^2$ $= be\times(be + 2eg)+ eg^2 = be\times(be+eg+gc)+eg^2$ $= be\times bc +eg^2 = 39+25=64$]. So $bg = 8$ and $bc = bg+gc = 8+5=13$ is the root of the sought census, and the census is 169.
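The algebra hiding in Fibonacci's bracketed computation is easy to check directly:

```python
from math import sqrt

# bg^2 = be*bc + eg^2 = 39 + 25 = 64, so bg = 8 and the root is bc = bg + gc
bg = sqrt(39 + 25)
bc = bg + 5
assert bc == 13.0
assert bc * bc == 10 * bc + 39   # the census is 169 = 130 + 39
```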
"when it will occur that the census plus a number will equal a number of roots, then you know that you can operate whenever the number is equal to or less than the square of half the number of roots." Here Fibonacci goes beyond al-Khwarizmi in remarking that (in modern notation) $x^2+c=bx$ has no solution if $c>(b/2)^2$. The example he gives is "let the census plus 40 equal 14 roots."
(Figures from Sigler, pp. 557, 558). Fibonacci gives two constructions for the two positive roots. In each case he starts with a segment $ab$ of length 14, with center at $g$. He chooses another point $d$ dividing $ab$ into two unequal parts.
For the first root he constructs a square $dbez$ over $db$ (this will be the census) and extends $ez$ to $iz$ matching $ab$. The rectangle $abzi$ now represents 14 roots, so that $adei$ measures (in modern notation) $14x - x^2 = 40$. Now $dg = 7-x$ so $dg^2 = 49-14x+x^2$ and $dg^2 + 40 = 49$. It follows that $dg=3$ and the root $x$ is $db=gb-gd=7-3=4$.
For the second root the square $adlk$ is over $ad$; the segment $kl$ is extended to $km$ matching $ab$, and $lmbd$ measures $14x - x^2 = 40$. Now $gd = x-7$ so again $gd^2 = x^2-14x+49 = 9$ and $gd=3$. This time the root is $x=ag+gd = 10$.
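Both constructions rest on the same computation, $gd^2 = 49 - 40 = 9$; the two roots are the half 7 diminished or increased by $gd = 3$:

```python
from math import sqrt

gd = sqrt(7 * 7 - 40)          # gd = 3 in both figures
roots = (7 - gd, 7 + gd)
assert roots == (4.0, 10.0)
assert all(x * x + 40 == 14 * x for x in roots)   # both solve x^2 + 40 = 14x
```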
Stevin's L'Arithmétique was published in 1594 (excerpts here are from the 1625 edition, supervised by Albert Girard). Among other things it treated quadratic equations. Stevin, following Bombelli, used a notation for powers that turns out to be intermediate between cossic notation (which used different symbols for the unknown, its square, its cube etc.) and the exponents that started with Descartes. For Stevin, the unknown was represented by a 1 in a circle, its square by a 2 in a circle, etc., and the numerical unit by a 0 in a circle. Here let us write ${\bf 1}$ for 1 in a circle, etc. He also had an idiosyncratic way of expressing the solution to an equation in one unknown, using the "fourth proportional." For example, his setting of the problem of finding $x$ such that $x^2=4x+12$ could be schematized as $${\bf 2} : 4\,{\bf 1} + 12\,{\bf 0} : : {\bf 1} : ?$$ (he would actually write: given the three terms of the problem, the first ${\bf 2}$, the second $4\,{\bf 1} + 12\,{\bf 0}$, the third ${\bf 1}$, find the fourth proportional). As Girard explains, the "proportion" is equality. So the problem should be read as: "if ${\bf 2} = 4\,{\bf 1} + 12\,{\bf 0}$, then ${\bf 1} =$ what?"
Some writers have claimed "[Stevin's Arithmetic] brought to the western world for the first time a general solution of the quadratic equation …" but in fact only this step towards modern notation, his use of minus signs in his equations, and his admission of irrational numbers as coefficients separate him from Fibonacci. Notably, he too considers separately three types of quadratic equations [his examples, in post-Descartes notation]:
"second term ${\bf 1} + {\bf 0}$" [$x^2=4x+12$]
"second term $-{\bf 1} + {\bf 0}$" [$x^2 = -6x + 16$]
"second term ${\bf 1} - {\bf 0}$" [$x^2=6x-5$].
and does not entertain negative solutions, so for the first equation he gives only $6$ and not $-2$, for the second he gives $2$ and not $-8$; for the third he gives the two (positive) solutions $5$ and $1$.
Stevin gives a geometric justification for his solution of each type of equation. For example for the first type his solution is:
Half of the 4 (from the $4\,{\bf 1}$) is 2
Its square 4
To the same added the given ${\bf 0}$, which is 12
Gives the sum 16
Its square root 4
To the same added the 2 from the first line 6
I say that 6 is the sought-for fourth proportional term.
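Stevin's ledger translates line by line into code; here is a direct transcription of his procedure for $x^2 = 4x + 12$ (variable names ours):

```python
from math import sqrt

half = 4 / 2           # half of the 4              -> 2
square = half * half   # its square                 -> 4
total = square + 12    # add the given number       -> 16
root = sqrt(total)     # its square root            -> 4
x = root + half        # add back the 2             -> 6
assert x == 6.0 and x * x == 4 * x + 12
```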
Here is his geometrical proof:
Figure from Stevin, L'Arithmétique p. 267.
With modern notation: Stevin starts with a square ABCD representing $x^2$, so its side BC has length $x$. He draws EF parallel to AD, with AE $= 4$. So the rectangle ADFE measures $4x$ and then the rectangle EFCB has area $x^2-4x = 12$. Pick G the midpoint of AE, and draw the square GLKB.
The rest of the argument as it appears in L'Arithmétique:
Half of AE $=4$ which is GE 2
Its square GEHI 4
Add to the same the given ${\bf 0}$, i.e. EFCB 12
Gives sum for the gnomon HIGBCF
or for the square GBKL of same area 16
Its root BK 4
Add to the same GE$=2$ or instead KC = GE, makes for BC 6
Q.E.D. [My translations -TP]
Notice that the argument and even the diagram are inherited directly from al-Khwarizmi.
René Descartes (1596-1650) published La Géométrie (1885 edition in modern French) in 1637. One of the first topics he covers is the solution of quadratic equations. Besides the actual geometry in his book, two notational and conceptual features started the modern era in mathematics. The first was his use of exponents for powers. This started as a typographical convenience: he still usually wrote $xx$ instead of $x^2$. It turned out to be a short series of steps (but one he could not have imagined) from his $x^3$ to Euler's $e^x$. The second was his use of $x$ and $y$ as coordinates in the plane. The first would eventually allow the general quadratic equation to be written as $ax^2 + bx +c =0$, and the second would allow the solution to be viewed as the intersection of a parabola with the $x$-axis. But Descartes' avoidance of negative numbers (the original cartesian plane was a quadrant) kept him from the full exploitation of his own inventions.
In particular, he still had to follow al-Khwarizmi and Stevin in distinguishing three forms of quadratic equations. In his notation:
$z^2=az + b^2$
$y^2=-ay + b^2$
$z^2 = az-b^2$.
His solutions, however, are completely different from the earlier methods. He shows in all three cases how a ruler-and-compass construction can lead from the lengths $a$ and $b$ to a segment whose length is the solution.
Cases 1 and 2. "I construct the right triangle NLM with one leg $LM$ equal to $b$, and the other $LN$ is $\frac{1}{2}a$, half of the other known quantity which was multiplied by $z$, which I suppose to be the unknown length; then extending $MN$, the hypotenuse of this triangle, to the point $O$, such that $NO$ is equal to $NL$, the entirety $OM$ is the sought-for length; and it can be expressed in this way: $$z=\frac{1}{2}a + \sqrt{\frac{1}{4}aa + bb}.$$ Whereas if I have $yy=-ay+bb$, with $y$ the unknown quantity, I construct the same right triangle $NLM$, and from the hypotenuse $MN$ I subtract $NP$ equal to $NL$, and the remainder $PM$ is $y$, the sought-for root. So that I have $y=-\frac{1}{2}a + \sqrt{\frac{1}{4}aa + bb}.$"
Case 3. "Finally, if I have $$z^2=az-bb$$ I draw $NL$ equal to $\frac{1}{2}a$ and $LM$ equal to $b$, as before; then, instead of joining the points $MN$, I draw $MQR$ parallel to $LN$, and having drawn a circle with center $N$ which cuts it at the points $Q$ and $R$, the sought-for length is $MQ$, or $MR$; since in this case it can be expressed two ways, to wit, $z=\frac{1}{2}a + \sqrt{\frac{1}{4}aa - bb}$ and $z=\frac{1}{2}a - \sqrt{\frac{1}{4}aa - bb}.$ And if the circle, which has its center at $N$ and passes through $L$ neither cuts nor touches the straight line $MQR$, the equation has no root, so one can state that the construction of the proposed problem is impossible." [My translation. Checking Descartes' constructions involves some standard Euclidean geometry.-TP]
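The lengths produced by Descartes' three constructions can be computed directly from his formulas. A sketch (function name and test values ours), including his "impossible" case where the circle misses the line:

```python
from math import sqrt

def descartes(a, b, case):
    """Lengths given by Descartes' three ruler-and-compass constructions."""
    h = sqrt(a * a / 4 + b * b)       # the hypotenuse MN
    if case == 1:                     # z^2 = a*z + b^2 : OM = MN + NO
        return a / 2 + h
    if case == 2:                     # y^2 = -a*y + b^2 : PM = MN - NP
        return -a / 2 + h
    disc = a * a / 4 - b * b          # case 3: z^2 = a*z - b^2
    if disc < 0:                      # circle misses the line MQR
        return None                   # "the construction ... is impossible"
    return (a / 2 - sqrt(disc), a / 2 + sqrt(disc))   # MQ and MR

assert descartes(6, 4, 1) == 8.0          # 64 = 6*8 + 16
assert descartes(6, 4, 2) == 2.0          # 4 = -6*2 + 16
assert descartes(10, 4, 3) == (2.0, 8.0)  # both solve z^2 = 10z - 16
assert descartes(2, 4, 3) is None         # no real root
```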
Maria Gaetana Agnesi's textbook Instituzioni analitiche ad uso della gioventù italiana was published in 1748. Agnesi does algebraically complete the square for one case of the quadratic equation:
"Consider the equation $xx+2ax = bb$; add to one and to the other sides the square of half the coefficient of the second term, i.e. $aa$, and it becomes $xx+2ax+aa = aa + bb$ and, extracting the root, $x=\pm\sqrt{aa+bb} -a$."
Specifically (with the letters Descartes used) she takes $a$ as a positive quantity (necessary for the geometric construction) and gives
MP and $-$MO as roots of $xx+ax-bb=0$
MO and $-$MP as roots of $xx-ax-bb=0$
$-$MQ and $-$MR as roots of $xx+ax+bb=0$
MQ and MR as roots of $xx-ax+bb=0$.
Imaginary numbers do not yet have a geometric meaning. "Whenever the equation, to which the particulars of the problems have led us, produces only imaginary values, this means that the problem has no solution, and that it is impossible." [My translations -TP]
Note that she uses the $\pm$ notation. She explains: "So the ambiguity of the sign affected to the square root implies two values for the unknown, which can be both positive, both negative, one positive and the other negative, or even both imaginary, depending on the quantities from which they have been computed."
But when Agnesi comes to a general treatment she follows Descartes, using identical geometric constructions, with one significant improvement, as implied above: negative roots are calculated and given the same status as positive ones:
"Negative values, which are still called false, are no less real than the positive, and have the only difference, that if in the solution to the problem the positives are taken from the fixed starting point of the unknown toward one side, the negatives are taken from the same point in the opposite direction."
Euler's text was published in St. Petersburg in 1770; John Hewlett's English translation, Elements of Algebra, appeared in 1840. Chapter VI covers the general quadratic equation: Euler writes it as $ax^2\pm bx\pm c=0$, and then remarks that it can always be put in the form $x^2 + px = q$, where $p$ and $q$ can be positive or negative. He explains how the left-hand side can be made into the square of $(x+\frac{1}{2}p)$ by adding $\frac{1}{4}p^2$ to both sides, leading to $x+\frac{1}{2}p = \sqrt{\frac{1}{4}p^2 + q}$ and, "as every square root may be taken either affirmatively or negatively," $$x = -\frac{1}{2}p \pm \sqrt{\frac{1}{4}p^2 + q}.$$ In deriving this solution, completely equivalent to the quadratic formula, Euler has completed the square in a purely algebraic manner. The gnomons and Euclidean diagrams, that for some 2500 years had seemed necessary to justify the maneuver, have evaporated.
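Euler's completed square, with both signs of the radical, is the modern formula for $x^2+px=q$. A minimal sketch (function name ours), checked against al-Khwarizmi's old example, which now acquires a negative root as well:

```python
from math import sqrt

def euler_roots(p, q):
    """Both roots of x^2 + p*x = q, taking the square root
    "either affirmatively or negatively"."""
    r = sqrt(p * p / 4 + q)
    return (-p / 2 + r, -p / 2 - r)

roots = euler_roots(10, 39)
assert roots == (3.0, -13.0)                      # 169 - 130 = 39 as well
assert all(x * x + 10 * x == 39 for x in roots)
```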
In Elements of Algebra Euler is receptive to imaginary numbers:
§145. "But notwithstanding [their being impossible] these numbers present themselves to the mind; they exist in our imagination, and we still have a sufficient idea of them; since we know that by $\sqrt{-4}$ is meant a number which, multiplied by itself, produces $-4$; for this reason also, nothing prevents us from making use of these imaginary numbers, and employing them in calculation."
but does not consider them "possible" as roots of quadratic equations. For example:
§700. "A very remarkable case sometimes occurs, in which both values of $x$ become imaginary, or impossible; and it is then wholly impossible to assign any value for $x$, that would satisfy the terms of the equation."
A Gaussian function $f(x)=C\exp(-\frac{1}{2}ax^2)$ corresponds to a familiar "bell-shaped curve." In multivariable calculus we learn that $\int_{-\infty}^{\infty}\,f(x)\,dx=C\sqrt{\frac{2\pi}{a}}$; this also holds for $\int_{-\infty}^{\infty}\,f(x-\mu)\,dx$, with $\mu$ any finite number.
The width of the Gaussian $f(x)=C\exp(-\frac{1}{2}ax^2)$, defined as the distance between the two points where $\,f=\frac{C}{2}$, can be calculated to be $2\sqrt{\frac{2\ln 2}{a}}$. In this image with $C=4$, $a=\frac{1}{2}$, the width is 3.33. Note that the width does not depend on the factor $C$.
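The stated width is easy to verify numerically; the helper below (name ours) computes the full width at half maximum and checks both the quoted value 3.33 and the defining property of the half-maximum points:

```python
from math import exp, log, sqrt

def gaussian_width(a):
    """Full width at half maximum of C*exp(-a*x**2/2); note C drops out."""
    return 2 * sqrt(2 * log(2) / a)

w = gaussian_width(0.5)
assert abs(w - 3.33) < 0.01
# at distance w/2 from the center the Gaussian has dropped to half its peak
assert abs(exp(-0.5 * 0.5 * (w / 2) ** 2) - 0.5) < 1e-12
```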
The Fourier transform of $f$ (taking $C=1$) is the function $${\hat f}(y)= \int_{-\infty}^{\infty}\exp(ixy)\,f(x)~dx = \int_{-\infty}^{\infty}\exp(-\frac{1}{2}ax^2 + ixy)~dx.$$ This integral can be computed by completing the square: write $-\frac{1}{2}ax^2 + ixy$ as $-\frac{1}{2}a(x^2 -\frac{2iyx}{a} -\frac{y^2}{a^2}+\frac{y^2}{a^2})= -\frac{1}{2}a (x-\frac{iy}{a})^2 - \frac{y^2}{2a}$ (note that $(\frac{iy}{a})^2 = -\frac{y^2}{a^2}$). Then $$ {\hat f}(y)= \int_{-\infty}^{\infty}\exp(-\frac{y^2}{2a})\,\exp\left (-\frac{1}{2}a(x-\frac{iy}{a})^2\right )\,dx=\sqrt{\frac{2\pi}{a}}\,\exp(-\frac{y^2}{2a}).$$ This means that the Fourier transform of $f$ is again a Gaussian; the parameter $a$ has become $\frac{1}{a}$, so the product of the widths of $\,f$ and its Fourier transform ${\,\hat f}$ is constant. The wider $\,f$ is, the narrower ${\,\hat f}$ must be, and vice-versa. This phenomenon is the mathematical form of the uncertainty principle.
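The closed form can be confirmed by direct numerical integration. In the sketch below (parameter values ours) only the real part of $\exp(ixy)$ is summed, since the imaginary part integrates to zero by symmetry:

```python
from math import cos, exp, pi, sqrt

a, y = 1.0, 0.7
# plain Riemann sum of cos(x*y) * exp(-a*x^2/2) over [-20, 20];
# the integrand is negligible outside this range
dx = 0.001
total = dx * sum(cos(i * dx * y) * exp(-0.5 * a * (i * dx) ** 2)
                 for i in range(-20000, 20001))
assert abs(total - sqrt(2 * pi / a) * exp(-y * y / (2 * a))) < 1e-6
```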
Pooling strategies for COVID-19 testing
Posted October 1, 2020, by David Austin
How could 10 tests yield accurate results for 100 patients?
While the COVID-19 pandemic has brought tragedy and disruption, it has also provided a unique opportunity for mathematics to play an important and visible role in addressing a pressing issue facing our society.
By now, it's well understood that testing is a crucial component of any effective strategy to control the spread of the SARS-CoV-2 coronavirus. Countries that have developed effective testing regimens have been able, to a greater degree, to resume normal activities, while those with inadequate testing have seen the coronavirus persist at dangerously high levels.
Developing an effective testing strategy means confronting some important challenges. One is producing and administering enough tests to form an accurate picture of the current state of the virus' spread. This means having an adequate number of trained health professionals to collect and process samples along with an adequate supply of reagents and testing machines. Furthermore, results must be available promptly. A person who is unknowingly infected can transmit the virus to many others in a week, so results need to be available in a period of days or even hours.
One way to address these challenges of limited resources and limited time is to combine samples from many patients into testing pools strategically rather than testing samples from each individual patient separately. Indeed, some well-designed pooling strategies can decrease the number of required tests by a factor of ten; that is, it is possible to effectively test, say, 100 patients with a total of 10 tests.
On first thought, it may seem like we're getting something from nothing. How could 10 tests yield accurate results for 100 patients? This column will describe how some mathematical ideas from compressed sensing theory provide the key.
One of the interesting features of the COVID-19 pandemic is the rate at which we are learning about it. The public is seeing science done in public view and in real time, and new findings sometimes cause us to amend earlier recommendations and behaviors. This has made the job of communicating scientific findings especially tricky. So while some of what's in this article may require updating in the near future, our aim is rather to focus on mathematical issues that should remain relevant and trust the reader to update as appropriate.
Some simple pooling strategies
While the SARS-CoV-2 virus is new, the problem of testing individuals in a large population is not. Our story begins in 1943 when Robert Dorfman proposed the following simple method for identifying syphilitic men called up for induction through the wartime draft.
A 1941 poster encourages syphilis testing (Library of Congress)
Suppose we have samples from, say, 100 patients. Rather than testing each of the samples individually, Dorfman suggested grouping them into 10 pools of 10 each and testing each pool.
If the test result of a pool is negative, we conclude that everyone in that pool is free of infection. If a pool tests positively, then we test each individual in the pool.
In the situation illustrated above, two of the 100 samples are infected, so we perform a total of 30 tests to identify them: 10 tests for the original 10 pools followed by 20 tests, one for each member of the two infected pools. Here, the number of tests performed is 30% of the number required had we tested each individual separately.
Of course, there are situations where this strategy is disadvantageous. If there is an infected person in every pool, we end up performing the original 10 tests and follow up by then testing each individual. This means we perform 110 tests, more than if we had just tested everyone separately.
What's important is the prevalence $p$ of the infection: the fraction of individuals in the population we expect to be infected, or equivalently the probability that a random individual is infected. If the prevalence is low, it seems reasonable that Dorfman's strategy can lead to a reduction in the number of tests we expect to perform. As the prevalence grows, however, it may no longer be effective.
It's not too hard to find how the expected number of tests per person varies with the prevalence. If we arrange $k^2$ samples into $k$ pools of $k$ samples each, then the expected number of tests per person is $$ E_k = \frac1k + (1-(1-p)^k). $$ When $k=10$, the expected number $E_{10}$ is shown on the right. Of course, testing each individual separately means we use one test per person so Dorfman's strategy loses its advantage when $E_k\geq 1$. As the graph shows, when $p>0.2$, meaning there are more than 20 infections per 100 people, we are better off testing each individual.
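The formula for $E_k$ is short enough to explore directly. A sketch (function name ours) confirming the quoted behavior, including the break-even point near $p = 0.2$ for $k = 10$:

```python
def dorfman_expected(k, p):
    """Expected tests per person with k pools of k samples each:
    1/k pool tests per person, plus a retest when the pool is positive."""
    return 1 / k + (1 - (1 - p) ** k)

assert round(dorfman_expected(10, 0.01), 4) == 0.1956   # big savings at p = 1%
assert dorfman_expected(10, 0.19) < 1   # still ahead just below the threshold
assert dorfman_expected(10, 0.25) > 1   # worse than testing everyone separately
```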
Fortunately, the prevalence of SARS-CoV-2 infections is relatively low in the general population. As the fall semester began, my university initiated a randomized testing program that showed the prevalence in the campus community to be around $p\approx 0.01$. Concern is setting in now that that number is closer to 0.04. In any case, we will assume that the prevalence of infected individuals in our population is low enough to make pooling a viable strategy.
Of course, no test is perfect. It's possible, for instance, that an infected sample will yield a negative test result. It's typical to characterize the efficacy of a particular testing protocol using two measures: sensitivity and specificity. The sensitivity measures the probability that a test returns a positive result for an infected sample. Similarly, the specificity measures the probability that a test returns a negative result when the sample is not infected. Ideally, both of these numbers are near 100%.
Using Dorfman's pooling method, the price we pay for lowering the expected number of tests below one is a decrease in sensitivity. Identifying an infected sample in this two-step process requires the test to correctly return a positive result both times we test it. Therefore, if $S_e$ is the sensitivity of a single test, Dorfman's method has a sensitivity of $S_e^2$. For example, a test with a sensitivity of 95% yields a sensitivity around 90% when tests are pooled.
There is, however, an increase in the specificity. If a sample is not infected, testing a second time increases the chance that we detect it as such. One can show that if the sensitivity and specificity are around 95% and the prevalence at 1%, then pooling 10 samples, as shown above, raises the specificity to around 99%.
Some modifications of Dorfman's method
It's possible to imagine modifying Dorfman's method in several ways. For instance, once we have identified the infected pools in the first round of tests, we could apply a pooling strategy on the smaller set of samples that still require testing.
A second possibility is illustrated below where 100 samples are imagined in a square $10\times10$ array. Each sample is included in two pools according to its row and column so that a total of 20 tests are performed in the first round. In the illustration, the two infected samples lead to positive results in four of the pools, two rows and two columns.
We know that the infected samples appear at the intersection of these two rows and two columns, which leads to a total of four tests in the second round. Once again, it's possible to express $E_k$, the number of expected tests per individual, in terms of the prevalence $p$. If we have $k^2$ samples arranged in a $k\times k$ array, we see that $$ E_k =\frac{2}{k} + p + (1-p)\left(1-(1-p)^{k-1}\right)^2, $$ if we assume that the sensitivity and specificity are 100%.
The graph at right shows the expected number of tests using the two-dimensional array, assuming $k=10$, in red with the result using Dorfman's original method in blue. As can be seen, the expected number of tests is greater using the two-dimensional approach since we invest twice as many tests in the first round of testing. However, each sample is included in two tests in the initial round. For an infected sample to be misidentified, both tests would have to return negative results. This means that the two-dimensional approach is desirable because the sensitivity of this strategy is greater than the sensitivity of the individual tests, and we still lower the expected number of tests when the prevalence is low.
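The comparison in the graph can be reproduced numerically. In this sketch (function names ours, perfect tests assumed, as in the formulas above) an uninfected sample is retested only when both its row and its column are set off by some other sample:

```python
def dorfman(k, p):
    """Dorfman's one-dimensional pooling: k pools of k."""
    return 1 / k + 1 - (1 - p) ** k

def two_dimensional(k, p):
    """Row-and-column pooling on a k x k array, assuming perfect tests."""
    q = 1 - (1 - p) ** (k - 1)          # some other sample triggers the pool
    return 2 / k + p + (1 - p) * q * q

p = 0.01
assert two_dimensional(10, p) > dorfman(10, p)  # extra first-round investment
assert two_dimensional(10, p) < 1               # but still well below one test per person
```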
While it is important to consider the impact that any pooling strategy has on these important measures, our focus will, for the most part, take us away from discussions of specificity and sensitivity. See this recent Feature column for a deep dive into their relevance.
Theoretical limits
There has been some theoretical work on the range of prevalence values over which pooling strategies are advantageous. In the language of information theory, we can consider a sequence of test results as an information source having entropy $I(p) = -p\log_2(p) - (1-p)\log_2(1-p)$. In this framework, a pooling strategy can be seen as an encoding of the source to effectively compress the information generated.
Sobel and Groll showed that $E$, the expected number of tests per person, for any effective pooling method must satisfy $E \geq I(p)$. On the right is shown this theoretical limit in red along with the expected number of tests under the Dorfman method with $k=10$.
Further work by Ungar showed that when the prevalence grows above the threshold $p\geq (3-\sqrt{5})/2 \approx 38\%$, then we cannot find a pooling strategy that is better than simply testing everyone individually.
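Both theoretical statements are easy to check against the Dorfman scheme (function names ours):

```python
from math import log2, sqrt

def entropy(p):
    """Shannon entropy I(p) of a Bernoulli(p) source, in bits."""
    return -p * log2(p) - (1 - p) * log2(1 - p)

def dorfman(k, p):
    return 1 / k + 1 - (1 - p) ** k

# Sobel-Groll: the expected number of tests never drops below the entropy
for p in (0.01, 0.05, 0.1):
    assert dorfman(10, p) >= entropy(p)

# Ungar's threshold, beyond which individual testing is optimal
assert abs((3 - sqrt(5)) / 2 - 0.382) < 1e-3
```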
RT-qPCR testing
While there are several different tests for the SARS-CoV-2 virus, at this time, the RT-qPCR test is considered the "gold standard." In addition to its intrinsic interest, learning how this test works will help us understand the pooling strategies we will consider next.
A sample is collected from a patient, often through a nasal swab, and dispersed in a liquid medium. The test begins by converting the virus' RNA molecules into complementary DNA through a process known as reverse transcription (RT). A sequence of amplification cycles, known as quantitative polymerase chain reaction (qPCR) then begins. Each cycle consists of three steps:
The liquid is heated close to boiling so that the transcribed DNA denatures into two separate strands.
Next the liquid is cooled so that a primer, which has been added to the liquid, attaches to a DNA strand along a specific sequence of 100-200 nucleotides. This sequence characterizes the complementary DNA of the SARS-CoV-2 virus and is long enough to uniquely identify it. This guarantees that the test has a high sensitivity. Attached to the primer is a fluorescent marker.
In a final warming phase, additional nucleotides attach to the primer to complete a copy of the complementary DNA molecule.
Through one of these cycles, the number of DNA molecules is essentially doubled.
The RT-qPCR test takes the sample through 30-40 amplification cycles resulting in a significant increase in the number of DNA molecules, each of which has an attached fluorescent marker. After each cycle, we can measure the amount of fluorescence and translate it into a measure of the number of DNA molecules that have originated from the virus.
The fluorescence, as it depends on the number of cycles, is shown below. A sample with a relatively high viral load will show significant fluorescence at an early cycle. The curves below represent different samples and show how the measured fluorescence grows through the amplification cycles. Moving to the right, each curve is associated with a ten-fold decrease in the initial viral load of the sample. Taken together, these curves represent a range of a million-fold decrease in the viral load. In fact, the test is sensitive enough to detect a mere 10 virus particles in a sample.
The FDA has established a threshold, shown as the red horizontal line, above which we can conclude that the SARS-CoV-2 virus is present in the original sample. However, the test provides more than a binary positive/negative result; by matching the fluorescence curve from a particular sample to the curves above, we can infer the viral load present in the original sample. In this way, the test provides a quantitative measure of the viral load that we will soon use in developing a pooling method.
Pooling samples from several individuals, only one of whom is infected, will dilute the infected sample. The effect is simply that the fluorescence response crosses the FDA threshold in a later cycle. There is a limit, however. Because noise can creep into the fluorescence readings around cycle 40, FDA standards state that only results from the first 39 cycles are valid.
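A rough consequence of the doubling: pooling one positive sample with $n-1$ negative ones dilutes it by a factor of $n$, so the threshold crossing moves later by about $\log_2 n$ cycles. This back-of-the-envelope estimate (mine, not a lab calibration) can be sketched as:

```python
import math

# Each qPCR cycle roughly doubles the DNA, so diluting one positive
# sample into a pool of n shifts the cycle at which fluorescence
# crosses the threshold by about log2(n) cycles.

def cycle_shift(pool_size):
    return math.log2(pool_size)

# A pool of 32 delays detection by about 5 cycles:
print(cycle_shift(32))
```

This is consistent with the pooling limits below: a pool of 32 costs about 5 extra cycles, which is tolerable only if the threshold is crossed well before cycle 39.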
Recent studies by Bilder and Yelin et al investigated the practical limits of pooling samples in the RT-qPCR test and found that a single infected sample can be reliably detected in a pool of up to 32. (A recent study by the CDC, however, raises concerns about detecting the virus using the RT-qPCR test past the 33rd amplification cycle.)
Non-adaptive testing strategies
Dorfman's pooling method and its variants described above are known as adaptive methods because they begin with an initial round of tests and use those results to determine how to proceed with a second round. Since the RT-qPCR test requires 3 – 4 hours to complete, the second round of testing causes a delay in obtaining results and ties up testing facilities and personnel. A non-adaptive method, one that produces results for a group of individuals in a single round of tests, would be preferable.
Several non-adaptive methods have recently been proposed and are even now in use, such as P-BEST. The mathematical ideas underlying these various methods are quite similar. We will focus on one called Tapestry.
We first collect samples from $N$ individuals and denote the viral loads of each sample by $x_j$. We then form these samples into $T$ pools in a manner to be explained a little later. This leads to a pooling matrix $A_i^j$ where $A_i^j = 1$ if the sample from individual $j$ is present in the $i^{th}$ test and 0 otherwise. The total viral load in the $i^{th}$ test is then $$ y_i = \sum_{j} A_i^j x_j, $$ which can be measured by the RT-qPCR test. In practice, there will be some uncertainty in measuring $y_i$, but it can be dealt with in the theoretical framework we are describing.
Now we have a linear algebra problem. We can express the $T$ equations that result from each test as $$ \yvec = A\xvec, $$ where $\yvec$ is the known vector of test results, $A$ is the $T\times N$ pooling matrix, and $\xvec$ is the unknown vector of viral loads obtained from the patients.
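Here is a tiny numerical illustration of $\yvec = A\xvec$ in Python; the $3\times 5$ matrix is made up for illustration and is not one of the published designs:

```python
import numpy as np

# Tiny illustration of y = A x: 3 pooled tests on 5 samples.

A = np.array([[1, 1, 0, 0, 1],
              [0, 1, 1, 1, 0],
              [1, 0, 1, 0, 0]])
x = np.array([0.0, 0.0, 7.5, 0.0, 0.0])   # only sample 3 carries viral load
y = A @ x
print(y)   # the load shows up in exactly the tests containing sample 3
```

The vector $\yvec$ is what the RT-qPCR machine actually reports; the task is to recover the sparse $\xvec$ from it.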
Because $T\lt N$, this is an under-determined system of equations, which means that we cannot generally expect to solve for the vector $\xvec$. However, we have some additional information: because we are assuming that the prevalence $p$ is low, the vector $\xvec$ will be sparse, which means that most of its entries are zero. This is the key observation on which all existing non-adaptive pooling methods rely.
It turns out that this problem has been extensively studied within the area of compressed sensing, a collection of techniques in signal processing that allow one to reconstruct a sparse signal from a small number of observations. Here is an outline of some important ideas.
We will have occasion to consider a couple of different measures of the size of a vector.
First, $\norm{\zvec}{0}$ is the number of nonzero entries in the vector $\zvec$. Because the prevalence of SARS-CoV-2 positive samples is expected to be small, we are looking for a solution to the equation $\yvec=A\xvec$ where $\norm{\xvec}{0}$ is small.
The 1-norm is $$ \norm{\zvec}{1} = \sum_j~|z_j|, $$
and the 2-norm is the usual Euclidean length: $$ \norm{\zvec}{2} = \sqrt{\sum_j z_j^2} $$
Remember that an isometry is a linear transformation that preserves the length of vectors. With the usual Euclidean length of a vector $\zvec$ written as $||\zvec||_2$, then the matrix $M$ defines an isometry if $||M\zvec||_2 = ||\zvec||_2$ for all vectors $\zvec$. The columns of such a matrix form an orthonormal set.
We will construct our pooling matrix $A$ so that it satisfies a restricted isometry property (RIP), which essentially means that small subsets of the columns of $A$ are almost orthonormal. More specifically, if $R$ is a subset of $\{1,2,\ldots, N\}$, we denote by $A^R$ the matrix formed by pulling out the columns of $A$ labelled by $R$; for instance, $A^{\{2,5\}}$ is the matrix formed from the second and fifth columns of $A$. For a positive integer $S$, we define a constant $\delta_S$ such that $$ (1-\delta_S)||\xvec||_2 \leq ||A^R\xvec||_2 \leq (1+\delta_S)||\xvec||_2 $$ for any set $R$ whose cardinality is no more than $S$. If $\delta_S = 0$, then the matrices $A^R$ are isometries, which would imply that the columns of $A^R$ are orthonormal. More generally, the idea is that when $\delta_S$ is small, then the columns of $A^R$ are close to being orthonormal.
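For very small matrices, the constant $\delta_S$ of the definition above can be computed by brute force over all column subsets; the sketch below (my own, only feasible for tiny $A$) reads it off the extreme singular values of each $A^R$:

```python
import numpy as np
from itertools import combinations

# Brute-force estimate of delta_S from the definition above: the smallest
# delta with (1-delta)||x|| <= ||A^R x|| <= (1+delta)||x|| over all
# column subsets R of size at most S.

def rip_constant(A, S):
    n = A.shape[1]
    delta = 0.0
    for size in range(1, S + 1):
        for R in combinations(range(n), size):
            sv = np.linalg.svd(A[:, list(R)], compute_uv=False)
            delta = max(delta, sv[0] - 1, 1 - sv[-1])
    return delta

# A matrix with orthonormal columns has delta_S = 0:
print(rip_constant(np.eye(4), 2))
```

Real designs are analyzed structurally rather than by this exponential enumeration, but the enumeration matches the definition exactly.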
Let's see how we can use these constants $\delta_S$.
Because we are assuming that the prevalence $p$ is low, we know that $\xvec$, the vector of viral loads, is sparse. We will show that a sufficiently sparse solution to $\yvec = A\xvec$ is unique.
For instance, suppose that $\delta_{2S} \lt 1$, that $\xvec_1$ and $\xvec_2$ are two sparse solutions to the equation $\yvec = A\xvec$, and that $\norm{\xvec_1}{0}, \norm{\xvec_2}{0} \leq S$. The last condition means that $\xvec_1$ and $\xvec_2$ are sparse in the sense that they have no more than $S$ nonzero components.
Now it follows that $A\xvec_1 = A\xvec_2 = \yvec$ so that $A(\xvec_1-\xvec_2) = 0$. In fact, if $R$ consists of the indices for which the components of $\xvec_1-\xvec_2$ are nonzero, then $A^R(\xvec_1-\xvec_2) = 0$.
But we know that the cardinality of $R$ equals $\norm{\xvec_1-\xvec_2}{0} \leq 2S$, which tells us that $$ 0 = ||A^R(\xvec_1-\xvec_2)||_2 \geq (1-\delta_{2S})||\xvec_1-\xvec_2||_2. $$ Because $\delta_{2S}\lt 1$, we know that $\xvec_1 - \xvec_2 = 0$ or $\xvec_1 = \xvec_2$.
Therefore, if $\delta_{2S} \lt 1$, any solution to $\yvec=A\xvec$ with $\norm{\xvec}{0} \leq S$ is unique; that is, any sufficiently sparse solution is unique.
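This uniqueness can be seen concretely: for a tiny system we can exhaustively search all supports of size at most $S$ for a solution of $\yvec = A\xvec$. This is a toy illustration (practical methods avoid the exponential search), with a made-up matrix:

```python
import numpy as np
from itertools import combinations

# Exhaustive search for the sparsest solution of y = A x with at most
# S nonzeros: try every support, solve least squares, keep exact fits.

def sparsest_solution(A, y, S):
    n = A.shape[1]
    if np.allclose(y, 0):
        return np.zeros(n)
    for size in range(1, S + 1):
        for R in combinations(range(n), size):
            xR, *_ = np.linalg.lstsq(A[:, list(R)], y, rcond=None)
            if np.allclose(A[:, list(R)] @ xR, y):
                x = np.zeros(n)
                x[list(R)] = xR
                return x
    return None

A = np.array([[1., 1., 0., 0.],
              [0., 1., 1., 0.],
              [1., 0., 1., 1.]])
y = np.array([2., 2., 0.])
print(sparsest_solution(A, y, 2))   # only sample 2 is positive
```

The search returns the unique 1-sparse solution even though the system has fewer equations than unknowns.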
Now that we have seen a condition that implies that sparse solutions are unique, we need to explain how we can find sparse solutions. Candès and Tao show, assuming $\delta_S + \delta_{2S} + \delta_{3S} \lt 1$, how we can find a sparse solution to $\yvec = A\xvec$ with $\norm{\xvec}{0} \leq S$ by minimizing: $$ \min \norm{\xvec}{1} ~~~\text{subject to}~~~ \yvec = A\xvec. $$ This is a convex optimization problem, and there are standard techniques for finding the minimum.
Why is it reasonable to think that minimizing $\norm{\xvec}{1}$ will lead to a sparse solution? Let's think visually about the case where $\xvec$ is a 2-dimensional vector. The set of all $\xvec$ satisfying $\norm{\xvec}{1} = |x_1| + |x_2| \leq C$ for some constant $C$ is the shaded set below:
Notice that the corners of this set fall on the coordinate axes where some of the components are zero. If we now consider solutions to $\yvec=A\xvec$, seen as the line below, we see that the solutions where $\norm{\xvec}{1}$ is minimal fall on the coordinates axes. This forces some of the components of $\xvec$ to be zero and results in a sparse solution.
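The picture can be checked numerically: among all solutions of a single linear equation in the plane, the point of least 1-norm sits on a coordinate axis. A small sketch (the particular line $x_1 + 2x_2 = 4$ is my own choice):

```python
import numpy as np

# Among all solutions of a . x = c in the plane, find (approximately)
# the one of least 1-norm by scanning along the solution line.

a = np.array([1.0, 2.0])
c = 4.0
x0 = np.array([0.0, c / a[1]])          # one particular solution
d = np.array([a[1], -a[0]])             # direction of the solution line
ts = np.linspace(-3, 3, 6001)
points = x0 + ts[:, None] * d
best = points[np.argmin(np.abs(points).sum(axis=1))]
print(best)   # lands on the x2-axis: the first component is zero
```

The minimizer is the sparse solution $(0, 2)$, exactly as the geometry of the diamond-shaped 1-norm ball suggests.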
This technique is related to one called the lasso (least absolute shrinkage and selection operator), which is well known in data science where it is used to eliminate unnecessary features from a data set.
All that remains is for us to find a pooling matrix $A$ that satisfies $\delta_{S} + \delta_{2S} + \delta_{3S} \lt 1$ for some $S$ large enough to find vectors $\xvec$ whose sparsity $\norm{\xvec}{0}$ is consistent with the expected prevalence. There are several ways to do this. Indeed, a matrix chosen at random will work with high probability, but the application to pooling SARS-CoV-2 samples that we have in mind leads us to ask that $A$ satisfy some additional properties.
The Tapestry method uses a pooling matrix $A$ formed from a Steiner triple system, an object studied in combinatorial design theory. For instance, one of Tapestry's pooling matrices is shown below, where red represents a 1 and white a 0.
This is a $16\times40$ matrix, which means that we perform 16 tests on 40 individuals. Notice that each individual's sample appears in 3 tests. This is a relatively low number, which means that the viral load in a sample is not divided too much and that, in the laboratory, time spent pipetting the samples is minimized. Each test consists of samples from about eight patients, well below the maximum of 32 needed for reliable RT-qPCR readings.
It is also important to note that two samples appear together in at most one test. Therefore, if $A^j$ is the $j^{th}$ column of $A$, it follows that the dot product $A^j\cdot A^k = 0$ or $1$. This means that two columns are either orthogonal or span an angle of $\arccos(1/3) \approx 70^\circ$. If we scale $A$ by $1/\sqrt{3}$, we therefore obtain a matrix whose columns are almost orthonormal and from which we can derive the required condition, $\delta_S + \delta_{2S} + \delta_{3S} \lt 1$ for some sufficiently large value of $S$.
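The smallest Steiner triple system, the Fano plane on 7 points, already exhibits both properties used here. This $7\times 7$ toy stand-in (not the article's $16\times 40$ design) can be checked in a few lines:

```python
import numpy as np

# The Fano plane as a pooling matrix: 7 tests (rows) on 7 samples
# (columns); sample j is in test i when point j lies in triple i.

triples = [(0, 1, 2), (0, 3, 4), (0, 5, 6),
           (1, 3, 5), (1, 4, 6), (2, 3, 6), (2, 4, 5)]
A = np.zeros((7, 7), dtype=int)
for i, t in enumerate(triples):
    A[i, list(t)] = 1

# Every sample appears in exactly 3 tests...
print(A.sum(axis=0))
# ...and any two samples meet in exactly one test:
G = A.T @ A
print(G[np.triu_indices(7, k=1)])
```

The off-diagonal entries of $A^TA$ are all 1, so after scaling by $1/\sqrt{3}$ the columns are pairwise nearly orthonormal, just as described above.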
There is an additional simplification we can apply. For instance, if we have a sample $x_j$ that produces a negative result $y_i=0$ in at least one test in which it is included, then we can conclude that $x_j = 0$. This means that we can remove the component $x_j$ from the vector $\xvec$ and the column $A^j$ from the matrix $A$. Removing all these sure negatives often leads to a dramatic simplification in the convex optimization problem.
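The sure-negative step is easy to sketch: drop every sample that occurs in at least one test with a zero result. The matrix below is illustrative, not Tapestry's:

```python
import numpy as np

# "Sure negative" elimination: any sample appearing in a test with
# result zero must itself be zero, so its column can be dropped
# before the optimization step.

def drop_sure_negatives(A, y):
    negative_tests = np.isclose(y, 0)
    # a sample survives only if none of its tests came back negative
    keep = ~(A[negative_tests].any(axis=0))
    return A[:, keep], keep

A = np.array([[1, 1, 0, 0],
              [0, 1, 1, 0],
              [0, 0, 1, 1]])
y = np.array([5.0, 0.0, 5.0])
A_small, keep = drop_sure_negatives(A, y)
print(keep)   # samples 2 and 3 are ruled out by the negative second test
```

At low prevalence most tests are negative, so this step typically removes the bulk of the columns before any optimization is run.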
Tapestry has created a variety of pooling matrices that can be deployed across a range of prevalences. For instance, a $45\times 105$ pooling matrix, which means we perform 45 tests on 105 individuals, is appropriate when the prevalence is roughly 10%, a relatively high prevalence.
However, there is also a $93\times 961$ pooling matrix that is appropriate for use when the prevalence is around 1%. Here we perform 93 tests on 961 patients in pools of size 32, which means we can test about 10 times the number of patients with a given number of tests. This is a dramatic improvement over performing single tests on individual samples.
If the prevalence turns out to be too high for the pooling matrix used, the Tapestry algorithm detects it and fails gracefully.
Along with non-adaptive methods comes an increase in the complexity of their implementation. This is especially concerning since reliability and speed are crucial. For this reason, the Tapestry team built an Android app that guides a laboratory technician through the pipetting process, receives the test results $y_i$, and solves for the resulting viral loads $x_j$, returning a list of positive samples.
Using both simulated and actual lab data, the authors of Tapestry studied the sensitivity and specificity of their algorithm and found that it performs well. They also compared the number of tests Tapestry performs with Dorfman's adaptive method and found that Tapestry requires many fewer tests, often several times fewer, in addition to finishing in a single round.
As we've seen here, non-adaptive pooling provides a significant opportunity to improve our testing capacity by increasing the number of samples we can test, decreasing the amount of time it takes to obtain results, and decreasing the costs of testing. These improvements can play an important role in a wider effort to test, trace, and isolate infected patients and hence control the spread of the coronavirus.
In addition, the FDA recently gave emergency use authorization for the use of these ideas. Not only is there a practical framework for deploying the Tapestry method, made possible by their Android app, it's now legal to do so.
Interestingly, the mathematics used here already existed before the COVID-19 pandemic. Dating back to Dorfman's original work of 1943, group pooling strategies have continued to evolve over the years. Indeed, the team of Shental et al. introduced P-BEST, their SARS-CoV-2 pooling strategy, as an extension of their earlier work to detect rare alleles associated to some diseases.
Mike Breen, recently of the AMS Public Awareness Office, oversaw the publication of the monthly Feature Column for many years. Mike retired in August 2020, and I'd like to thank him for his leadership, good judgment, and never-failing humor.
David Donoho. A Mathematical Data Scientist's perspective on Covid-19 Testing Scale-up. SIAM Mathematics of Data Science Distinguished Lecture Series. June 29, 2020.
Donoho's talk is a wonderful introduction to and overview of the problem, several approaches to it, and the workings of the scientific community.
Claudio M. Verdun et al. Group testing for SARS-CoV-2 allows for up to 10-fold efficiency increase across realistic scenarios and testing strategies.
This survey article provides a good overview of Dorfman's method and similar techniques.
Chris Bilder. Group Testing Research.
Bilder was prolific in the area of group testing before (and after) the COVID-19 pandemic, and this page has many good resources, including this page of Shiny apps.
Idan Yelin et al. Evaluation of COVID-19 RT-qPCR Test in Multi sample Pools. Clinical Infectious Diseases, May 2020.
Milton Sobel and Phyllis A. Groll. Group testing to eliminate efficiently all defectives in a binomial sample. Bell System Technical Journal, Vol. 38, Issue 5, Sep 1959. Pages 1179–1252.
Peter Ungar. The cutoff point for group testing. Communications on Pure and Applied Mathematics, Vol. 13, Issue 1, Feb 1960. Pages 49-54.
Tapestry Pooling home page.
Ghosh et al. Tapestry: A Single-Round Smart Pooling Technique for COVID-19 Testing
This paper and the next outline the Tapestry method.
Ghosh et al. A Compressed Sensing Approach to Group-testing for COVID-19 Detection.
N. Shental et al. Efficient high-throughput SARS-CoV-2 testing to detect asymptomatic carriers.
This article and the next describe the P-BEST technique.
Noam Shental, Amnon Amir, and Or Zuk. Identification of rare alleles and their carriers using compressed se(que)nsing. Nucleic Acids Research, 2010, Vol. 38, No. 19.
David L. Donoho. Compressed Sensing. IEEE Transactions on Information Theory. Vol. 52, No. 4, April 2006.
Emmanuel J. Candès. Compressive sampling. Proceedings of the international congress of mathematicians. Vol. 2, 2006. Pages 1433-1452.
Emmanuel Candès, Justin Romberg, and Terence Tao. Stable Signal Recovery from Incomplete and Inaccurate Measurements. Communications in Pure and Applied Mathematics. Vol. 59, Issue 8, August 2006. Pages 1207-1223.
Emmanuel Candès and Terence Tao. Decoding by Linear Programming. IEEE Transactions on Information Theory. Vol. 51, Issue 12, Dec. 2005. Pages 4203-4215.
Chun Lam Chan, Pak Hou Che and Sidharth Jaggi. Non-adaptive probabilistic group testing with noisy measurements: Near-optimal bounds with efficient algorithms. 2011 49th Annual Allerton Conference on Communication, Control, and Computing (Allerton). Monticello, IL, 2011. Pages 1832-1839.
Steiner system. Wikipedia page.
Does He Have It?
In: 2020, Bill Casselman
Sensitivity, Specificity, and COVID-19 Testing
Bill Casselman
University of British Columbia, Vancouver, Canada
Well, make up your mind. Does he have it or not?
I remarked in an earlier column that there were two major ways in which mathematics has contributed to our understanding of the disease COVID-19 and the coronavirus SARS-CoV-2 that causes it. One was by modeling the development of the epidemic in order to predict possible outcomes, and the other was by uncovering the sources of particular outbreaks by tracking the evolution of genomes. This column will be concerned with a third way in which mathematics helps to control the epidemic, and that is by interpreting procedures that test for the presence of the disease.
The basic problem was illustrated dramatically at the beginning of August. The President of the United States was about to visit the state of Ohio, and naturally it was expected that he and the governor of Ohio (Mike DeWine, a Republican) would meet. As a matter of routine, the governor was tested for coronavirus before the meeting, and the test returned a positive result. It was a definite possibility, even likely, that DeWine was afflicted with COVID-19. He put himself into quarantine immediately, and of course the meeting with the President had to be called off. A short time later a second, presumably more accurate, test was given and this time it came back negative. This second result was later confirmed by third and fourth tests. (You can read more about this incident in an article in The New York Times, or in another NYT article.)
It was suggested in the media (the Washington Post, among others) that this sequence of events would reduce the trust of the population at large in the validity of such tests, and—very, very unfortunately—discourage many from having them. The governor himself, one of the state governors to take the epidemic extremely seriously, stated very clearly that in his opinion avoiding tests was not at all a good idea. Medical experts agree—testing is in fact a crucial tool in controlling the epidemic, and the United States is almost certainly not doing enough of it. At the very least it is doing it awkwardly. It is therefore important to try to understand better what is going on.
The basic fact is very simple: as every doctor should know, medical tests are rarely 100% accurate. Interpreting them is a matter of probability. And probability, of course, is a matter of mathematics.
COVID-19 test image from Wikimedia Commons.
Measuring a test's accuracy
Many medical tests are abstractly similar: they give a yes/no answer to a question about the status of a patient. In the case of the governor of Ohio, the question is, "Does this person have COVID-19?" His first test answered "yes", while the later three answered "no". Clearly, they were not all correct. The tests are certainly inaccurate a certain percentage of the time.
How is the accuracy of such tests measured? There is no single number that does this. Instead, there are two: sensitivity and specificity. To quote one instructive web page, sensitivity and specificity "are the yin and yang of the testing world and convey critical information about what a test can and cannot tell us. Both are needed to fully understand a test's strengths as well as its shortcomings."
Sensitivity characterizes how a test deals with people who are infected. It is the percentage of those who are in fact infected that the test reports as infected. A test with high sensitivity will catch almost every infected person, and will hence allow few of those infected to escape detection. This is the most important criterion. For example, it was reported recently that Spanish health authorities returned thousands of tests to one firm after finding that its tests had sensitivity equal to 30%. This means that 70% of infected people would not be diagnosed correctly by this test.
Specificity characterizes how a test deals with people who are not infected. It is the percentage of those who will test as uninfected when they are in fact uninfected. A test with high specificity will hence report a small number of false positives.
For detecting currently infected people, high sensitivity is important, because infected people who avoid detection (false negatives) are potential sources of further infection. High specificity is not so crucial. Tagging uninfected as infected (making false positives) can be unfortunate, but not with possibly fatal consequences.
It might not be immediately apparent, but sensitivity and specificity are independent features. For example, it could happen that a test lazily returns a "yes" no matter what the condition of the patient happens to be. In this case, it will have sensitivity 100%, but specificity 0%. Or it could behave equally lazily, but in the opposite way, and return a "no" in all cases: sensitivity 0% and specificity 100%. Both of these would be extremely poor tests, but these simple examples demonstrate that the two measures are in some way complementary. (Hence the yin and yang.) We shall see later on another illustration of complementarity.
For COVID-19, the most commonly used tests are of two very different kinds – one attempts to tell whether the tested patient is currently infected, while the second detects antibodies against COVID-19. Tests in this second group do not detect whether the patient is currently infected, but only whether or not the patient has been infected in the past.
In the first group of tests, which detect current infection, there is again a division into two types. One, called a PCR test, looks for genetic fragments of the virus. The other tries to find bits of the virus' protein component. This last is called an antigen test, because it is this protein component that a body's immune system interacts with. (An antigen, according to Wikipedia, is a foreign substance invading the body and stimulating a response of the immune system.) Tests of the first type are generally more accurate than those of the second, but tests of the second type produce results much more rapidly – in minutes rather than hours or even days.
None of these tests is guaranteed to return correct results.
The first test taken by Governor DeWine was an antigen test, and his later tests were all PCR. In all these tests, samples were taken by nasal swabs. The antigen test was relatively new, and produced by the company Quidel. Initial versions were claimed to possess a sensitivity of around 80%, but more recent ones are claimed to have about 97% sensitivity, which is certainly comparable with PCR tests. They also claimed from the start a specificity of 100%.
The outcome of tests
What do these numbers mean? Given that Governor DeWine tested positive, what is the probability that he is infected with COVID-19?
One way to figure this out is to run some mental experiments. Suppose we give the tests to 1000 people chosen at random from a US population. What happens? It turns out that the answer to this depends on the infection level of the population, as we shall see. In other words, in order to carry out this experiment, I have to assume something about this level. Like many other things about COVID-19, this statistic is not known accurately, and in any case varies widely from place to place—perhaps, at this stage of the epidemic, a plausible range is between 1% and 15%. I'll try various guesses, using the values of sensitivity and specificity given by Quidel.
1. Suppose at first 5% of the population to be infected.
In the sample of 1000, there will be around 50 who are currently infected.
Because the sensitivity of the test is 97%, about 48 of these will be labeled as positive, and the remaining 2 will not be correctly detected.
But there remain 950 people in the sample who are not infected. For these it is the specificity that matters. I have assumed this to be 100%, which means that 100% of those who are not infected will be labeled as uninfected, and 0% will be found infected.
Therefore, with these values of sensitivity and specificity, a nominal 48 will be found to be infected, and $950 + 2$ uninfected.
Wait a minute! Governor DeWine turned out ultimately not to be infected, but he was nonetheless found to be infected by the test. This contradicts the claim of 100% specificity! Something has gone wrong.
It is important to realize that the assigned values of sensitivity and specificity are somewhat theoretical, valid only if the test is applied in ideal conditions. Often in the literature these are called clinical data. But in the real world there are no ideal conditions, and the clinical specifications are not necessarily correct. Many errors are related to the initial sample taking, and indeed it is easy enough to imagine how a swab could miss crucial material. However, it is hard to see how an error could turn a sample without any indication of COVID-19 to one with such an indication. In any case, following the inevitable Murphy's Law, errors will certainly degrade performance.
2. This suggests trying out some slightly less favorable values for sensitivity and specificity, say 90% sensitivity and 95% specificity, but with the same 5% infection rate.
In this case, 45 out of the true 50 infected will be caught, and 5 will not be tagged.
There will still be 950 who are not infected, but 5% = (100 - 95)% of these, i.e. about 48, will return positive.
That makes another 48, and a total of 93 positive test results.
In this experiment, Governor DeWine is one of 93, of whom 45 are infected, 48 not. Given the data in hand, it makes sense to say that the probability he is one of the infected is 45/93 = 0.49 or 49%. Definitely not to be ignored.
3. We might also try different guesses of the proportion of infection at large. Suppose it to be as low as 1%.
Then of our 1000, 10 will be infected. Of these, 90%, i.e. 9, will test positive.
In addition, there will be 990 who are not infected, and 5% or about 49 of these will test as positive, making a total of 58.
Now the probability that the Governor is infected is 9/58 = 15%, much lower than before. So in this case, when the proportion of the overall population who are infected is rather small, the test is swamped by false positives.
4. Suppose the proportion of infected at large to be as high as 20%.
Then of our 1000, 200 will be infected. Of these, 90%, i.e. 180, will test positive.
In addition, there will be 800 who are not infected, and 5% or about 40 of these will test as positive, making a total of 220.
Now the probability that the Governor is infected would be 180/220 = 82%, much higher than before. Looking at these three examples it seems that the test becomes more accurate as the overall probability of infection increases.
5. To get a better idea of how things go without rehearsing the same argument over and over, we need a formula. We can follow the reasoning in these calculations to find it. Let $a$ be the sensitivity (with $0 \le a \le 1$ instead of being measured in a percentage), and let $b$ be the specificity. Suppose the test is given to $N$ people, and suppose $P$ of them to be infected.
Then $aP$ of these will be infected and test positive.
Similarly, $(1-a)P$ will be infected but test negative.
There are $N – P$ who are not infected, and of these the tests of $b(N-P)$ will return negative.
That also means that the remainder of the $N-P$ uninfected people, or $(1-b)(N-P)$, will test positive (these are the false positives).
That makes $aP + (1-b)(N-P)$ in total who test positive. Of these, the fraction who are infected, which we can interpret as the probability that a given person who tests positive is actually infected, is $$ { aP \over aP + (1 - b)(N-P) } $$
We can express this formula in terms of probabilities instead of population sizes. The ratio $p = P/N$ is the proportion of infected in the general population. The ratio $q = (N-P)/N$ is the proportion of uninfected. Then the ratio in the formula above is $$ { ap \over ap + (1-b)q } $$
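The formula translates directly into code; running it over the prevalences considered above reproduces the 49%, 15%, and 82% figures:

```python
# Probability that a positive test is a true positive, from the formula
# above, with a = sensitivity, b = specificity, p = prevalence.

def prob_infected_given_positive(a, b, p):
    q = 1 - p
    return a * p / (a * p + (1 - b) * q)

# The scenarios worked out above (a = 0.90, b = 0.95):
for p in (0.05, 0.01, 0.20):
    print(round(prob_infected_given_positive(0.90, 0.95, p), 2))
```

As the graph shows, the probability of a positive result being correct climbs steeply with the prevalence of infection.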
The advantage of the formula is that we don't have to consider each new value of $P/N$ on its own. Given $a$ and $b$, we can graph the formula as a function of the proportion $p = P/N$ of infected. For $a = 0.90$, $b = 0.95$ we get:
A good test is one for which the number of those who test positive is close to the number of those who are infected. The graph shows that this fails when the proportion of infected in the population at large is small. As I said earlier, the test is swamped by false positives.
Likewise, the proportion of infected who test negative (the potentially dangerous false negatives) is
$$ { (1-a)p \over (1 – a)p + bq } $$
And in a graph:
This time, the test is better if the graph lies low—if the number of infected who escape is small. The two graphs again illustrate that sensitivity and specificity are complementary. As the proportion infected in the population at large goes up, one part of the test improves and the other degrades.
The moral is this: sensitivity and specificity are intrinsic features of the tests, but interpreting them depends strongly on the environment.
Repeating the tests
As the case of Governor DeWine showed, the way to get around unsatisfactory test results is to test again.
To explain succinctly how this works, I first formulate things in a slightly abstract manner. Suppose we apply a test with sensitivity $a$ and specificity $b$ to a population with a proportion of $p$ infected and $q = 1-p$ uninfected. We can interpret these as probabilities: $p$ is the probability in the population at large of being infected, $q$ that of not being infected.
The test has the effect of dividing the population into two groups, those who test positive and those who test negative. Since the tests are not perfect, each of those should again be partitioned into two smaller groups, the infected and the uninfected. Those who test positive have [infected : uninfected] equal to $[ap : (1-b)q]$.
Those who test negative have [infected: uninfected] equal to $[(1-a)p:bq]$.
So the effect of the test is that we have divided the original population into two new populations, each of which contains both infected and uninfected people. For those who tested positive, the new values of $p$ and $q$ are
$$ p_{+} = { ap \over ap + (1-b)q }, \quad q_{+} = { (1-b)q \over ap + (1-b)q } , $$
while for those who tested negative they are
$$ p_{-} = { (1-a)p \over (1-a)p + bq }, \quad q_{-} = { bq \over (1-a)p + bq }. $$
These are the new infection data we feed into any subsequent test, possibly with new $a$, $b$.
Now we can interpret Governor DeWine's experience. The first test was with the Quidel antigen test, say with $a = 0.90$ and $b = 0.95$. Suppose $p = 0.05$. Since the test was positive, the probability of the Governor being infected is therefore 0.49. But now he (and, in our imaginations, his cohort of those whose tests were positive) is tested again, this time by the more accurate PCR test, with approximate accuracy $a = 0.80$ and $b = 0.99$. This test came back negative, which produces a new probability $p = 0.16$ that he is infected, still not so good. But we repeat, getting successive values of $0.037$ and then $0.008$ that he is infected. The numbers are getting smaller, and the trend is plain. Governor DeWine has escaped (this time).
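The arithmetic in this example is easy to check with a short script. This is an illustrative sketch, not code from the column; the function names are my own, and the governor's sequence of one positive antigen test followed by negative PCR tests is taken from the discussion above.

```python
def update_positive(p, a, b):
    """New probability of infection after testing positive
    (sensitivity a, specificity b)."""
    q = 1 - p
    return a * p / (a * p + (1 - b) * q)

def update_negative(p, a, b):
    """New probability of infection after testing negative."""
    q = 1 - p
    return (1 - a) * p / ((1 - a) * p + b * q)

# Governor DeWine's sequence of tests, as described in the text:
p = update_positive(0.05, a=0.90, b=0.95)   # antigen test, positive: about 0.49
for _ in range(3):
    p = update_negative(p, a=0.80, b=0.99)  # three PCR tests, all negative
# p ends up near 0.008, matching the numbers in the text
```

Note that each call simply feeds the output probability back in as the new prior, which is exactly the "new infection data we feed into any subsequent test" described above.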
Estimating the number of infected
In effect, the test is a filter, dividing the original population into two smaller ones. One is an approximation to the set of infected, the other an approximation to the set of uninfected.
A second use of these tests is to try to figure out what proportion of the population is in fact infected. This is especially important with COVID-19, because many of the cases show no symptoms at all. The basic idea is pretty simple, and can be best explained by an example.
In fact, let's go back to an earlier example, with $N = 1000$, sensitivity $0.90$, specificity $0.95$, and 150 infected. Of the 1000, 177 test positive. We run the same test on these 177, and get 123 testing positive. Repeat: we get 109 positives.
What can we do with these numbers? The positives comprise true positives and false positives. The number of true positives is cut down by a factor of $0.90$ in each test, so inevitably the number of true positives is going to decrease as tests go on. But they decrease by a known ratio $a$! After one test, by a ratio of $0.90$; after two tests, a ratio of $0.90^{2} = 0.81$; after three, by $0.90^{3} = 0.729$. Taking this into account, in order to find an approximation to the original number of infected we divide the number of positives by this ratio. We get the following table:
$$ \matrix { \hbox{ test number } & \hbox{ number of positives $N_{+}$ } & \hbox{ ratio $r$ } & N_{+}/r \cr 1 & 177 & 0.9 & 197 \cr 2 & 123 & 0.81 & 152 \cr 3 & 109 & 0.729 & 150 \cr } $$
Already, just running the test twice gives a pretty good guess for the original number infected.
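The back-calculation in the table can be reproduced with expected counts. This is a sketch under my own naming; I use expected values rather than a random simulation, so the counts come out with fractions, but rounded they are close to the whole-number estimates in the table.

```python
def estimate_infected(n, infected, a, b, rounds):
    """Repeatedly retest only the positives; after round k, divide the
    number of positives by a**k to estimate the original infected count."""
    true_pos = float(infected)       # infected still in the positive pool
    false_pos = float(n - infected)  # uninfected still in the positive pool
    estimates = []
    for k in range(1, rounds + 1):
        true_pos *= a         # a fraction a of the infected test positive
        false_pos *= 1 - b    # a fraction 1-b of the uninfected test positive
        estimates.append((true_pos + false_pos) / a ** k)
    return estimates

# With n=1000, 150 infected, sensitivity 0.90, specificity 0.95,
# the successive estimates decrease toward the true count of 150.
```

The key design point is that the false positives shrink by the factor $1-b$ each round while the known correction $a^k$ only undoes the shrinkage of the true positives, so the estimate converges to the true number of infected from above.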
One of my colleagues pointed out to me a simple way to visualize what's going on. Suppose that we are looking at a population of 1000, 20% of whom are infected:
Then we run a test on it, with 20% sensitivity and 90% specificity.
The regions at the top are those whose tests are false.
Reading Further
An earlier Feature Column about measles
On the governor's tests:
A Wikipedia article on testing
NYT accuracy of tests
Understanding COVID-19 False Positives
by Emily Oster
Reliability of COVID-19 testing
Big problems with small errors
The effects of communicating uncertainty on public trust in facts and numbers
Medical articles on testing. Many of these are somewhat technical, but still readable. And, to a mathematician, impressive as well as a bit intimidating.
Understanding medical tests: sensitivity, specificity, and positive predictive value. Many cases examined.
Reconstructed diagnostic sensitivity and specificity of the RT-PCR test for COVID-19. A medRxiv article; discusses repeat testing.
A survey of PCR tests in the New England Journal of Medicine
FDA results on antigen tests
An interactive testing tool from the British Medical Journal
False-negative results of initial RT-PCR assays for Covid-19: A systematic review
Covid-19 testing project. A survey from the University of California, San Francisco. Curious graphics.
On the day I submitted this column to the American Mathematical Society for posting, a new antigen test was tentatively approved by the FDA. To be precise, the company Abbott Diagnostics was issued what is called an Emergency Use Authorization (EUA) for its product BinaxNOW. As the FDA web page says, there are now four such tests given this authorization (search for BinaxNOW on that page). The company claims clinical sensitivity 97.1% and specificity 98.5%, but time will tell whether it is in fact the "game-changer" it has been inevitably claimed to be.
You can find out more about this at:
An article at THE VERGE
An article in the Washington Post (search for remarks by Michael Mina)
Remembering Richard Kenneth Guy: Games and Taking on Mountains
York College (CUNY)
What background is required to do research in mathematics? Who produces new mathematical results and why do such individuals do mathematical research? While mathematics has become a profession and there are individuals who, when asked what they do for a living, respond that they are mathematicians, in the grand sweep of the history of mathematics its being a profession is of quite recent origin. This essay is dedicated to the memory of Richard K. Guy (1916-2020), whose life story and accomplishments display a variety of aspects of the complicated landscape of finding fulfilling work on the part of individuals, the concern of society with nourishing those who can produce and use mathematics, and the community of people who have devoted their lives to encourage the flourishing of mathematics.
Certainly there are many people who today would say they are "mathematicians" rather than that they teach mathematics, are professors of mathematics, or work in industry or government as mathematicians. This group is also notable for having typically earned a doctorate degree in mathematics. Note, however, that many of the individuals who are thought of as great mathematicians of the 19th century and early 20th century who were British or Irish (e.g. Arthur Cayley, James Joseph Sylvester, Augustus de Morgan, William Rowan Hamilton) did not earn doctorate degrees. This was because of the uneven history of where doctorate degrees were awarded. Whereas doctorate degrees were awarded in France, Italy, and Germany as early as the 17th century, and even in the United States in the late 19th century, well into the 20th century it was common for British mathematicians not to get doctorate degrees. Galileo (1564-1642), though known as a physicist by the public, taught mathematics. And there were early universities in Paris and Bologna where mathematics was part of the curriculum and scholars of mathematics were trained.
However, one can also pursue a "career" involving mathematics if one studies mathematics education, physics, chemistry, economics, biology, computer science, statistics, or a multitude of other "academic" preparations for being employed, and builds on one's mathematical skills or abilities. I will return to this issue later, after taking a look at Richard Guy's career, which was noteworthy for the variety of experiences he had in pursuit of his interest in and love of mathematics, and whose only doctorate was an honorary one. Another thread I will touch on is the view that when mathematics leaps forward it is through the work of relatively young practitioners; Richard Guy lived and actively worked into old age. He died at 103! Guy was an inspiration for those who love mathematics and are nervous that contributing to mathematics is done only by the relatively young.
Richard Guy–a brief biography
Richard Kenneth Guy was born in England in the town of Nuneaton, which is in the part of the country known as Warwickshire.
Figure 1 (Photo of Richard Guy by Thane Plambeck, from Palo Alto, CA, courtesy of Wikipedia.)
Eventually he made his way to Cambridge University, whose structure is that one studies at one of various "competing" colleges. In the area of mathematics perhaps the best known of the Cambridge colleges is Trinity College, partly because this was the college associated with Isaac Newton (1642-1727) but also because such other famous mathematicians as Isaac Barrow (1630-1677), Arthur Cayley (1821-1895), G.H. Hardy (1877-1947), Srinivasa Ramanujan (1887-1920), Bertrand Russell (1872-1970), William Tutte (1917-2002), and William Timothy Gowers studied or worked there. However, Guy went to Caius College, though the "official" name of the college is Gonville and Caius College. For a young country like America it is hard to realize that Gonville and Caius was founded in 1348 (well before Christopher Columbus voyaged to the New World). Some other mathematicians or mathematical physicists who were students at Caius and whose names you might recognize were:
George Green (1793-1841), John Venn (1834-1903), Ronald Fisher (1890-1962), Stephen Hawking (1942-2018)
Figure 2 (Photo of Gonville-Caius College at Cambridge, courtesy of Wikipedia.)
John Conway (1937-2020), who died recently, was another person well known to the mathematics community who attended Caius College. Conway's and Richard Guy's careers crossed many times! Perhaps Guy's most influential writing was his joint book with Conway and Elwyn Berlekamp, Winning Ways.
Figure 3 (Photo of John Horton Conway.)
Guy served during World War II in the Royal Air Force where he did work related to forecasting the weather–a critical part of the British war effort.
Subsequently, at various times Guy worked as a teacher or took courses at various places. Guy married Nancy Louise Thirian in 1940. She died in 2010. Richard and Louise had three children. His son Michael Guy also did important work in mathematics. Guy relocated from Britain to Singapore in 1950 and taught mathematics at the University of Malaya, where he worked for 10 years. After leaving Singapore he spent time at the new Indian Institute of Technology in Delhi. In 1965, he "settled down" by moving to Canada, where he taught in the mathematics department of the University of Calgary. Although he eventually "retired" from the University of Calgary, he continued to show up at his office regularly until not long before he died. In Calgary he supervised several doctoral students. The following information is drawn from the Mathematics Genealogy Project:
Roger Eggleton, University of Calgary, 1973
Richard Nowakowski, University of Calgary, 1978
Dan Calistrate, University of Calgary, 1998
Jia Shen, University of Calgary, 2008
In turn Eggleton had at least 8 doctoral students and Nowakowski had at least 10.
While at Calgary, Guy helped organize various mathematics conferences, some of which took place at a location in the mountains not far from Calgary.
Figure 4 (Photo of the Banff Center for the Arts and Creativity, in Banff, Alberta [Canada], site for some conferences organized by Richard Guy. Photo courtesy of Wikipedia.)
Richard Guy and Louise both enjoyed the exercise and challenge of hiking. Well after many people give up heading out on the trail, he and Louise continued to hike. Guy also continued to hike after his wife died. A point of pride for Guy was that a Canadian Alpine Hut was named for him and Louise. The hut is not open year-round, but in a policy that would have pleased Guy and his wife, "to avoid pressuring the bear habitat, the Guy Hut will be closed between May 1 and November 30 annually."
Richard Guy eventually got a doctorate degree but it was an honorary degree awarded by the University of Calgary in 1991, relatively late in his long and very productive career as an active mathematician. Much of Guy's career was in academia but the qualifications for a career in academia vary considerably from one country to another and from one era to another. While most people who are famous for their contributions to mathematics in recent years have earned doctorate degrees, there are some famous examples, including Richard Guy, who found different routes to becoming distinguished contributors to mathematics. Guy, having been born in 1916, presents a relatively common case because of the history of granting the doctorate degree in Great Britain. Let me say something about this situation.
For those attempting to understand the history of mathematics and the development of its particular branches (geometry, combinatorics, non-associative algebras, partial differential equations, Markov chains, entire functions (a topic in the theory of functions of a complex variable), Banach spaces, etc.), a wonderfully useful tool is the previously noted Mathematics Genealogy Project. On this website, searches for British mathematicians yield mathematical ancestors, but often the person shown as an ancestor was not a doctoral thesis supervisor but someone who "tutored" or was a primary influence for someone who went on to obtain a reputation in mathematical research without having a doctorate degree. Another historical source of information about individual mathematicians (though not Guy, yet) and various aspects of mathematics is the MacTutor History of Mathematics Archive. Also see this fascinating history of the doctorate in Britain.
Richard Guy's collaborators
MathSciNet lists a large number of people with whom Guy collaborated as an author. He edited quite a few books as well. As mentioned earlier, his best-known book, Winning Ways, was a joint project with John Horton Conway and Elwyn Berlekamp.
Figure 5 (Photo of Elwyn Berlekamp, courtesy of Wikipedia.)
Figure 6 (Photo of John Conway)
But during his long life, Guy had many collaborators. Some of these were "traditional" academics who taught at colleges and universities but two of his collaborators stand out for having benefited mathematics somewhat differently: Paul Erdős (1913-1996) and Martin Gardner (1914-2010).
Paul Erdős was a mathematical prodigy, born in Hungary. He not only showed talent for mathematics as a youngster, but also skill at developing new mathematical ideas and proving new results while quite young. (Musical prodigies show a split, too: those who at an early age can wow one with how well they play the violin or piano but also prodigies like Mozart and Mendelssohn who at amazingly young ages compose new and beautiful music.) Some prodigies continue to wow with their accomplishments as they age but some die young–Mozart died at 35, Schubert at 31, and Mendelssohn at 38. One can only be sad about the wonderful music they would have created had they lived longer, but we can be thankful for all of the wonderful music they did create.
Paul Erdős did not die young but his career did not follow a usual path. Rather than have a career at one academic institution, he became noted for becoming a traveling ambassador for innovative mathematical questions and ideas, particularly in the areas of number theory, combinatorics, and discrete geometry (including graph theory questions). Erdős's creativity and brilliance created interest in how closely connected mathematicians were to Erdős via chains of joint publications (books and jointly edited projects do not count). This leads to the concept of Erdős number. Erdős has Erdős number 0. Those who wrote a paper with Erdős have Erdős number 1 and those who wrote a paper with someone who wrote a paper with Erdős have Erdős number 2. Richard Guy's Erdős number was one, and some might argue that since he wrote 4 papers with Erdős that his Erdős number is 1/4. If you have written a joint paper with a mathematician and are curious to know what your Erdős number is you can find out by using a free "service" of the American Mathematical Society: Find your Erdős number!
Figure 7 (Photo of Paul Erdős, Ron Graham (recently deceased) Erdős number 1, and Fan Chung, Ron Graham's wife–who also has Erdős number 1, courtesy of Wikipedia)
One can compute an Erdős number for people who lived in the distant past. Thus, Carl Gauss has an Erdős number of 4. (Enter names at the link above in the form Euler, Leonhard. For this particular name, though, no path can be found!)
Martin Gardner (1914-2010) stands out as perhaps the most unusual promoter of mathematics for the general public, the science community, and research mathematicians in history. Not only did Gardner never earn a doctorate degree in mathematics, he did not earn any degree in mathematics. He did earn a degree in philosophy from the University of Chicago in 1936. Eventually he became involved with writing a mathematical recreations column for Scientific American magazine, and over a period of time these columns were published, anthologized, and augmented to create a constant stream of books that were well-received by the general public. Given the title of the column that Gardner wrote for Scientific American, Mathematical Games, it is not surprising that he drew on the work of Berlekamp, Conway, and Guy. These individuals made suggestions to Gardner for columns to write, and he in turn, when he had ideas for a column/article, often drew on their expertise to "flesh out" his ideas. Over the years, an army of Scientific American readers (including me) were stimulated by Gardner's columns and books to make contributions to problems that Gardner wrote about or to pursue careers related to mathematics. Readers of Gardner's books and columns also branched out and learned about mathematical topics that they had not seen in mathematics they had learned in school.
Figure 8 (Photo of Martin Gardner, courtesy of Wikipedia.)
Grundy's game
It may be helpful to give an example of a combinatorial game that is easy to describe and which is perhaps new to some readers. One of my favorite combinatorial games is known as Grundy's game, named for Patrick Michael Grundy (1917-1959). Like many combinatorial games it is played using a pile of identical coins, beans, or, as in Figure 9, line segments. The game is an example of an impartial game, which contrasts with partisan games, where each player has his/her own pieces, as in chess or checkers.
Figure 9 (An initial configuration for Grundy's game.)
Figure 10 (One can get to this position in Grundy's game from the position in Figure 9 by a legal move.)
Grundy's game works as follows. The game starts with a single pile of stones (here shown as short line segments in a column). The players alternate in making moves. A move consists of taking an existing pile of segments and dividing it into two piles of unequal size. Thus, for the pile in Figure 9, the player making a move from this position could move to a position with one pile of 6 segments and another of 2 segments. This would be a legal move. A player having to make a move from the position in Figure 10 can NOT make a move based on the pile with two segments, because the only way to divide this pile would create two equal-sized piles, which is not allowed. However, there are several moves from this position: divide the first pile into piles of size 5 and 1 or of size 4 and 2. The game terminates when a player can't make a move. John Conway promoted this rule of "normal" play in combinatorial games, namely, "if you can't move, you lose." If the termination rule is "if you can't move, you win," usually called misère play, then an analysis of optimal play is much harder, for reasons that are far from obvious.
Rather remarkably, whatever the rules of a two-person impartial game, it turns out that the game is equivalent to playing the game of Nim (see below) with a single pile. Charles Bouton in 1902 showed how, for any game of Nim with one or many piles, to determine from the sizes of the piles whether the player making the next move would win or lose. In the language of combinatorial game theory, any position with legal moves remaining can be classified as being in an N position or a P position. An N position is one in which the next player to play can win with proper perfect play, while a P position is one where the previous player, having left this position, could win with proper perfect play.
Note, however, that for a given game, knowing if a position is an N or a P position does not tell a player how to play optimally–that is, what moves to make to guarantee a win no matter what responses the opponent may make, or, if a position is hopeless, what moves to make to force the opponent to make as many moves as possible to win, and perhaps force a mistake that turns a losing position into a win. Any particular "board" of a combinatorial game from which moves can still be made can be thought of as either an N position or a P position.
Nim is played starting from a position with one or more piles of identical objects. A move consists of selecting a single pile and removing any number of objects from it, including removing all of the objects in the pile. Thus, treating the segments of Figure 9 as a Nim position, one could remove 1, 2, 3, 4, 5, 6, 7, or 8 segments from the single pile. This position in Figure 9 is an N position: the player who moves from this position can win by removing all 8 segments. For Figure 10, can you see, with more thought, why this is also an N position? Bouton's solution to checking a Nim position to see its status as an N position or P position involves representing the number of items in each pile as a binary number (binary notation represents any positive integer using only the symbols 0 and 1). Summarizing what Bouton showed:
a. From a P position, every "move" leads to an N position
b. From an N position, there is some move that leads to a P position
c. P positions have a "value" of 0; N positions have a value that is positive.
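Bouton's criterion summarized above fits in a few lines of code: the value of a Nim position is the bitwise XOR of the pile sizes, and a position is a P position exactly when that value is zero. This is a sketch with my own function names; the (6, 2) position is the one reached from Figure 9 in the earlier discussion.

```python
from functools import reduce
from operator import xor

def nim_value(piles):
    # Bouton (1902): XOR together the binary representations of the pile sizes.
    return reduce(xor, piles, 0)

def is_p_position(piles):
    # Value 0 means the previous player wins with perfect play.
    return nim_value(piles) == 0

# The single pile of 8 segments in Figure 9 is an N position (value 8),
# and so is the (6, 2) position of Figure 10, since 6 XOR 2 = 4.
```

For instance, the classic position with piles of 1, 2, and 3 has value 1 XOR 2 XOR 3 = 0, so it is a P position: whoever must move from it loses against perfect play.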
The theory showing that all two-person impartial combinatorial games have a single Nim value is due independently to the German-born mathematician Roland Percival Sprague (1894-1967) and the English mathematician Patrick Michael Grundy (1917-1959). Remarkably, Richard Guy also independently showed the same result.
Let us come back to Grundy's game. In light of the discussion above, every initial position of n (a positive integer) segments (coins, beans) has a Nim value, that is, a single number that represents the "value" of that position. Suppose one looks at the sequence of Nim values for Grundy's game starting from a pile of n objects (each computation can take a fair amount of work). Starting with n = 0 this sequence looks like:
0, 0, 0, 1, 0, 2, 1, 0, 2, 1, 0, 2, 1, 3, 2, 1, 3, 2, 4, 3, …..
Remember that the meaning of a zero is that if it is currently your move, you have lost the game: no move of yours, against optimal play by your opponent, will lead you to a victory. If the value in the sequence is not zero, this means that you can make a move to a position from which your opponent cannot win!
What can one say about this sequence? The pattern seems rather irregular and even if you saw a much larger swath of this sequence it would not seem there was a pattern. However, Richard Guy conjectured that this sequence will eventually become "periodic." This means that after neglecting some long initial piece of the sequence at some point a block of numbers, perhaps large in size, will repeat over and over!
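The sequence above can be recomputed with the Sprague–Grundy recursion: the value of a pile of n is the minimum excluded value (mex) over all splits of n into two unequal piles, where the value of a split is the XOR of the values of its two parts. This is a sketch, with a function name of my own choosing.

```python
def grundy_game_values(n_max):
    """Nim values for Grundy's game on piles of 0..n_max objects."""
    g = [0] * (n_max + 1)
    for n in range(2, n_max + 1):
        # Legal moves: split n into unequal piles a and n-a with a < n-a.
        options = {g[a] ^ g[n - a] for a in range(1, (n + 1) // 2)}
        v = 0                 # mex: smallest value not among the options
        while v in options:
            v += 1
        g[n] = v
    return g

# grundy_game_values(19) reproduces the sequence quoted in the text:
# 0, 0, 0, 1, 0, 2, 1, 0, 2, 1, 0, 2, 1, 3, 2, 1, 3, 2, 4, 3
```

Running this for much larger n_max is a direct way to explore Guy's periodicity conjecture, since one can scan the output for a repeating block.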
Perhaps the work Guy is best known for is the joint work with Berlekamp and Conway on combinatorial games, which resulted in a book through which all three of these men are known to the public–Winning Ways. Taking off from the many new and old games described in this book there have been hundreds of research papers loosely belonging to combinatorial game theory but reaching into every branch of mathematics. While much of Winning Ways is devoted to discrete mathematics (which includes both finite set ideas as well as those involving discrete infinities), the book also treats topics that lead into complex areas related to "infinitely small" and "infinitely large" numbers.
Eventually John Conway wrote about the remarkable ways that studying combinatorial games of various kinds leads to ideas of numbers that go beyond the integers, rational numbers, and real numbers, which are the most widely studied discrete infinite and "dense" number systems (in a dense system, given any two numbers there is some other number of the system between them). This work of Conway is described in the book Surreal Numbers by Donald Knuth, which is aimed at a general audience, and in Conway's more technical book On Numbers and Games. Guy's work helped popularize many of Conway's more technical contributions.
There are two major ways mathematics has contributed to understanding "games." One strand of game theory concerns conflict situations that arise in various attempts to understand issues related to economics–different companies or countries take actions to get good outcomes for themselves in situations that involve other "players." The other is combinatorial games like Nim, Grundy's game, and checkers and chess.
Richard Guy's mathematics
Richard Guy's influence as a mathematician comes from a blend of the influence he had through broadcasting the ideas and accomplishments of his collaborators and his role in calling attention to problems which could be understood and attacked with more limited tools than the most abstract parts of mathematics. He did not invent dramatic new tools to attack old problems, or chart out new concepts that would create new fields in mathematics or new avenues for mathematical investigation. He was not awarded any of the growing number of prizes for dramatic accomplishments in an old field of mathematics (e.g. the work that earned James Maynard the Cole Prize in Number Theory in 2020) or for "lifetime" achievement, such as the Abel Prize awarded to Karen Uhlenbeck in 2019–the first woman to win the Abel Prize.
To get an idea of Guy's work, it helps to look at a sample of the titles of the articles he authored and co-authored. As you can see, they reflect the cheerful and playful aspects of his personality that he showed in his personal interactions with people.
All straight lines are parallel
The nesting and roosting habits of the laddered parenthesis
What drives an aliquot sequence?
John Isbell's game of beanstalk and John Conway's game of beans-don't-talk
Primes at a glance
The second strong law of small numbers
Graphs and the strong law of small numbers
Nu-configurations in tiling the square
The primary pretenders
Catwalks, sandsteps and Pascal pyramids
The number-pad game
Don't try to solve these problems!
Some monoapparitic fourth order linear divisibility sequences
Rick's tricky six puzzle: S5 sits specially in S6
A dozen difficult Diophantine dilemmas
New pathways in serial isogons
His most important books were:
Berlekamp, E. and J. Conway, R. Guy, Winning Ways for Your Mathematical Plays, Two volumes, Academic Press (1982)
Berlekamp, E. and J. Conway, R. Guy, Winning Ways for Your Mathematical Plays, Four volumes, AK Peters (2004)
Guy, Richard. Unsolved problems in number theory, Springer Science & Business Media, 2004 (with 2546 citations on Google Scholar as of July, 2020)
Croft, Hallard T. and Kenneth Falconer, Richard K. Guy, Unsolved problems in geometry, Springer Science & Business Media, 2012
These problem books have been reissued at times to update progress on old problems and give new problems. They have resulted in gigantic progress on a wide array of number theory and geometry problems.
Different countries over the years have had varying "success stories" in reaching a general public about the delights of mathematics. In America one of those success stories was Martin Gardner's contributions mentioned above. In the Soviet Union a different intriguing phenomenon emerged. In America, the typical way one reached students in schools was to provide the students with materials that were not written by especially distinguished research mathematicians but rather with materials that "translated" what researchers had done or thought about down to the level of the students. But in the Soviet Union MIR Publishers developed a series of books by very distinguished mathematicians aimed directly at students. These books, originally published in Russian, were eventually made available to an English-speaking audience by a variety of publishers, who translated the books of these Soviet authors into English. There was the Little Mathematics Library, which was marketed through MIR; there was Popular Lectures in Mathematics, which was distributed by the University of Chicago (under an NSF project led by Izaak Wirszup); there was Topics in Mathematics, published by Heath Publishing, and another version published via Blaisdell Publishing (which was a division of Random House). Some of the Soviet books appeared in several of these series. While not up-to-date about some of the topics that were treated, to this day these books are remarkable examples of exposition. They are books I keep coming back to and which always reward me with another look. Perhaps my personal favorite is Equivalent and Equidecomposable Figures, by V. G. Boltyanskii. This book is an exposition of the amazing theorem now often called the Bolyai-Gerwien-Wallace Theorem.
In brief this theorem says that if one has two plane simple polygons of the same area, one can cut up the first polygon with straight line cuts into a finite number of pieces which can be reassembled to form the second polygon, in the style of a jigsaw puzzle. Figure 11 shows a way to cut a square into four pieces and reassemble the parts into an equilateral triangle of the same area as the original square! Much current work aims to find the minimal number of pieces needed to carry out dissections between two interesting polygon shapes of the same area.
Figure 11 (A square cut into 4 pieces which have been reassembled to form an equilateral triangle of the same area. Image courtesy of Wikipedia.)
In the mid-1980s, the Consortium for Mathematics and Its Applications (COMAP) approached the National Science Foundation about creating a similar series of books as those pioneered in the Soviet Union. Figure 12 shows the cover of one such book developed and written by Richard Guy, aimed at pre-college students and devoted to combinatorial games. Guy's book Fair Game was published in 1989. His manuscript was submitted as a handwritten text which had to be retyped prior to publishing!
Figure 12 (Cover of the book Fair Game, by Richard Guy. Image courtesy of the Consortium for Mathematics and its Applications.)
Richard Guy sometimes referred to himself as an "amateur" mathematician.
While he wrote many papers and books, his legacy is not the theorems that he personally proved but his role as someone who loved mathematics and who was a mathematics enthusiast–writing about, teaching about, and doing mathematics. His wonderful promotion of mathematics will live on long after those who knew him personally are gone.
Those who can access JSTOR can find some of the papers mentioned above there. For those with access, the American Mathematical Society's MathSciNet can be used to get additional bibliographic information and reviews of some of these materials. Some of the items above can be found via the ACM Digital Library, which also provides bibliographic services.
Some books which Richard Guy authored, co-authored or edited:
Guy, R., The Book of Numbers (joint with John Horton Conway), Springer-Verlag, 1996.
Guy, R., Fair Game, Consortium for Mathematics and Its Applications, 1989.
Guy, R. and R. Woodrow (eds.), The Lighter Side of Mathematics, Proc. E. Strens Memorial Conf. on Recr. Math. and its History, Calgary, 1986.
Guy, Richard. Unsolved problems in number theory. Vol. 1. Springer Science & Business Media, 2004.
other references:
Albers, D. and G. Alexanderson (eds.), Fascinating Mathematical People, Princeton U. Press, 2011. (Includes Richard Guy.)
Albert, M. and R. Nowakowski, (eds), Games of No Chance 3, Cambridge U. Press, NY, 2009.
Conway, J., On Numbers and Games, 2nd. edition, AK Peters, Natick, 2001.
Croft, H. and Kenneth Falconer, Richard K. Guy. Unsolved problems in geometry: unsolved problems in intuitive mathematics. Vol. 2. Springer Science & Business Media, 2012.
Erdős, Paul, and Richard K. Guy, Crossing number problems, The American Mathematical Monthly 80 (1973): 52-58.
Guy, R.K. and Smith, C.A., The G-values of various games. In Mathematical Proceedings of the Cambridge Philosophical Society (Vol. 52, No. 3, pp. 514-526). Cambridge University Press, 1956.
Guy, R. K., & Nowakowski, R. J. (1995). Coin-weighing problems. The American mathematical monthly, 102(2), 164-167.
Nowakowski, R. (ed.), Games of No Chance, Cambridge U. Press, 1996.
Nowakowski, R. (ed)., More Games of No Chance, Cambridge U. Press, 2002.
Quantifying Injustice
In: 2020, Ursula Whitcher
Just as a YouTube algorithm might recommend videos with more and more extremist views, machine learning techniques applied to crime data can magnify existing injustice. …
Ursula Whitcher
AMS | Mathematical Reviews, Ann Arbor, Michigan
What is predictive policing?
Predictive policing is a law enforcement technique in which officers choose where and when to patrol based on crime predictions made by computer algorithms. This is no longer the realm of prototype or thought experiment: predictive policing software is commercially available in packages with names such as HunchLab and PredPol, and has been adopted by police departments across the United States.
Algorithmic advice might seem impartial. But decisions about where and when police should patrol are part of the edifice of racial injustice. As the political scientist Sandra Bass wrote in an influential 2001 article, "race, space, and policing" are three factors that "have been central in forwarding race-based social control and have been intertwined in public policy and police practices since the earliest days" of United States history.
One potential problem with predictive policing algorithms is the data used as input. What counts as a crime? Who is willing to call the police, and who is afraid to report? What areas do officers visit often, and what areas do they avoid without a specific request? Who gets pulled over, and who is let off with a warning? Just as a YouTube algorithm might recommend videos with more and more extremist views, machine learning techniques applied to crime data can magnify existing injustice.
Measuring bias in predictive policing algorithms
Dr. William Isaac
In 2016, two researchers, the statistician Kristian Lum and the political scientist William Isaac, set out to measure the bias in predictive policing algorithms. They chose as their example a program called PredPol. This program is based on research by the anthropologist P. Jeffrey Brantingham, the mathematician Andrea Bertozzi, and other members of their UCLA-based team. The PredPol algorithm was inspired by efforts to predict earthquakes. It is specifically focused on spatial locations, and its proponents describe an effort to prevent "hotspots" of concentrated crime. In contrast to many other predictive policing programs, the algorithms behind PredPol have been published. Such transparency makes it easier to evaluate a program's effects and to test the advice it would give in various scenarios.
Dr. Kristian Lum
Lum and Isaac faced a conundrum: if official data on crimes is biased, how can you test a crime prediction model? To solve this problem, they turned to a technique used in statistics and machine learning called the synthetic population.
The term "synthetic population" brings to mind a city full of robots, or perhaps Blade Runner-style androids, but the actual technique is simpler. The idea is to create an anonymized collection of profiles that has the same demographic properties as a real-world population.
A synthetic population isn't about Blade Runner. (Photo by Zach Chisholm, CC BY 2.0.)
For example, suppose you are interested in correlations between choices for a major and favorite superhero movies in a university's freshman class. A synthetic population for a ten-person freshman seminar might look something like this:
Education major; Thor: Ragnarok
Education major; Wonder Woman
History major; Wonder Woman
Math major; Black Panther
Music major; Black Panther
Music major; Thor: Ragnarok
Undeclared; Black Panther
Undeclared; Thor: Ragnarok
Undeclared; Wonder Woman
This is a toy model using just a couple of variables. In practice, synthetic populations can include much more detail. A synthetic population of students might include information about credits completed, financial aid status, and GPA for each individual, for example.
Lum and Isaac created a synthetic population for the city of Oakland. This population incorporated information about gender, household income, age, race, and home location, using data drawn from the 2010 US Census. Next, they used the 2011 National Survey on Drug Use and Health (NSDUH) to estimate the probability that somebody with a particular demographic profile had used illegal drugs in the past year, and randomly assigned each person in the synthetic population to the status of drug user or non-user based on this probabilistic model. They noted that this assignment included some implicit assumptions. For example, they were assuming that drug use in Oakland paralleled drug use nationwide. However, it's possible that local public health initiatives or differences in regulatory frameworks could affect how and when people actually use drugs. They also pointed out that some people lie about their drug use on public health surveys; however, they reasoned that people have less incentive to lie to public health workers than to law enforcement.
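The assignment step Lum and Isaac describe can be sketched in a few lines of Python. The categories and usage rates below are invented placeholders for illustration, not the Census or NSDUH figures they actually used.

```python
import random

# Hypothetical usage rates by age bracket (illustrative numbers only,
# not the NSDUH estimates).
usage_rate = {"18-25": 0.20, "26-34": 0.15, "35+": 0.08}

def synthetic_person(rng):
    """Draw one synthetic profile and randomly assign drug-use status
    according to the rate for its demographic group."""
    age = rng.choice(list(usage_rate))
    return {"age": age, "used_drugs": rng.random() < usage_rate[age]}

rng = random.Random(0)  # fixed seed so the simulation is reproducible
population = [synthetic_person(rng) for _ in range(100_000)]
share = sum(p["used_drugs"] for p in population) / len(population)
```

In a large synthetic population the simulated share of users tracks the assumed rates, which is exactly the property that lets the synthetic data stand in for ground truth that cannot be observed directly.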
A West Oakland transit stop. (Photo by Thomas Hawk, CC BY-NC 2.0.)
According to Lum and Isaac's probabilistic model, individuals living anywhere in Oakland were likely to use illegal drugs at about the same rate. Though the absolute number of drug users was higher in some locations than others, this was due to greater population density: more people meant more potential drug users. Lum and Isaac compared this information to data about 2010 arrests for drug possession made by the Oakland Police Department. Those arrests were clustered along International Boulevard and in an area of West Oakland near the 980 freeway. The variations in arrest levels were significant: Lum and Isaac wrote that these neighborhoods "experience about 200 times more drug-related arrests than areas outside of these clusters." These were also neighborhoods with higher proportions of non-white and low-income residents.
The PredPol algorithm predicts crime levels in grid locations, one day ahead, and flags "hotspots" for extra policing. Using the Oakland Police crime data, Lum and Isaac generated PredPol crime "predictions" for every day in 2011. The locations flagged for extra policing were the same locations that already had disproportionate numbers of arrests in 2010. Combining this information with their demographic data, Lum and Isaac found that Black people were roughly twice as likely as white people to be targeted by police efforts under this system, and people who were neither white nor Black were one-and-a-half times as likely to be targeted as white people. Meanwhile, estimated use of illegal drugs was similar across all of these categories (white people's estimated drug use was slightly higher, at just a bit more than 15%).
A poster reports police brutality on International Blvd. in Oakland. (Photo by Evan Hamilton, CC BY-NC 2.0.)
This striking disparity is already present under the assumption that increased police presence does not increase arrests. When Lum and Isaac modified their simulation to add arrests in targeted "hotspots," they observed a feedback effect, in which the algorithm predicted more and more crimes in the same places. In turn, this leads to more police presence and more intense surveillance of just a few city residents.
In a follow-up paper, the computer scientists Sorelle Friedler, Carlos Scheidegger, and Suresh Venkatasubramanian worked with a pair of University of Utah undergraduate students to explore feedback effects. They found that if crime reports were weighted differently, with crime from areas outside the algorithm's "hotspots" given more emphasis, intensified surveillance on just a few places could be avoided. But such adjustments to one algorithm cannot solve the fundamental problem with predictions based on current crime reports. As Lum and Isaac observed, predictive policing "is aptly named: it is predicting future policing, not future crime."
Sandra Bass, "Policing Space, Policing Race: Social Control Imperatives and Police Discretionary Decisions," Social Justice, Vol. 28, No. 1 (83), Welfare and Punishment In the Bush Era (Spring 2001), pp. 156-176 (JSTOR.)
Danielle Ensign, Sorelle A. Friedler, Scott Neville, Carlos Scheidegger and Suresh Venkatasubramanian. Runaway Feedback Loops in Predictive Policing, Proceedings of the Conference on Fairness, Accountability, and Transparency (FAT*), 2018.
Kristian Lum and William Isaac, To predict and serve? Significance, October 10, 2016. (The Royal Statistical Society.)
Cathy O'Neil, Weapons of Math Destruction.
Transmitting Data with Polar Codes
This then is the significance of Arikan's polar codes: they provide encodings for an important class of channels that enable us to transmit information at the greatest possible rate and with an arbitrarily small error rate. …
Wireless communication is currently transitioning to a new 5G standard that promises, among other advantages, faster speeds. One reason for the improvement, as we'll explain here, is the use of polar codes, which were first introduced by Erdal Arikan in 2009 and which are optimal in a specific information-theoretic sense.
This column is a natural continuation of an earlier column that described Shannon's theory of information. To quickly recap, we considered an information source $X$ to be a set of symbols $\cx$, such as letters in an alphabet, that are generated with a specific frequency $p(x)$. The amount of information generated by this source is measured by the Shannon entropy: $$ H(X) = -\sum_x p(x)\lg p(x). $$
A simple example is an information source consisting of two symbols, such as 0, which is generated with probability $p$, and 1, generated with probability $1-p$. In this case, the Shannon entropy is $$ H(p) = -p\lg(p) - (1-p)\lg(1-p). $$
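As a quick sanity check, this formula is easy to evaluate; the helper below is our own, not part of any library.

```python
from math import log2

def binary_entropy(p):
    """Shannon entropy H(p), in bits per symbol, of a source that emits
    one symbol with probability p and the other with probability 1 - p."""
    if p in (0.0, 1.0):
        return 0.0  # a deterministic source conveys no information
    return -p * log2(p) - (1 - p) * log2(1 - p)
```

As expected, `binary_entropy(0.5)` returns 1.0 (one full bit per symbol), and the function is symmetric in $p$ and $1-p$.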
We described $H(X)$ as a measure of the freedom of expression created by the source. If $p=1$, then the source can only generate a string of 0's, so there is only one possible message created by this source; it therefore conveys no information. On the other hand, when $p=1/2$, each symbol is equally possible so any string of 0's and 1's is equally likely. There are many possible messages, each of which is equally likely, so a given message can contain a specific meaning, and we think of this as a high-information source.
Entropy is measured in units of bits per symbol. However, if we imagine that one symbol is generated per unit time, we may also think of entropy, measured in bits per unit time, as the rate at which information is generated. We earlier saw how this interpretation helped us determine the maximum rate at which messages generated by the source could be transmitted over a noiseless channel, a fact that is particularly relevant for us here.
Polar codes were created to maximize the rate at which information can be transmitted through a noisy channel, one that may introduce errors in transmission. So before we introduce these codes, we will first describe how Shannon's theory tells us the maximum rate at which information can be transmitted over a noisy channel.
Transmission over a noisy channel
Let's consider a channel $W$ through which we send a symbol $x$ and receive a symbol $y$. Ideally, we receive the same symbol that we send, or at least, the sent symbol can be uniquely recovered from the received symbol. In practice, however, this is not always the case.
For example, the following channel sends symbols from the set $\cx=\{0,1\}$ and receives symbols in $\cy=\{0,1\}$. However, there is a chance that the symbol is corrupted in transmission. In particular, the probability is $p$ that we receive the same symbol we send and $1-p$ that we receive the other symbol. Such a channel is called a binary symmetric channel, which we'll denote as $BSC(p)$.
Our central problem is to understand the maximum rate at which information can be transmitted through such a channel and to find a way to do so that minimizes errors.
To this end, Shannon generalizes the concept of entropy. If $X$ is an information source whose symbols are sent through $W$ and $Y$ is the information source of received symbols, we have joint probabilities $p(x,y)$ that describe the frequency with which we send $x$ and receive $y$. There are also the conditional probabilities $p(y|x)$, the probability that we receive $y$ assuming we have sent $x$, and $p(x|y)$, the probability that we sent $x$ assuming we received $y$. These give conditional entropies $$ \begin{aligned} H(Y|X) = & -\sum_{x,y} p(x,y)\lg p(y|x) \\ H(X|Y) = & -\sum_{x,y} p(x,y)\lg p(x|y). \end{aligned} $$
We are especially interested in $H(X|Y)$, which Shannon calls the channel's equivocation. To understand this better, consider a fixed $y$ and form $$ H(X|y) = -\sum_x p(x|y) \lg p(x|y), $$ which measures our uncertainty in the sent symbol $x$ if we have received $y$. The equivocation is the average of $H(X|y)$ over the received symbols $y$: $$ H(X|Y) = \sum_y p(y) H(X|y). $$ Thinking of entropy as a measure of uncertainty, equivocation measures the uncertainty we have in reconstructing the sent symbols from the received. We can also think of it as the amount of information lost in transmission.
For example, suppose that our channel is $BSC(1)$, which means we are guaranteed that $y=x$, so the received symbol is the same as the transmitted symbol. Then $p(x|y) = 1$ if $y=x$ and $p(x|y) = 0$ otherwise, which leads us to conclude that $H(X|Y) = 0$. In this case, the equivocation is zero, and no information is lost.
On the other hand, working with $BSC(1/2)$, we are as likely to receive a 0 as we are a 1, no matter which symbol is sent. It is therefore impossible to conclude anything about the sent symbol so all the information is lost. A simple calculation shows that the equivocation $H(X|Y) = H(X)$.
The difference $H(X) - H(X|Y)$, which describes the amount of information generated minus the amount lost in transmission, measures the amount of information transmitted by the channel. Shannon therefore defines the capacity of a noisy channel $W$ to be the maximum $$ I(W) = \max_X[H(X) - H(X|Y)] $$ over all information sources using the set of symbols $\cx$.
For example, if $W = BSC(1)$, we have $H(X|Y) = 0$ so $$ I(BSC(1)) = \max_X[H(X)] = 1, $$ which happens when the symbols 0 and 1 appear with equal frequency. The capacity of this channel is therefore 1 bit per unit time.
However, if $W=BSC(1/2)$, we have $H(X|Y) = H(X)$ so $$ I(BSC(1/2)) = \max_X[H(X) - H(X|Y)] = 0, $$ meaning this channel has zero capacity. No information can be transmitted through it.
The term capacity is motivated by what Shannon calls the Fundamental Theorem of Noisy Channels:
Suppose $X$ is an information source whose entropy is no more than the capacity of a channel $W$; that is, $H(X) \leq I(W)$. Then the symbols of $X$ can be encoded into $\cx$, the input symbols of $W$, so that the source can be transmitted over the channel with an arbitrarily small error rate. If $H(X) \gt I(W)$, there is no such encoding.
This result may initially appear surprising: how can we send information over a noisy channel, a channel that we know can introduce errors, with an arbitrarily small rate of errors? As an example, consider the channel $W$ with inputs and outputs $\cx=\cy=\{a,b,c,d\}$ and whose transmission probabilities are as shown:
Remembering that $I(W) = \max_X[H(X) - H(X|Y)]$, we find with a little work that the maximum occurs when the symbols of $X$ appear equally often so that $H(X) = 2$. It turns out that the equivocation $H(X|Y) = 1$, which implies that the capacity $I(W) = 1$ bit per unit time.
Suppose now that we have the information source whose symbols are $\{0,1\}$, where each symbol occurs with equal frequency. We know that $H(X) = 1$ so Shannon's theorem tells us that we should be able to encode $\{0,1\}$ into $\{a,b,c,d\}$ with an arbitrarily small error rate. In fact, the encoding $$ \begin{aligned} 0 & \to a \\ 1 & \to c \end{aligned} $$ accomplishes this goal. If we receive either $a$ or $b$, we are guaranteed that $0$ was sent; likewise, if we receive either $c$ or $d$, we know that $1$ was sent. It is therefore possible to transmit the information generated by $X$ through $W$ without errors.
This example shows that we can use a noisy channel to send error-free messages by using a subset of the input symbols. Another technique for reducing the error rate is the strategic use of redundancy. As a simple example, suppose our channel is $BSC(0.75)$; sending every symbol three times and decoding by majority vote allows us to reduce the error rate from 25% to about 16%.
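We can check this with a short computation, assuming the receiver decodes by majority vote over independent transmissions.

```python
from math import comb

def majority_error(p_correct, n):
    """Probability that a majority vote over n independent uses of
    BSC(p_correct) decodes the wrong symbol (n odd): the chance that
    more than half of the transmissions are flipped."""
    p_err = 1 - p_correct
    return sum(comb(n, k) * p_err**k * p_correct**(n - k)
               for k in range((n + 1) // 2, n + 1))
```

For $BSC(0.75)$, a single transmission errs 25% of the time, while `majority_error(0.75, 3)` comes to 0.15625, about 16%; five repetitions push the error rate below 11%, at the cost of an ever-lower transmission rate.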
Unfortunately, the proof of Shannon's theorem tells us that an encoding with an arbitrarily small error rate exists, but it doesn't provide a means of constructing it. This then is the significance of Arikan's polar codes: they provide encodings for an important class of channels that enable us to transmit information at the greatest possible rate and with an arbitrarily small error rate. As we will see, the strategy for creating these codes is a creative use of redundancy.
Symmetric, binary-input channels
Before describing Arikan's polar codes, let's take a moment to clarify the type of channels $W:\cx\to\cy$ we will be working with. The inputs are $\cx=\{0,1\}$ and we require the channel to be symmetric, which means there is a permutation $\pi:\cy\to\cy$ such that $\pi^{-1} = \pi$ and $p(\pi(y)|1) = p(y|0)$. Such a channel is called a symmetric, binary-input channel. The symmetry condition simply ensures that the result of transmitting 0 is statistically equivalent to transmitting 1.
A typical example is the binary symmetric channel $BSC(p):\{0,1\}\to\{0,1\}$ that we described earlier. The symmetry condition is met with the permutation $\pi$ that simply interchanges 0 and 1.
There are two quantities associated to a symmetric channel $W$ that will be of interest.
The first, of course, is the capacity $I(W)$, which measures the rate at which information can be transmitted through the channel. It's not hard to see that the capacity $$ I(W) = \max_X[H(X) - H(X|Y)] $$ is found with the information source $X$ that generates 0 and 1 with equal probability. This means that $p(x) = 1/2$ for both choices of $x$.
Remembering that conditional probabilities are related by $$ p(x|y)p(y) = p(x,y) = p(y|x)p(x), $$ we have $$ \frac{p(x)}{p(x|y)} = \frac{p(y)}{p(y|x)}. $$ This implies that $$ \begin{aligned} -\sum_{x,y} p(x,y) & \lg p(x) + \sum_{x,y}p(x,y)\lg p(x|y) = \\ & -\sum_{x,y} p(x,y) \lg p(y) + \sum_{x,y}p(x,y)\lg p(y|x) \\ \end{aligned} $$ and so $$ H(X) - H(X|Y) = H(Y) - H(Y|X). $$
Since $p(x) = \frac12$, we also have that $p(y) = \frac12 p(y|0) + \frac12 p(y|1)$ and $p(x,y) = \frac12 p(y|x)$. This finally leads to the expression $$ I(W) = \sum_{x,y} \frac12 p(y|x) \lg \frac{p(y|x)}{\frac12 p(y|0) + \frac12p(y|1)}. $$ This is a particularly convenient form for computing the capacity since it is expressed only in terms of the conditional probabilities $p(y|x)$.
In particular, for the binary symmetric channel, we find $$ I(BSC(p)) = 1-H(p) = 1+p\lg(p)+(1-p)\lg(1-p), $$ reinforcing our earlier observation that $H(p)$ is a measure of the information lost in the noisy channel.
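The capacity formula above is straightforward to evaluate numerically. In the sketch below (our own convenience, not notation from the column) a binary-input channel is a dictionary mapping each output symbol $y$ to the pair $(p(y|0), p(y|1))$.

```python
from math import log2

def capacity(W):
    """I(W) for a symmetric binary-input channel with uniform inputs,
    where W[y] = (p(y|0), p(y|1))."""
    I = 0.0
    for p0, p1 in W.values():
        q = 0.5 * p0 + 0.5 * p1          # output probability p(y)
        for p in (p0, p1):
            if p > 0:                    # convention: 0 * lg 0 = 0
                I += 0.5 * p * log2(p / q)
    return I

def bsc(p):
    """Binary symmetric channel: correct transmission with probability p."""
    return {0: (p, 1 - p), 1: (1 - p, p)}
```

Here `capacity(bsc(1.0))` gives 1 bit per unit time and `capacity(bsc(0.5))` gives 0, matching the earlier examples, and in general the value agrees with $1-H(p)$.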
Since the capacity of a symmetric, binary-input channel is computed as $\max_X[H(X) - H(X|Y)]$ where $H(X) \leq 1$, it follows that $0\leq I(W) \leq 1$. When $I(W) = 1$, the equivocation $H(X|Y) = 0$ so no information is lost and this is a perfect channel. In the example above, this happens when $p=0$ or $1$.
At the other extreme, a channel for which $I(W) = 0$ is useless since no information is transmitted.
A second quantity associated to a symmetric channel $W$ measures the reliability with which symbols are transmitted. Given a received symbol $y$, we can consider the product $$ p(y|0)~p(y|1) $$ as a reflection of the confidence we have in deducing the sent symbol when $y$ is received. For instance, if this product is 0, one of the conditional probabilities is zero, which means we know the sent symbol when we receive $y$.
We therefore define the Bhattacharyya parameter $$ Z(W) = \sum_y \sqrt{p(y|0)~p(y|1)} $$ as a measure of the channel's reliability. It turns out that $0\leq Z(W)\leq 1$, and lower values of $Z(W)$ indicate greater reliability. For instance, when $Z(W) = 0$, then every product $p(y|0)~p(y|1) = 0$, which means the symbols are transmitted without error.
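The Bhattacharyya parameter is equally direct to compute. With the same dictionary representation of a channel, mapping each output $y$ to the pair $(p(y|0), p(y|1))$, a sketch looks like this:

```python
from math import sqrt

def bhattacharyya(W):
    """Z(W) = sum over outputs y of sqrt(p(y|0) * p(y|1)), for a
    binary-input channel given as W[y] = (p(y|0), p(y|1))."""
    return sum(sqrt(p0 * p1) for p0, p1 in W.values())

def bsc(p):
    """Binary symmetric channel: correct transmission with probability p."""
    return {0: (p, 1 - p), 1: (1 - p, p)}

def bec(eps):
    """Binary erasure channel: outputs 0, 1, or an erasure '*'."""
    return {0: (1 - eps, 0.0), 1: (0.0, 1 - eps), "*": (eps, eps)}
```

For the binary symmetric channel this gives $Z(BSC(p)) = 2\sqrt{p(1-p)}$, which is 0 for a perfect channel ($p=1$) and 1 for a useless one ($p=1/2$); for the erasure channel it gives $Z(BEC(\ep)) = \ep$.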
We expect $Z(W)$, the channel's reliability, to be related to $I(W)$, the rate of transmission. Indeed, Arikan shows that $Z(W)\approx 0$ means that $I(W)\approx 1$ and that $Z(W)\approx 1$ means that $I(W)\approx 0$. So a high-quality channel will have $I\approx 1$ and $Z\approx 0$, and a poor channel will have $I\approx 0$ and $Z\approx 1$.
For the binary symmetric channel $BSC(p)$, we see the following relationship:
Besides the binary symmetric channel, another example of a symmetric, binary-input channel is the binary erasure channel $W:\{0,1\}\to\{0,*,1\}$ as shown below. We think of the received symbol $*$ as an erasure, meaning the symbol was received yet we do not know what it is, so $\ep$ represents the probability that a sent symbol is erased.
Once again, the permutation $\pi$ that interchanges 0 and 1 while fixing $*$ confirms this as a symmetric channel that we denote as $BEC(\ep)$. The diagram gives the conditional probabilities that we need to find the capacity and Bhattacharyya parameter $$ \begin{aligned} I(BEC(\ep)) & = 1-\ep \\ Z(BEC(\ep)) & = \ep. \end{aligned} $$ Notice that if $\ep=0$, there is no possibility of an erasure and the capacity $I(BEC(0)) = 1$ bit per unit time. On the other hand, if $\ep=1$, every symbol is erased so we have $I(BEC(1)) = 0$ telling us this channel can transmit no information.
Polarization: A first step
Polar codes are constructed through a recursive process, the first step of which we'll now describe. Beginning with a channel $W$, we'll bundle together two copies of $W$ into a vector channel $W_2:\cx^2\to\cy^2$. Then we'll pull the vector channel apart into two symmetric, binary-input channels $\chan{1}{2}$ and $\chan{2}{2}$ and observe how the capacity of these two channels is distributed.
In what follows, we will be considering a number of different channels. We will therefore denote the conditional probabilities of a particular channel using the same symbol as the channel. For instance, if $W=BEC(\ep)$, we will write, say, the conditional probability $W(*|1)=\ep$.
Our vector channel $W_2:\cx^2\to\cy^2$ begins with a two-dimensional input $(u_1,u_2)$ and forms $(x_1,x_2) = (u_1\oplus u_2, u_2)$, where $\oplus$ denotes integer addition modulo 2. Then $x_1$ and $x_2$ are transmitted through $W$, as shown below.
This gives the transmission probabilities $$ W_2((y_1,y_2) | (u_1,u_2)) = W(y_1|u_1\oplus u_2)W(y_2|u_2). $$ It is fairly straightforward to see that $I(W_2) = 2I(W)$ so that we have not lost any capacity.
From here, we obtain two channels: $$ \begin{aligned} \chan{1}{2}&: \cx\to\cy^2 \\ \chan{2}{2}&: \cx\to\cy^2\times \cx. \end{aligned} $$ whose transition probabilities are defined by $$ \begin{aligned} \chan{1}{2}((y_1,y_2)|u_1) & = \frac12 \left(W_2((y_1,y_2)|(u_1,0)) + W_2((y_1,y_2)|(u_1,1))\right) \\ \chan{2}{2}(((y_1,y_2),u_1)|u_2) & = \frac12 W_2((y_1,y_2)|(u_1,u_2)). \end{aligned} $$ This may look a little daunting, but we'll soon look at an example illustrating how this works.
The important point is that we can verify that the total capacity is preserved, $$ I(\chan{1}{2})+I(\chan{2}{2}) = I(W_2) = 2I(W), $$ and redistributed advantageously $$ I(\chan{1}{2})\leq I(W) \leq I(\chan{2}{2}). $$ That is, $\chan{1}{2}$ surrenders some of its capacity to $\chan{2}{2}$, pushing $\chan{1}{2}$ toward a useless channel and $\chan{2}{2}$ toward a perfect channel.
Let's consider what happens when our channel is a binary erasure channel $BEC(\ep)$.
Working out the transition probabilities for $\chan{1}{2}$, we have $$ \begin{array}{c||c|c} (y_1,y_2) \backslash x & 0 & 1 \\ \hline (0,0) & \frac12(1-\ep)^2 & 0 \\ (0,1) & 0 & \frac12(1-\ep)^2 \\ (0,*) & \frac12(1-\ep)\ep & \frac12(1-\ep)\ep \\ (1,0) & 0 & \frac12(1-\ep)^2 \\ (1,1) & \frac12(1-\ep)^2 & 0 \\ (1,*) & \frac12(1-\ep)\ep & \frac12(1-\ep)\ep \\ (*,0) & \frac12(1-\ep)\ep & \frac12(1-\ep)\ep \\ (*,1) & \frac12(1-\ep)\ep & \frac12(1-\ep)\ep \\ (*,*) & \ep^2 & \ep^2 \\ \end{array} $$ Rather than focusing on the individual probabilities, notice that five of the nine received symbols satisfy $\chan{1}{2}(y|0)~\chan{1}{2}(y|1) \neq 0$. More specifically, if either $y_1=*$ or $y_2=*$, it is equally likely that 0 or 1 was sent.
This observation is reflected in the fact that the capacity and reliability have both decreased. More specifically, we have $$ \begin{aligned} I(\chan{1}{2}) = (1-\ep)^2 & \leq (1-\ep) = I(W) \\ Z(\chan{1}{2}) = 2\ep-\ep^2 & \geq \ep = Z(W), \\ \end{aligned} $$ where we recall that larger values of the Bhattacharyya parameter indicate less reliability.
Let's compare this to the second channel $\chan{2}{2}$. The received symbols are now $\cy^2\times \cx$. If we work out the conditional probabilities for half of them, the ones having the form $((y_1,y_2),0)$, we find $$ \begin{array}{c||c|c} ((y_1,y_2),0) \backslash x & 0 & 1 \\ \hline ((0,0),0) & \frac12(1-\ep)^2 & 0 \\ ((0,1),0) & 0 & 0 \\ ((0,*),0) & \frac12(1-\ep)\ep & 0 \\ ((1,0),0) & 0 & \frac12(1-\ep)^2 \\ ((1,1),0) & 0 & 0 \\ ((1,*),0) & 0 & \frac12(1-\ep)\ep \\ ((*,0),0) & \frac12(1-\ep)\ep & 0 \\ ((*,1),0) & 0 & \frac12(1-\ep)\ep \\ ((*,*),0) & \frac12\ep^2 & \frac12\ep^2 \\ \end{array} $$ This channel behaves much differently. For all but one received symbol, we can uniquely determine the sent symbol $x$. In particular, we can recover the sent symbol even if one of the received symbols has been erased. This leads to an increase in the capacity and a decrease in the Bhattacharyya parameter: $$ \begin{aligned} I(\chan{2}{2}) = 1-\ep^2 & \geq 1-\ep = I(W) \\ Z(\chan{2}{2}) = \ep^2 & \leq \ep = Z(W). \\ \end{aligned} $$
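The two conditional-probability tables above can be generated, and the capacity claims checked, with a short computation. The sketch below is our own; it follows the displayed formulas for the two split channels, representing a binary-input channel as a dict mapping each output to $(p(\cdot|0), p(\cdot|1))$.

```python
from math import log2

def capacity(W):
    """Mutual information I(W) with uniform inputs, for a binary-input
    channel given as a dict W[y] = (p(y|0), p(y|1))."""
    I = 0.0
    for p0, p1 in W.values():
        q = 0.5 * p0 + 0.5 * p1              # output probability p(y)
        for p in (p0, p1):
            if p > 0:
                I += 0.5 * p * log2(p / q)
    return I

def split(W):
    """One polarization step: build W- (input u1) and W+ (input u2)."""
    minus, plus = {}, {}
    for y1 in W:
        for y2 in W:
            # W-((y1,y2)|u1) = 1/2 * sum over u2 of W(y1|u1+u2) W(y2|u2)
            minus[(y1, y2)] = tuple(
                0.5 * sum(W[y1][u1 ^ u2] * W[y2][u2] for u2 in (0, 1))
                for u1 in (0, 1))
            # W+(((y1,y2),u1)|u2) = 1/2 * W(y1|u1+u2) W(y2|u2)
            for u1 in (0, 1):
                plus[(y1, y2, u1)] = tuple(
                    0.5 * W[y1][u1 ^ u2] * W[y2][u2] for u2 in (0, 1))
    return minus, plus

def bec(eps):
    """Binary erasure channel: outputs 0, 1, or an erasure '*'."""
    return {0: (1 - eps, 0.0), 1: (0.0, 1 - eps), "*": (eps, eps)}
```

For $BEC(0.25)$ this reproduces $I(\chan{1}{2}) = (1-\ep)^2 = 0.5625$ and $I(\chan{2}{2}) = 1-\ep^2 = 0.9375$, which sum to $2I(W) = 1.5$ as promised.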
The capacities of these channels, as a function of $\epsilon$, are shown below.
It's worth thinking about why the second channel $\chan{2}{2}$ performs better than the original. While the vector channel $W_2:\cx^2\to\cy^2$ transmits the pair $(u_1,u_2)$, $\chan{2}{2}$ assumes that $u_1$ is transmitted without error so that the received symbol is $((y_1,y_2), u_1)$. Since we know $u_1$ arrives safely, we are essentially transmitting $u_2$ through $W$ twice. It is this redundancy that causes the capacity and reliability to both improve. In practice, we will need to deal with our assumption that $u_1$ is transmitted correctly, but we'll worry about that later.
Had we instead considered a binary symmetric channel $W = BSC(p)$, we would find a similar picture:
Polarization: The recursive step
We have now taken the first step in the polarization process: We started with two copies of $W$ and created two new polarized channels, one with improved capacity, the other with diminished capacity. Next, we will repeat this step recursively so as to create new channels, some of which approach perfect channels and some of which approach useless channels.
When $N$ is a power of 2, we form the vector channel $W_{N}:\cx^{N}\to\cy^{N}$ from two copies of $W_{N/2}$. The recipe to create $W_4$ from two copies of $W_2$ is shown below.
From the inputs $(u_1, u_2, \ldots, u_{N})$, we form $(s_1,\ldots, s_{N})$ as $s_{2i-1} = u_{2i-1}\oplus u_{2i}$ and $s_{2i} = u_{2i}$. We then use a reverse shuffle permutation to form $$ (v_1,v_2,\ldots,v_{N}) = (s_1, s_3, \ldots, s_{N-1}, s_2, s_4,\ldots, s_{N}), $$ which serve as the inputs for two copies of $W_{N/2}$.
While that may sound a little complicated, the representation of $W_8$ shown below illustrates the underlying simplicity.
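The combine-and-shuffle recursion can be written directly in code. This sketch mirrors the description above (the function name is ours):

```python
def polar_transform(u):
    """Polar encoding: map inputs (u_1, ..., u_N) to channel inputs
    (x_1, ..., x_N), where N = len(u) is a power of 2."""
    N = len(u)
    if N == 1:
        return list(u)
    # s_{2i-1} = u_{2i-1} xor u_{2i},  s_{2i} = u_{2i}
    s = list(u)
    for i in range(0, N, 2):
        s[i] = u[i] ^ u[i + 1]
    # reverse shuffle: odd-indexed symbols first, then even-indexed
    v = s[0::2] + s[1::2]
    # feed each half into a copy of the size-N/2 transform
    return polar_transform(v[:N // 2]) + polar_transform(v[N // 2:])
```

For $N=2$ this reduces to $(x_1,x_2)=(u_1\oplus u_2, u_2)$ as before; each level of the recursion performs $N/2$ additions and there are $\lg N$ levels, which is the source of the $O(N\log N)$ encoding complexity. A pleasant by-product of the structure is that applying the transform twice returns the original input.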
We now form new symmetric, binary-input channels $$ \chan{i}{N}:\cx\to \cy^N\times \cx^{i-1} $$ as we did before: $$ \begin{aligned} \chan{i}{N}(((y_1,\ldots,y_N),& (u_1,\ldots,u_{i-1}))~|~u_i) = \\ & \frac1{2^{N-1}}\sum_{(u_{i+1},\ldots,u_N)} W_N((y_1,\ldots,y_N)|(u_1,\ldots,u_N)). \end{aligned} $$
Arikan then proves a remarkable result:
If we choose $\delta\in(0,1)$, then as $N$ grows through powers of two, the fraction of channels satisfying $I(\chan{i}{N}) \in (1-\delta, 1]$ approaches $I(W)$. Likewise, the fraction of channels with $I(\chan{i}{N}) \in [0,\delta)$ approaches $1-I(W)$.
Suppose, for instance, that we begin with a channel whose capacity $I(W) = 0.75$. As we repeat the recursive step, we find that close to 75% of the channels are nearly perfect and close to 25% are nearly useless.
More specifically, with the binary erasure channel $BEC(0.25)$, whose capacity is 0.75, and $N=1024$, the channel capacities are as shown below. About three-quarters of the channels are close to perfect and about one-quarter are close to useless.
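For erasure channels the recursion can be tracked exactly, because one polarization step turns $BEC(\ep)$ into $BEC(2\ep-\ep^2)$ and $BEC(\ep^2)$, as we computed earlier. A short script (ours, and special to the erasure case) then reproduces this capacity profile:

```python
def bec_polarized_capacities(eps, n):
    """Capacities of the N = 2**n synthesized channels for BEC(eps),
    tracked via erasure probabilities: each step sends an erasure rate e
    to 2e - e**2 (the '-' channel) and e**2 (the '+' channel)."""
    erasures = [eps]
    for _ in range(n):
        erasures = [e2 for e in erasures for e2 in (2*e - e*e, e*e)]
    return [1 - e for e in erasures]

caps = bec_polarized_capacities(0.25, 10)   # N = 1024 channels, I(W) = 0.75
```

The average capacity stays exactly $I(W)=0.75$ at every level, while the individual channels migrate toward 0 or 1.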
For comparison, here's the result with $BEC(0.75)$, whose capacity is 0.25.
It should be no surprise that the channels $\chan{i}{N}$ with small $i$ are close to useless while the ones with large $i$ are close to perfect. Remember that $\chan{i}{N}$ assumes that the symbols $(u_1,\ldots,u_{i-1})$ are transmitted without error. When $i$ is large, we know that many of the symbols are reliably transmitted so the channel transmits $u_i$ with a lot of redundancy. This redundancy causes an increase in channel capacity $I(\chan{i}{N})$ and a corresponding decrease in our reliability measure $Z(\chan{i}{N})$.
Encoding and decoding
Now that we have a large number of nearly perfect channels, we can describe how data is encoded and decoded to enable transmission through them. For some large value of $N$, we have approximately $NI(W)$ nearly perfect channels and $N-NI(W)$ nearly useless channels. Our goal is to funnel all the data through the nearly perfect channels and ignore the others.
To illustrate, consider the channel $W = BEC(0.5)$ with $I(W) = 0.5$, and suppose we have created $N=8$ channels. This is a relatively small value of $N$, but we expect $NI(W) = 4$ nearly perfect channels and $N-NI(W)=4$ nearly useless channels. The capacity of the 8 channels is as shown here.
We will choose channels 4, 6, 7, and 8 to transmit data. To do so, we will fix values for $u_1$, $u_2$, $u_3$, and $u_5$ and declare these to be "frozen" (these are polar codes, after all). Arikan shows that any choice for the frozen bits works equally well; this shouldn't be too surprising since their corresponding channels transmit hardly any information. The other symbols $u_4$, $u_6$, $u_7$, and $u_8$ will contain data we wish to transmit.
Encoding the data means evaluating the $x_i$ in terms of the $u_j$. As seen in the figure, we perform $N/2=4$ additions $\lg(N) = 3$ times, which illustrates why the complexity of the encoding operation is $O(N\log N)$.
Now we come to the problem of decoding: if we know ${\mathbf y} = (y_1,\ldots, y_N)$, how do we recover ${\mathbf u} = (u_1,\ldots, u_N)$? We decode ${\mathbf y}$ into the vector $\hat{\mathbf u} = (\hat{u}_1,\ldots,\hat{u}_N)$ working from left to right.
First, we declare $\hat{u}_i = u_i$ if $u_i$ is frozen. Once we have found $(\hu_1, \ldots, \hu_{i-1})$, we define the ratio of conditional probabilities $$ h_i = \frac{\chan{i}{N}(({\mathbf y}, (\hu_1,\ldots,\hu_{i-1}))|0)} {\chan{i}{N}(({\mathbf y}, (\hu_1,\ldots,\hu_{i-1}))|1)}. $$ If $h_i \geq 1$, then $u_i$ is more likely to be 0 than 1 so we define $$ \hu_i = \begin{cases} 0 & \mbox{if}~ h_i \geq 1 \\ 1 & \mbox{else.} \end{cases} $$
Two observations are important. First, suppose we begin with ${\mathbf u}=(u_1,\ldots,u_N)$ and arrive at $\hat{\mathbf u} = (\hu_1,\ldots,\hu_N)$ after the decoding process. If $\hat{\mathbf u} \neq {\mathbf u}$, we say that a block error has occurred. Arikan shows that, as long as the fraction of non-frozen channels is less than $I(W)$, the probability of a block error approaches 0 as $N$ grows. This makes sense intuitively: the channels $\chan{i}{N}$ we are using to transmit data are nearly perfect, which means that $Z(\chan{i}{N})\approx 0$. In other words, these channels are highly reliable so the frequency of errors will be relatively small.
Second, Arikan demonstrates a recursive technique for finding the conditional probabilities $\chan{i}{N}$ in terms of $\chan{j}{N/2}$, which means we can evaluate $h_i$ with complexity $O(\log N)$. Therefore, the complexity of the decoding operation is $O(N\log N)$, matching the complexity of the encoding operation.
The complexity of the encoding and decoding operations means that they can be practically implemented. So now we find ourselves with a practical means to transmit data at capacity $I(W)$ and with an arbitrarily small error rate. We have therefore constructed an encoding that is optimal according to the Fundamental Theorem of Noisy Channels.
Arikan's introduction of polar codes motivated a great deal of further work that brought the theory to a point where it could be usefully implemented within the new 5G standard. One obstacle to overcome was that of simply constructing the codes by determining which channels should be frozen and which should transmit data.
In our example above, we considered a specific channel, a binary erasure channel $W = BEC(\epsilon)$, for which it is relatively easy to explicitly compute the capacity and reliability of the polarized channels $\chan{i}{N}$. This is an exception, however, as there is no efficient method for finding these measures for a general channel $W$. Fortunately, later work by Tal and Vardy provided techniques to estimate the reliability of the polarized channels in an efficient way.
Additional effort went into improving the decoding operation, which was, in practice, too slow and error prone to be effective. With this hurdle overcome, polar codes have now been adopted into the 5G framework, only 10 years after their original introduction.
Claude Shannon and Warren Weaver. The Mathematical Theory of Communication. University of Illinois Press. 1964.
Erdal Arikan. Channel polarization: A method for constructing capacity-achieving codes for symmetric binary-input memoryless channels. IEEE Transactions on Information Theory. 55 (2009): 3051-3073.
Erdal Arikan. Polar Coding Tutorial. Lecture notes from the Simons Institute. 2015.
Ido Tal and Alexander Vardy. How to construct polar codes. IEEE Transactions on Information Theory. 59 (2013): 6562-6582.
Valerio Bioglio, Carlo Condo, and Ingmar Land. Design of Polar Codes in 5G New Radio. IEEE Communications Surveys & Tutorials. 2020.
Mathematics and the Family Tree of SARS-Cov-2
There are two major ways in which mathematics has contributed to our understanding of the disease CoVid-19 and the coronavirus SARS-Cov-2 that causes it. One that is very prominent is through mathematical modelling, which attempts to predict the development of the epidemic in various circumstances. In earlier Feature Columns I wrote about modelling measles epidemics, and much of what I wrote there remains valid in the new environment. With the appearance of the extremely dangerous CoVid-19, mathematical modelling has become even more important, but the complexity of this work has also become apparent, and maybe not suitable for discussion here.
Another relevant application of mathematics attempts to track the evolution of the virus responsible. The core of every virus is a small string of replicating genetic material, either DNA or RNA, which sits enclosed in a structure of some kind. The structure serves to keep the virus in one piece, but also, and most crucially, facilitates the insertion of the genetic material into target cells. (It is in this respect that SARS-Cov-2 outdoes SARS-Cov-1.) The genetic material then takes over the machinery of the invaded cell in order to reproduce itself. It's a remarkable process and, if it weren't so dangerous to humanity, would be deserving of admiration.
In larger animals, the basic genetic material amounts to double helices made up of two complementary strands of nucleotides—i.e., sequences of the four nucleotides A (adenine), G (guanine), C (cytosine), and T (thymine). In the virus that produces CoVid-19 the genetic material is RNA, which in the larger cells plays little role in reproduction but is involved in translating genetic instructions into the construction of proteins. It is a single strand of about 30,000 nucleotides, in which T is replaced by U (uracil). I'll ignore this distinction.
There is some variation in these strands among different populations of viruses, because strands that differ slightly can often still work well. The variation is caused by random mutations, which substitute one nucleotide for another, and a few other possibly damaging events such as deletions and insertions of genetic material. I do not know what, exactly, causes these, but some viruses are much more prone than others to mutate. For example, what makes the common influenza viruses so difficult to control is that they mutate rapidly, particularly in ways that make it more difficult to recognize them by antibodies from one year to the next. It's easy to believe that this "strategy" is itself the product of natural selection.
The good news is that coronaviruses do not apparently mutate rapidly. They do mutate enough, however, that RNA sequences of viruses from different parts of the world may be interpreted as a kind of geographical fingerprint. Keeping track of these different populations is important in understanding the spread of the disease. Similar techniques are also used to try to understand how SARS-Cov-2 originated in wild animal populations.
Where's the mathematics?
I'll try to give some idea of how things work by looking at a very small imaginary animal, with just four nucleotides in its genome. Every generation, it divides in two. In the course of time each of the progeny can suffer mutations at one site of the genome, and eventually each of these in turn divides. The process can be pictured in a structure that mathematicians call a tree. In the following example, which displays the process through three generations, time is the vertical axis. The animal starts off with genome ACGT. The colored edges mark mutations.
A structure like this is called a rooted tree by mathematicians and biologists. It is a special kind of graph.
And now I can formulate the basic problem of phylogenetics: Suppose we are given just the genetics of the current population. Can we deduce the likely history from only these data? The answer is that we cannot, but there are tools that will at least approximate the history. They are all based on constructing a graph much like the one above in which genomes that are similar to each other are located close to one another. But how to measure similarity? How to translate a measure of similarity into a graph? You can already see one difficulty in the example above. The genome ACCT occurs in three of the final products, but while two of these are related in the sense that they descend from a common mutated ancestor and are therefore tied to the history, the third is the consequence of an independent mutation. It is difficult to see how this could be captured by any analysis of the final population.
It is this problem that the website NextStrain is devoted to. It is most distinguished for its wonderful graphical interpretation of the evolution of different populations of SARS-Cov-2. But, as far as I can see, it tells you next to nothing about underlying theory.
In the rest of this column, I'll try to give some rough idea of how things go. But before I continue, I want to come back to an earlier remark, and modify the basic problem a bit. I am no longer going to attempt to reconstruct the exact history. In reality, this is never the right question, anyway—for one thing, no process is as simple as the one described above. Reproduction is irregular, and one generation does not occupy steps of a fixed amount of time. Some organisms simply die without reproducing. Instead, we are going to interpret a tree as a graphical way to describe similarity, with the point being that the more similar two genomes are, the closer they are likely to be in our tree, and the more recent was their common ancestor. In particular, not all branches in our tree will be of the same length. The tree
illustrates only that A and B are more closely related than A and C or B and C. The actual lengths of edges in a tree will not be important, or even their orientation, as long as we know which node is the root. It is just its topological structure that matters.
Listing trees
All the phylogenetic methods I know of deal with trees that are rooted and binary. This means that (i) they all start off from a single node at the bottom and go up; (ii) whenever they branch, they do so in pairs. The endpoints of branches are called (naturally) leaves. We shall also be concerned with labeled trees in which a label of some kind is attached to every leaf. The labels we are really interested in are genomes, but if we number the genomes we might as well use integers as labels.
In order to understand some of the methods used to assemble relationship trees, we should know how to classify rooted binary trees (both unlabeled and labeled) with a given number of leaves.
There are two types of classification involved. First one lists all rooted binary trees of a given shape, and then one looks at how to label them. For a small number of leaves both these steps are very simple, if slightly tedious.
Two leaves. They all look like the figure on the left:
And there is essentially one way to label them, shown on the right. What do I mean, "essentially"? I'll call two trees essentially the same if there exists a transformation of one of them into the other that takes edges to edges and the root to the root. I'll call two labelings the same if such a transformation changes one into the other. In the tree above the transformation just swaps branches.
Three leaves. There is only essentially one unlabeled tree. To start with, we plant a root. Going up from this are two branches. One of them must be a leaf, and the other must branch off into two leaves. So the tree looks essentially like this:
As for labelings, there are three. One must first choose a label for the isolated leaf, then the others are `essentially' determined:
(There are three other possible labelings, but each yields a tree essentially the same as one of those above.)
Four leaves. There are two types of unlabeled trees: in one, the root branches into an isolated leaf and an unlabeled tree of three leaves; in the other, it branches into two trees of two leaves each.
There are 15 labelings. There is a systematic way to list all rooted trees with $n$ leaves, given a list of all those with $n-1$, but I won't say anything about it, other than to hint at it in the following diagram:
I will say, however, that there are 105 labeled trees with 5 leaves, and that in general, there are $1\cdot 3 \cdot \, \ldots \, \cdot (2n-3)$ labeled trees with $n$ leaves. (I refer for this to Chapter 2 of the lecture notes by Allman and Rhodes.) This number grows extremely rapidly. It is feasible to make lists for small values of $n$, but not large ones. This impossibility plays a role in reconstructing the phylogeny of a large set of viruses.
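The product $1\cdot 3 \cdot \, \ldots \, \cdot (2n-3)$ is easily tabulated, and a few lines of Python (the function name is my own) make the growth concrete:

```python
def labeled_tree_count(n):
    """Number of essentially different labeled rooted binary trees
    with n >= 2 leaves: the product 1 * 3 * 5 * ... * (2n-3)."""
    count = 1
    for k in range(3, 2 * n - 2, 2):
        count *= k
    return count
```

This reproduces the counts above (1 for two leaves, 3 for three, 15 for four, 105 for five); already for $n = 20$ the count exceeds $10^{20}$, which is why exhaustive search over trees is hopeless.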
Reconstructing the entire history is, for all practical purposes, impossible. What one attempts is just to find an approximation to it. The structure of the candidate graph should reflect at least how close the items are to each other. As explained in the lecture notes by Allman and Rhodes, there are many different ways to measure closeness, and many different techniques of reconstruction. None is perfect, and indeed no perfect method is possible.
The techniques come in two basic flavours. Some construct a candidate tree directly from the data, while others search through a set of trees looking for the one that is best, according to various criteria. The ones that construct the tree directly don't seem to do well, and the methods most often used examine a lot of trees to find good ones. I'll look at the one that Allman and Rhodes call parsimony. It is not actually very good, but it is easy to explain, and gives some idea of what happens in general. It tries to find the tree that is optimal in a very simple sense—it minimizes, roughly, the total number of mutations implied by the connections of the tree. (The term, like many in modern biology, has taken on a technical meaning close to, but not quite the same as, that in common usage. According to Wikipedia, parsimony refers to the quality of economy or frugality in the use of resources.)
In this method, one scans through a number of rooted trees whose labels are the given genomes, and for each one determines a quantitative measure (which I'll call its defect) of how closely the genomes relate to each other. One then picks out that tree with the minimum defect as the one we want. In fact, in this method as in all of this type there may be several, more or less equivalent.
How is the defect of a candidate labeled tree computed? I'll illustrate how this goes for only one set of genomes and one tree:
Step 1. The defect of the tree is the sum of the defects of each of its nucleotide positions (in our case, four). For each position, one calculates at every node of the tree a certain subset of nucleotides and a certain defect, according to this rule: if the node is a leaf, assign it just the nucleotide in question, with defect 0. Then progress from the leaves down. At an internal node whose branches have already been dealt with, assign as subset (i) the intersection of the subsets just above it, if that is non-empty, or (ii) otherwise their union. In the first case, assign as defect the sum of the defects just above; in the second, that sum plus 1. These four diagrams show what happens for our example.
The total defect for this labeled tree is therefore 4.
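The leaf-to-root rule of Step 1 is known as Fitch's algorithm, and it is short enough to sketch in code. In this Python sketch the encoding of trees by nested tuples and the sample genomes are my own invented example, not the one in the figures:

```python
def site_defect(tree, state):
    """Defect of one nucleotide position, computed leaf-to-root:
    a leaf gets {its nucleotide} and defect 0; an internal node gets
    the intersection of its children's sets if non-empty (defect = sum),
    otherwise their union (defect = sum + 1)."""
    if isinstance(tree, str):                      # a leaf label
        return {state[tree]}, 0
    (s1, d1) = site_defect(tree[0], state)
    (s2, d2) = site_defect(tree[1], state)
    common = s1 & s2
    if common:
        return common, d1 + d2
    return s1 | s2, d1 + d2 + 1

def tree_defect(tree, genomes):
    """Total defect = sum of the per-position defects."""
    n = len(next(iter(genomes.values())))
    return sum(site_defect(tree, {k: g[i] for k, g in genomes.items()})[1]
               for i in range(n))
```

For instance, with genomes {'w': 'ACGT', 'x': 'ACCT', 'y': 'ACCT', 'z': 'GCGT'} and the tree (('w', 'x'), ('y', 'z')), the total defect comes out to 3.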
Step 2. In principle, you now want to scan through all the trees with $n$ leaves and calculate the defect of each one. There will be one or more trees with minimal defect, and these will be the candidates for the phylogeny. In practice, there will be too many trees to look at them all, so you examine a small collection of sample trees and then look at a few of their neighbours to see if you can make the defect go down. The trick here, as with many similar methods, is to choose the samples well. I refer to Chapter 9 of Allman-Rhodes.
Dealing with SARS-Cov-2
You can test a method for reconstructing an evolution by running some evolutionary processes and then applying the method to see how its output compares with the true history. In practice, the structures parsimony produces are not generally satisfactory, and it is the method of "Maximum Likelihood" that is used most often. Allman-Rhodes say something about that method, too. As with every one of the methods so far developed, there are both theoretical and practical difficulties with it. But in most situations its results seem to be satisfactory.
With SARS-Cov-2, there are at least two very satisfactory sources of data—GenBank and GISAID. I say something about these in the reference list to follow. Both of these websites offer many thousands of genomes, for SARS-Cov-2 as well as other organisms. These have been contributed by medical researchers from all over the world, and they are well documented.
Elizabeth Allman and John Rhodes, The mathematics of phylogenetics.
These are lecture notes from the 2005 Park City Mathematics Institute. The exposition is relatively clear, in the middle ground between mathematics and biology. They say something about nearly all techniques of mathematical phylogenetics.
Aligning sequence reads to assemble the genome puzzle.
An earlier column involving mathematics and genome sequences. When this was written, it was not yet easy to analyze the nucleotide sequences of genetic material. The first step was to analyze short segments of the genome, and then figure out how to piece these together. This has by now become a mundane task.
Building trees for biologists.
An earlier column on some applications of graphs to biology.
The second edition of James Watson's Molecular Biology of the Gene.
Chapter 15 is a lucid (and terrifying) account of what was known at the time it was written about what a virus does inside a victim. (I have not had access to more recent editions.)
The first published genetic analysis of SARS-Cov-2
Parsimony
Nextstrain is a deservedly famous project that tracks, among other things, the evolution of strains of SARS-Cov-2 with impressive graphics. It allows the user to examine developments interactively, and it also includes a collection of tools for carrying out your own analysis (i.e. DIY genetics). However, as far as I can see it is weak in explaining the theory behind its tools.
NextStrain home page
Nextstrain on CoVid-19
An introduction to NextStrain
National Geographic article on NextStrain
Open policy, very convenient, but less complete than GISAID (below).
A catalogue of genomes (1717 of them at the moment I write this).
Including the reference sequence from Wuhan.
SARS-Cov-2 genomes in a different format.
This site allows you to choose genomes and build phylogenetic trees interactively!
Genomes by T. A. Brown. The entire second edition of this textbook is posted on the NIH site. Links from this page are also of interest. Particularly relevant are:
Understanding a genome sequence, Chapter 7.
Molecular phylogenetics, Chapter 16.
GenBank's guide to genome data
GISAID
This is the most complete archive of SARS-Cov-2 genomes, but with somewhat restrictive rules about publishing the data found there. It requires registration for access. This is not surprising—there are potentially an enormous number of fortunes to be made.
University of Washington software list
To a mathematician, the sheer volume of publications on medical topics is astonishing. I list this site to illustrate this even for the relatively arcane topic of phylogeny. Incidentally, the original reason for developing phylogenetic tools was to track the evolutionary development of life on Earth, which was a matter of interest even before Darwin's theory of evolution appeared. For many years, it was based on morphological aspects rather than genomes. But now that gene sequencing is so simple, it has also become an important part of medical investigation.
Points, Lines, and Incidences
On: April 1, 2020
When people think about mathematics they think of numbers and shapes. "Numbers" covers the interest of mathematics in arithmetic and algebra, and "shape" relates to its concern with geometry. However, in many ways a better way to capture the essence of mathematics is to say that it is concerned with understanding patterns of all kinds, whether they be numerical patterns, geometrical patterns, or patterns in more general senses.
To honor Mathematics and Statistics Awareness Month, I will treat some remarkable developments concerning a basic intuitive idea in geometry: points and lines. For our discussion I will be concerned with points, lines and segments (portions of lines between two points) drawn in the Euclidean plane, in contrast to points and lines on another surface such as a sphere or ellipsoid, or in the hyperbolic plane (many parallel lines) or projective plane (no parallel lines). Many of these ideas have ancient roots in many cultures, but often mathematics reinvents itself by trying to understand ideas that at first glance might seem to have been resolved in the distant past.
Configurations of points and lines
Consider the diagram in Figure 1 that shows two triangles being viewed by an eye. While this diagram has infinitely many points on the lines, we will only be interested in the points accentuated with dots. In this diagram there are many segments but some of these lie along the same line. In drawings of this kind we can show only a part of a line. Note that the corresponding vertices of the triangles determine three lines: AA', BB' and CC' that meet at the point called Eye. Note also that we will consider EyeA', EyeA and AA' as three different segments even though EyeA' is made up of two smaller (shorter) segments.
Figure 1 (The vertices of two triangles viewed from the point named Eye.)
If you count the points in Figure 1, there are 7, and the number of lines is 9. Three of the lines have three points on them and the other 6 lines have two points on them. Note that while the segment EyeC and the segment AB meet at a point, we will not be concerned with that point, or with the one where segment A'B' meets segment CC'. We need to decide how to report the information we see. The segment EyeA' consists of the segments EyeA and AA', and the points Eye, A and A' are all points on one line.
The set of labeled points shown in Figure 1 gives rise to some two-point lines, and some of the points lie on 3-point lines, but note that the diagram does not show all of the lines that the set of 7 points might determine. If the 7 points were in "general position" they would determine 21 (= (7*6)/2) different lines. You are probably aware that any two distinct points in the Euclidean plane determine a unique line. However, if one starts, for example, with 4 points on a line, then any pair of these points determines the same line.
Figure 1 is the starting point for a famous theorem that I will return to in a moment, but to set the stage for what is to come let us also look at the points and lines in Figure 2. This diagram calls attention to 7 points also, but with only 3 lines. One line has 2 points, another 3 points, and the third line has 4 points. Five of the points have only a single line passing through them, and two points have two lines passing through them. However, perhaps you are wondering whether the two "vertical" lines actually intersect (meet) somewhere in the distance. Many people would say the diagram suggests that the lines are parallel, that is, that they do not have a point in common.
Figure 2 (A collection of 7 points and three lines.)
Though in many ways the result I am about to discuss belongs more naturally to projective geometry (where there are no parallel lines, all pairs of distinct lines meet in exactly one point) than Euclidean geometry, it is a theorem as stated for the Euclidean plane.
There is much more that can be said about the diagram that we started with in Figure 1. Figure 3 shows another diagram that helps illustrate the mathematical result involved. The point labeled "Eye" in Figure 1 corresponds to the point labeled O in Figure 3. Furthermore, the two triangles in Figure 1 did not overlap, but they do in Figure 3. What happens (Figure 3) when we look at the lines determined by the corresponding vertices of the two triangles, the lines determined by AA', by BB' and by CC'? Of course it might be possible that some pair of these three lines would be parallel and would not meet. However, let us assume that in fact every pair of these lines does intersect, and that they intersect in the points P, Q and R. (P and Q are shown in Figure 3 but not R.) Can anything be said about this triple of points? Ordinarily if one picks three points at "random," one would not expect them to lie on a line but rather to be the vertices of a triangle. However, the remarkable fact here is that when the two initial triangles have the property that the lines connecting their corresponding vertices meet at a point, the corresponding edges of the triangles (when no pair is parallel) meet at points which lie on a single line!!
Figure 3 (A Desargues configuration but not showing the point where AB and A'B' intersect at R, which would be on the line with P and Q.)
This theorem was not noticed by the great Greek geometers but was discovered by Girard Desargues (1591-1661). It is an example of a situation in mathematics where something that one would not expect to happen, as if by magic, does happen. This ability of mathematics to make surprising connections in unexpected situations is part of what attracts so many people to the subject. For those not sure whether Figure 3 might just be an accident, you can take a look at another copy of a Desargues configuration (Figure 4), highlighting the two triangles in color, the point they are in "perspective" from (indicated in red, where the blue lines intersect), and the line they are in "perspective" from.
Figure 4 (Another completed Desargues configuration showing 10 points and 10 lines. Courtesy of Wikipedia.)
Here is one formal statement of Desargues's Theorem:
If the lines joining the corresponding vertices of two distinct triangles ABC and abc in the Euclidean plane pass through a single point O, then the corresponding sides of these triangles intersect at points which lie on a single line.
Sometimes this theorem is stated as: if two triangles are "in perspective" from a point, then they are "in perspective" from a line.
Theorems like Desargues's Theorem have come to be known as configuration theorems; there are other theorems of this kind that predate Desargues.
In addition to the "configuration" consisting of points and lines that is represented by the 10 points and 10 lines implicit in Figure 4, there is another amazing configuration theorem which is due to the great mathematician Pappus of Alexandria (lived approximately 290-350).
Figure 5 (A diagram illustrating Pappus's Theorem.)
Pappus's Theorem states that given six points, A, B, C on one line and a, b, c on another line that intersects it (none of the six points being the point where the two lines intersect), as shown in Figure 5, the three points where Ab intersects aB, Ac intersects aC and Bc intersects bC are collinear (lie on a single line)!
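Since the theorem is a statement about coordinates, it can be checked directly on an example. The following Python sketch uses exact rational arithmetic; the six particular points are my own choice:

```python
from fractions import Fraction as F

def line_through(p, q):
    """Coefficients (a, b, c) of the line a*x + b*y + c = 0 through p and q."""
    (x1, y1), (x2, y2) = p, q
    return (y1 - y2, x2 - x1, x1 * y2 - x2 * y1)

def intersect(l1, l2):
    """Intersection point of two non-parallel lines, by Cramer's rule."""
    (a1, b1, c1), (a2, b2, c2) = l1, l2
    d = a1 * b2 - a2 * b1
    return (F(b1 * c2 - b2 * c1, d), F(a2 * c1 - a1 * c2, d))

def collinear(p, q, r):
    """True when the three points lie on a single line."""
    return (q[0] - p[0]) * (r[1] - p[1]) == (q[1] - p[1]) * (r[0] - p[0])

# A, B, C on the line y = 0; a, b, c on the line y = x:
A, B, C = (F(1), F(0)), (F(2), F(0)), (F(3), F(0))
a, b, c = (F(1), F(1)), (F(2), F(2)), (F(4), F(4))

P = intersect(line_through(A, b), line_through(a, B))
Q = intersect(line_through(A, c), line_through(a, C))
R = intersect(line_through(B, c), line_through(b, C))
print(collinear(P, Q, R))   # True
```

Any six points satisfying the hypotheses will do; changing them changes P, Q and R, but the three points remain collinear.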
Whereas Desargues's Theorem can be thought of as involving 10 points and 10 lines, with 3 points on each line and 3 lines passing through each point, the Pappus configuration can be thought of as having 9 points and 9 lines, where three of the points lie on each of the specified lines and three of the lines pass through each of the specified points.
Pappus's Theorem can be considered as a special case of another remarkable configuration theorem that was discovered by Blaise Pascal (1623-1662), the French philosopher, author and mathematician.
Figure 6 (Portrait of Blaise Pascal.)
Pappus's Theorem involves 6 points which lie on two lines, and the way these points determine lines that in turn create points which lie on a line as well. But since two lines are each given by an equation of degree 1 (an equation of the form $ax + by + c = 0$ where $a, b,$ and $c$ are real numbers, with $a$ and $b$ not both zero), if we multiply these two equations together we get an equation which is of degree two in the variables $x$ and $y$. This would be a quadratic equation in two variables. The "shapes" you are probably most familiar with that satisfy such equations are the conic sections: circles, ellipses, parabolas, and hyperbolas. But two intersecting straight lines or two parallel straight lines can be thought of as "degenerate" conic sections. (A conic section gets its name because it can be thought of as what results from cutting a cone with a plane.) What Pascal realized was that one could generalize Pappus's Theorem to planar conics.
Pascal's Theorem:
If 6 points form a hexagon whose vertices lie on a conic section, then the three pairs of opposite sides of this hexagon meet at points which lie on a straight line.
Figure 7 (A diagram illustrating Pascal's Theorem, involving 6 points inscribed in a conic section, here taken to be an ellipse. Diagram courtesy of Wikipedia.)
The rise of interest in configuration theorems such as those of Pappus, Desargues, and Pascal was related to attempts to put the understanding of geometry in a richer context. Attempts to show that Euclid's 5th Postulate, in a modern formulation:
If P is a point not on line l, then there is a unique line parallel to l through the point P
could be deduced from the other axioms (starting assumptions) were not successful. We know today why they weren't: there are mathematically (logically consistent) geometries where Euclid's 5th postulate fails to hold. Furthermore, people were investigating point/line questions on the sphere, a curved surface rather than a flat one, but where there were "natural" suggestions for what might be "lines" and points on such a surface. In the end, spherical geometry evolved into two different approaches. In the more interesting approach a point was considered to be a pair of points which were at opposite ends of a diameter. It may seem strange to think of two things being identified so that they are considered one, but this point of view is very fruitful because now "two points" determine a unique line, a great circle of the sphere, which is a circle centered at the center of the sphere.
Many people who know some geometry are puzzled by the difference between longitude lines and latitude lines. Longitude lines are great circles which cut through the idealized sphere that represents the Earth at the south and north poles. The circle that is "halfway" between the poles, lying in a plane perpendicular to the line joining the poles, is a great circle too, and it is the equator of the system. But all the other latitude lines that are "parallel" to the equator (in the sense that they don't meet the equator) are not great circles. They are circles which represent the intersection of a plane parallel to the equator, with the sphere.
Incidences
Suppose we are given a "configuration" C of points and lines in the Euclidean plane: a finite set P of $m$ points in the Euclidean plane and a finite set L of $n$ lines which contain some of the points of P. It is possible that the lines of L intersect at points not in P (see Figure 8), and it is possible that L does not contain all of the lines determined by pairs of points in P (Figure 2). One might be interested in counting how many point-line pairs there are in the configuration C. A point-line pair, that is, a point $p$ of the set P which is on a line $l$ of L, is called an incidence. We are not concerned with lengths of segments or with angles between the lines.
The point-line configuration in Figure 8 has 7 straight lines (so the set L has 7 elements) and 5 points (so the set P has 5 elements). Note that the lines are displayed to suggest they are infinite. The section between two points will be referred to as a segment, and when we have in mind the "whole" line that a segment lies in we will call that a line. Some pairs of the points shown do not lie on any line of L. One might consider adding lines between points not already joined, and adding to our configuration points that are obtained from the intersection of such a line with an existing line (assuming that it meets an existing line in a point that is not already in P). However, in what follows we are only concerned with the set of points P given to us and the set of lines L given to us. For Figure 8, of the 5 points (dark dots), one has 2 lines going through it, three points have 3 lines going through them, and one point has 4 lines going through it. For the lines, 6 lines have two points on them and one line has 3 points on it.
Figure 8 (An irregular configuration of points and lines starting with a point set P with 5 elements and a line set L with 7 elements.)
For practice you may want to verify by direct count that this configuration has 15 point-line incidences and 10 different segments. (The line with three points has 3 segments in all.) Try counting the number of points, lines, segments, lines with $s$ points, points through which $t$ lines pass, and incidences in Figure 8. You should find 15 incidences, and notice that you can count them in two ways: by taking each line in turn and finding how many points lie on it, or by taking each point in turn and finding how many lines pass through it! In the diagram shown in Figure 2, where we have 5 points in the set P and 3 lines in the set L, we can count 9 incidences.
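Since an incidence count is just a count over point-line pairs, small examples are easy to check by machine. Here is a minimal Python sketch (my own illustration, not taken from the article; the configuration below is a made-up example, not Figure 8 or Figure 2):

```python
def incidences(points, lines):
    """Count point-line pairs (p, l) with p on l.
    Lines are given as coefficient triples (a, b, c) meaning a*x + b*y = c."""
    count = 0
    for (x, y) in points:
        for (a, b, c) in lines:
            if a * x + b * y == c:
                count += 1
    return count

# A small example: three collinear points plus two more,
# and the three lines x = 0, y = 0, and y = x.
P = [(0, 0), (1, 0), (2, 0), (0, 1), (1, 1)]
L = [(1, 0, 0),   # x = 0
     (0, 1, 0),   # y = 0
     (1, -1, 0)]  # y = x

print(incidences(P, L))
```

Counting line by line gives 2 + 3 + 2 = 7 incidences, matching the brute-force count.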
Shortly we will discuss a remarkable theorem about counting incidences, the Szemerédi-Trotter Theorem. But to fully appreciate the circle of ideas involved in this result I will discuss some ideas that seem, initially, not related to incidences. When most people hear the word graph they think of the graph of a function. When Descartes extended the "synthetic" geometry of Euclid to the analytical geometry of today, it became possible to graph algebraic expressions. Historically this helped with the rapid and fruitful development of the Calculus.
However, there is another more recent use of the word graph. It belongs to the domain of discrete mathematics rather than the domain of continuous mathematics and involves the geometric structures consisting of dots and lines or dots and curves. Graphs of this kind consist of a number of points called vertices (singular vertex) and lines/curves which join up pairs of vertices which are called edges. Figure 9 shows a sample of some graphs.
Figure 9 (A sample of three different graphs, each with one connected piece.)
Each of these three graphs consists of a single piece, and such a graph is called connected–intuitively there is a way of taking any pair of vertices in the graph and getting between them by moving along the edges of the graph. In principle we might have many different lines/curves which connect up a pair of vertices, and we might have vertices that are joined to themselves by a curve (or broken line), but these structures, sometimes called multi-graphs, pseudographs, or graphs with loops and multiple edges, will not be considered here. Some graphs have the property that they have been drawn in the plane so that edges meet only at vertices. Such graphs are called plane graphs, and graphs which have the potential to be drawn in the plane so that edges meet only at vertices are known as planar graphs. Figure 10 shows an example of a planar graph on the left and a structurally equivalent graph, an isomorphic graph, on the right; the single crossing has been eliminated.
Figure 10 (A graph (left) drawn in the plane showing one crossing. This graph is planar because it can be redrawn as shown on the right to get the plane graph with the one crossing eliminated.)
While graph theory is a relatively recent subarea of geometry, a relatively recent subarea of graph theory, with mathematical and computer science implications, has been the subject of graph drawing. This domain of ideas concerns questions about the nature of the appearance of different ways a graph might be drawn in the plane.
When a graph is drawn in the plane there may be many different drawings to represent isomorphic copies of the same graph. In particular one might ask for a drawing with the minimum number of crossings for that graph. This number is called the crossing number of the graph. In fact, the crossing number in itself has interesting variants because there can be a difference in the number of crossings when all of the edges are drawn with straight lines versus the number of crossings when the edges are allowed to be curves.
The Szemerédi-Trotter Theorem
A remarkable accomplishment in understanding the number of incidences for a collection of points and lines is what has come to be called the Szemerédi-Trotter Theorem. This result is named after Endre Szemerédi and William T. (Tom) Trotter.
Figure 11 (Photos of Endre Szemerédi and William (Tom) Trotter. Courtesy of Wikipedia and Tom Trotter respectively.)
We will first set the stage for the remarkable result of Szemerédi and Trotter, which was proved in 1983.
Szemerédi-Trotter Theorem:
Suppose that we have a set P of m distinct points and a set L of n distinct lines in the Euclidean plane. As we saw above, an incidence is a point-line pair (p,l) where p belongs to P, l belongs to L, and point p lies on line l. What is the maximum number of possible incidences, calculated in terms of the values of m and n? If we denote by I(P,L) the number of incidences for the particular given sets P and L, then we will denote by I(m,n) the largest value of I(P,L) over all choices of P with m elements and L with n elements. Notice that in the statement of the theorem below, m and n appear in a symmetrical way. This suggests that perhaps one can get insights into points and lines when we try interchanging their roles. Such a "duality" of points and lines holds fully in projective geometry, where any pair of lines always intersects.
Theorem (Szemerédi-Trotter, 1983):
$$I(m, n) = O\left(m^{2/3}n^{2/3} + m + n\right)$$
Informally, this theorem gives the order of magnitude that is possible for the largest number of incidences that can occur for a collection of points P with $m$ points and a collection of lines L with $n$ lines.
It is possible to show that this result is "best possible" except that the exact constant that holds for the expression above is not yet known (see below).
Before looking in more detail at the content of this result, here are a few remarks about the "big O" notation used in the description of the Szemerédi-Trotter theorem. The notation has a story of its own. It was originated by the German mathematician Edmund Landau (1877-1938), who is most famous as a number theorist.
Figure 12 (Photo of Edmund Landau)
In recent times big O notation is often discussed in the context of trying to determine how long it will take to run (how many steps are required) a computer algorithm in terms of some parameter of the size of the problem, typically denoted with the letter $n$. However, Landau was interested in the more general issue of comparing the growth of different functions expressed using the variable, say $n$. While sometimes in the popular press it is stated that some phenomenon is showing exponential growth, to some extent what is meant is that the phenomenon in the short run is growing faster than other functions that people may have studied in a mathematics class, like $n^2$.
The "formal" definition of "$g(x)$ is $O(f(x))$" is that there exist a constant C and a fixed integer N such that:
$$g(x) \leq C(f(x)) \textrm{ for } x \geq N $$
This definition says nothing about how large C or N might be. For some results of this type C and N can be huge and, thus, the result is sometimes not of "practical" computational interest.
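To make the definition concrete, one can check a claimed pair (C, N) numerically over a finite range. The following small Python sketch is my own illustration; the functions and the constants $C = 6$, $N = 3$ are chosen purely as an example:

```python
def is_bounded(g, f, C, N, upto=10**4):
    """Check numerically that g(n) <= C*f(n) for all N <= n <= upto."""
    return all(g(n) <= C * f(n) for n in range(N, upto + 1))

g = lambda n: 5 * n**2 + 3 * n
f = lambda n: n**2

print(is_bounded(g, f, 6, 3))  # 5n^2 + 3n <= 6n^2 once n >= 3
```

Here $5n^2 + 3n \leq 6n^2$ exactly when $3n \leq n^2$, that is, when $n \geq 3$; so $5n^2 + 3n$ is $O(n^2)$ with $C = 6$ and $N = 3$.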
The use of big O notation created interest in other ways to notate growth of functions or number of steps in running an algorithm. While notation may not seem an important part of the practice of mathematics, it is part of the craft by which new mathematical ideas are born and new theorems are developed. So you might want to look at the meaning of small o and big omega as notations for growth of functions.
Now that we know the meaning of big O notation we can see that the Szemerédi-Trotter Theorem says that for large enough values of $m$ and $n$, the maximum number of incidences is less than some constant times:
$$m^{2/3}n^{2/3} + m + n.$$
There is another way that the Szemerédi-Trotter Theorem can be stated that gives equivalent information but has a seemingly different flavor. If we are given $m$ points in the plane and a positive integer $i \geq 2$, then the number of lines which contain at least $i$ of these points is:
$$O(m^2/i^3 + m/i).$$
In retrospect, it is common that the first proofs of important results are not as insightful and as "clear" as later proofs. The original approach that Szemerédi and Trotter used to prove their result is not usually discussed because proofs that came after theirs were noticeably simpler. Two simpler approaches are now widely reported. One exploited configurations involving a grid of points, an approach associated with the work of György Elekes (1949-2008), who sadly died at a relatively young age; the other, due to László Székely and hinted at below, involves the notion of crossing number.
One tool that has increasingly emerged as useful in discrete geometry for putting geometrical insights to work is graph theory. As we saw, graph theory concerns diagrams such as those in Figures 9 and 10.
Figure 13 (Two graphs, which require some crossings when drawn in the plane, sometimes called the Kuratowski graphs, that serve as "obstacles" to drawing a graph in the plane. Any graph which can't be drawn in the plane without having edges that cross at points other than vertices must in a certain sense be a "copy" of one of these two graphs (or both) within it.)
Figure 14 Kazimierz Kuratowski (1896-1980)
Both of the graphs shown in Figure 13 can be regarded as collections of points in the Euclidean plane. Note that we could have joined some of the vertices with curves to avoid the crossings that occur in the diagram, but we can also interpret each diagram as a "geometric graph" which includes the points where the lines cross at what are not vertices of the original graph. One approach to connecting graph theory to questions about point and line incidences starts from the diagram representing a point set P and a line set L: construct a graph whose vertices correspond to the points in P, and join two vertices $u$ and $v$ by an edge if there is a line segment in the point/line diagram between $u$ and $v$. This leads to a graph that may be isomorphic to a plane graph, one that can be drawn in the plane without any edges meeting or crossing at points other than vertices, or may be a non-planar graph; typically it will have edges meeting at points called crossings, points that are not vertices of the graph involved.
Given a graph, there are many properties one can inquire about. One can ask if a graph is planar, has a simple circuit that visits each vertex once and only once (useful in designing package delivery routes), has a tour of its edges that visits each edge once and only once (useful in designing snowplowing routes), etc. A natural question to ask is: over the different ways to draw a graph in the plane with straight line edges (the different drawings being isomorphic), what are the smallest and largest numbers of crossings of edges that can occur?
Given a graph G, cr(G) is often defined as the minimum number of crossings of edges, not at vertices, in any drawing of the graph G. Both graphs in Figure 13 can be redrawn with 1 crossing and, thus, have crossing number 1. If edges are not allowed to intersect themselves or cross other edges more than once, the drawing shown of the first graph, with 5 crossings, achieves the largest number of crossings. Depending on what "concepts" one is trying to understand, one often puts rules on the allowed drawings. The bottom graph (Figure 13) shows three edges that pass through a single point; for some purposes such a drawing would not be allowed, and one is required to use drawings where at most two edges meet at a crossing.
Given a particular graph G and an integer $c$, deciding whether the crossing number satisfies $\textrm{cr}(G) \leq c$ is known to be NP-complete. This means that it is very unlikely that there is a polynomial time algorithm for answering this question. (There are hundreds of NP-complete problems, and if any one of them could be solved in polynomial time, then it would be possible to solve them all in polynomial time; no one knows whether this can be done!)
Over many years there has been a progression of results concerning bounds on the crossing number of a graph. Euler's Formula states that for any planar connected (one piece) graph, V + F – E = 2, where V, F and E denote the number of vertices, faces and edges of the graph. From this it is not difficult to see that the crossing number of a general graph with V vertices (where V is at least 3) and E edges satisfies $$\textrm{cr}(G) \geq E - 3V + 6.$$
However, making various assumptions about the density of the number of edges (how rich in edges the graph is) of a graph one can get more "interesting" bounds on the crossing number.
Theorem:
$$ \textrm{If } E \geq 4V \textrm{ then cr}(G) \geq (1/64)\left(E^3/V^2\right) $$
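Both lower bounds are easy to evaluate. Here is a short Python sketch (my own illustration; the graph sizes below are arbitrary examples) comparing the Euler-formula bound with the crossing-lemma bound:

```python
def euler_bound(V, E):
    """Lower bound cr(G) >= E - 3V + 6, valid for simple graphs with V >= 3."""
    return max(0, E - 3 * V + 6)

def crossing_lemma_bound(V, E):
    """Lower bound cr(G) >= E^3 / (64 V^2), valid when E >= 4V."""
    if E < 4 * V:
        return None  # the crossing lemma does not apply
    return E**3 / (64 * V**2)

# A dense example graph: V = 100 vertices, E = 1000 edges.
print(euler_bound(100, 1000))            # 706
print(crossing_lemma_bound(100, 1000))   # 1562.5
```

For a graph with 100 vertices and 1000 edges the Euler bound guarantees 706 crossings, while the crossing-lemma bound guarantees more than 1562, illustrating how the cubic dependence on E takes over for dense graphs.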
Various variants of this result are being explored where the "edge density" is changed, in attempts to understand the value of the constant (above 1/64) in the crossing-number expression. Thinking of a configuration of points and lines in the plane as a graph drawn in the plane with crossings allows one to get the expression for incidences in the Szemerédi-Trotter Theorem.
What else can we say about the crossing number of interesting graphs? One particularly interesting family of graphs is the graphs with $n$ vertices, every pair of vertices being joined by exactly one edge–graphs known as complete graphs. Traditionally this graph is denoted $K_n$.
$K_n$ has $(n)(n-1)/2$ edges: each vertex is joined to all of the other $n-1$ vertices, and each edge gets counted twice, once at each of its endpoints, which gives the result $(n)(n-1)/2$.
Thus, $\textrm{cr}(K_n) \geq (n)(n-1)/2 – 3n + 6,$ which simplifies to $(n^2/2) – (7/2)n + 6$.
It might seem that it would be easy to find the exact value of the crossing number of $K_n$ which is very symmetrical combinatorially, but perhaps rather surprisingly the exact value of this crossing number for every $n$ is not yet known. It has been conjectured to be Z(n):
$$(1/64)(n)(n-2)^2(n-4), \textrm{ when n is even, and}$$
$$(1/64)(n-1)^2(n-3)^2, \textrm{ when n is odd.}$$
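The conjectured values are easy to tabulate and compare against the Euler-formula lower bound $n(n-1)/2 - 3n + 6$ derived above. A small Python sketch (my own illustration, using the even/odd formulas quoted above):

```python
def Z(n):
    """Conjectured crossing number of K_n, from the even/odd formulas above.
    Both expressions are exact integers, so integer division is safe."""
    if n % 2 == 0:
        return n * (n - 2)**2 * (n - 4) // 64
    return (n - 1)**2 * (n - 3)**2 // 64

def euler_lower(n):
    """Euler-formula lower bound n(n-1)/2 - 3n + 6 for cr(K_n)."""
    return n * (n - 1) // 2 - 3 * n + 6

for n in range(5, 11):
    print(n, Z(n), euler_lower(n))
```

For $n = 5$ and $n = 6$ the Euler bound already matches the conjectured values 1 and 3; from $n = 7$ on the conjectured values pull ahead (9 versus 6, and so on).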
You can check that this formula gives the crossing number of the complete graph with 5 vertices to be 1. Various people (Paul Turán and the recently deceased Richard Guy) have been involved with the "proper" guess for the minimum number of crossings of complete graphs drawn in the plane, but the formulas above are sometimes associated with the name of the Polish mathematician Kazimierz Zarankiewicz. In the spirit of mathematics, which generalizes interesting ideas and concepts, there are now many kinds of crossing numbers that have been explored.
Figure 15 (Kazimierz Zarankiewicz (1902-1959))
The Szemerédi-Trotter Theorem initially was concerned with incidences between points and lines. Over a period of time it resulted in new theorems related to incidences between points and pseudolines (lines are to uncooked spaghetti as pseudolines are to cooked spaghetti), points and circles of the same radius, points and circles with different radii, points on parallel lines, etc. In this line of work the result was generalized from points and lines to points and other collections of geometric sets. But if one can discuss points and lines in 2-dimensional space, why not look at points and lines in higher-dimensional spaces, or lines and planes in higher-dimensional spaces? You should not be surprised that all of these have come to be looked at and much more.
While these are seemingly natural extensions of the ideas related to the Szemerédi-Trotter Theorem there is again the element of surprise in mathematics. The Szemerédi-Trotter Theorem seems rooted in combinatorial geometry–counting things–but it also has metrical, distance-related ties. While Paul Erdős played a part in the history of the Szemerédi-Trotter Theorem (not directly discussed above) he also had a part in this other thread of results. Given a set of points P in the plane, with different conditions on where the points are arranged with respect to each other and perhaps their pattern on lines that pass through the points, one can measure the Euclidean distance (or other distances) between points in the set P. Now one is interested in studying how few or how many distances can occur. There are many papers about counting distinct distances, repeated distances, etc.!
From little acorns great oaks grow. Incidences may seem like an "acorn" compared with other issues in mathematics but it has been an acorn that has grown impressive results. I hope you enjoy Mathematics and Statistics Awareness Month and try your hand at learning and doing some new mathematics.
Beck, J., On the lattice property of the plane and some problems of Dirac, Motzkin and Erdős in combinatorial geometry, Combinatorica 3 (1983), 281–297.
Brass, P. and W. Moser, J. Pach, Research Problems in Discrete Geometry, Springer, New York, 2005.
Clarkson K. and H. Edelsbrunner, L. Guibas, M. Sharir, E. Welzl, Combinatorial complexity bounds for arrangements of curves and spheres, Discrete Comput. Geom. 5 (1990) 99–160.
Dvir, Z., On the size of Kakeya sets in finite fields. J. Amer. Math Soc. 22 (2009) 1093–1097.
Edelsbrunner, H.: Algorithms in Combinatorial Geometry. Springer- Verlag, Heidelberg (1987)
Elekes, G., On linear combinatorics I, Concurrency—an algebraic approach, Combinatorica 17 (4) (1997), 447–458.
Elekes, G., A combinatorial problem on polynomials, Discrete Comput. Geom. 19 (3) (1998), 383–389.
Elekes, G., Sums versus products in number theory, algebra and Erdős geometry–A survey. in Paul Erdős and his Mathematics II, Bolyai Math. Soc., Stud. 11, Budapest, 241–290 (2002)
Elekes, G., and H. Kaplan, M. Sharir, On lines, joints, and incidences in three dimensions. J. Combinat. Theory, Ser. A 118 (2011) 962–977.
P. Erdős, Problems and results in combinatorial geometry, in Discrete Geometry and Convexity (New York, 1982), vol. 440 of Ann. New York Acad. Sci., 1985, pp. 1–11.
Felsner, S., Geometric Graphs and Arrangements, Vieweg, Wiesbaden, 2004.
Grünbaum, B., Configurations of Points and Lines, Graduate Studies in Mathematics Volume 103, American Mathematical Society, Providence, 2009.
Guth, L. and N. Katz, Algebraic methods in discrete analogs of the Kakeya problem. Advances Math. 225, 2828–2839 (2010)
Guth, L. and N. Katz, On the Erdős distinct distances problem in the plane. Annals Math. 181, 155–190 (2015), and in arXiv:1011.4105.
Kaplan, H., and M. Sharir, E. Shustin, On lines and joints, Discrete Comput. Geom. 44 (2010) 838–843.
Kollar, J., Szemerédi-Trotter-type theorems in dimension 3, Adv. Math. 271 (2015), 30–61.
Matousek, J., Lectures in Discrete Geometry, Springer, New York, 2002.
Pach, J. (ed), Towards a Theory of Geometric Graphs, Contemporary Mathematics 342, American Mathematical Society, Providence, 2004.
Pach, J., and P. Agarwal, Combinatorial Geometry, Wiley, New York, 1995.
Pach, J., and R. Radoicic, G. Tardos, and G. Toth, Improving the Crossing Lemma by finding more crossings in sparse graphs, Discrete Comput. Geom. 36 (2006) 527–552.
Pach, J. and M. Sharir, Repeated angles in the plane and related problems, J. Comb. Theory, Ser. A 59 (1992) 12–22.
Pach, J. and M. Sharir, On the number of incidences between points and curves, Combinatorics, Probability and Computing (1998), 121–127.
Pach, J. and M. Sharir, Geometric incidences, pp. 185–224, in Towards a theory of geometric graphs, vol. 342 of Contemporary Mathematics, AMS, Providence, RI, 2004.
Pach, J. and G. Toth, Graphs drawn with few crossings per edge, Combinatorica 17 (1997) 427–439.
Quilodran, R., The joints problem in R^n, SIAM J. Discrete Math. 23 (2010) 2211–2213.
Sharir, M. and E. Welzl, Point-line incidences in space, Combinatorics, Probability and Computing 13 (2004) 203–220.
Sheffer, A., Distinct Distances: Open Problems and Current Bounds, https://arxiv.org/abs/1406.1949
Solymosi, J., On the number of sums and products, Bulletin of the London Mathematical Society, 37 (2005) 491–494
Solymosi, J. and Tao, T.: An incidence theorem in higher dimensions. Discrete Comput. Geom. 48 (2012) 255–280
Szekely, L., Crossing numbers and hard Erdős problems in discrete geometry, Combinatorics, Probability and Computing 6 (1997) 353–358.
Szemerédi, E., and W. T. Trotter, Extremal problems in discrete geometry, Combinatorica 3 (1983) 381-392.
Tomassia, R. (ed.), Handbook of Graph Drawing and Visualization, CRC Press, Boca Raton, 2014.
Tomassia, R. and I. Tollis (eds.), Graph Drawing, Lecture Notes in Computer Science 894, Springer, New York, 1995.
Toth, C., The Szemerédi–Trotter theorem in the complex plane, Combinatorica, 35 (2015) 95–126
Zahl, J., A Szemerédi–Trotter type theorem in R^4, Discrete Comput. Geom. 54 (2015) 513–572.
From Strings to Mirrors
Scientists often use mathematical ideas to make discoveries. The area of research mathematics known as mirror symmetry does the reverse: it uses ideas from theoretical physics to create mathematical discoveries, linking apparently unconnected areas of pure mathematics.
Maxim Kontsevich has won many recognitions, including the Fields Medal, for his research in mirror symmetry. (Photo by Mathematisches Institut Oberwolfach, CC BY-SA 2.0 DE.)
To tell you where mirror symmetry came from, I have to tell you about string theory. And to do that, I have to tell you why you should care about string theory in the first place. That story starts with an old question: what is the smallest piece of the universe?
Here's a rapid summary of a couple of thousand years of answers to this question in the West. The ancient Greeks theorized that there were four elements (earth, air, fire, water), which combined in different ways to create the different types of matter that we see around us. Later, alchemists discovered that recombination was not so easy: though many chemicals can be mixed to create other chemicals, there was no way to mix other substances and create gold. Eventually, scientists (now calling themselves chemists) decided that gold was itself an element, that is, a collection of indivisible components of matter called gold atoms. Continued scientific experimentation prompted longer and longer lists of elements. By arranging these elements in a specific way, Dmitri Mendeleev produced a periodic table that captured common properties of the elements and suggested new, yet-to-be-discovered ones. Why were there so many different elements? Because (scientists deduced) each atom was composed of smaller pieces: protons, neutrons, and electrons. Different combinations of these sub-atomic particles produced the chemical properties that we ascribe to different elements.
The TRIUMF particle accelerator is used in high-energy physics research.
This story is tidy and satisfying. But there are still some weird things about it: for example, protons and neutrons are huge compared to electrons. Also, experiments around the beginning of the twentieth century suggested that we shouldn't just be looking for components of matter. The electromagnetic energy that makes up light has its own fundamental component, called a photon. The fact that light sometimes acts like a particle, the photon, and sometimes like a wave is one of the many weird things about quantum physics. (The word "quantum" is related to "quantity"—the idea that light might be something we can count!)
Lots and lots of work by lots and lots of physicists trying to understand matter and energy particles, over the course of the twentieth century, produced the "Standard Model." Protons and neutrons are made up of even smaller components, called quarks. Quarks are held together by the strong force, whose particle is a gluon. The weak force, which holds atoms together, has its own force particles. The full Standard Model includes seventeen different fundamental particles.
A zoo of quarks: up, down, charm, strange, top, and bottom (Art by Levi Qışın).
There are two theoretical issues with the Standard Model. One is essentially aesthetic: it's really complicated. Based on their experience with the periodic table, scientists suspect that there should be some underlying principle or structure relating the different types of particles. The second issue is more pressing: there's no gravity particle. Every other force in the universe can be described by one of the "force carrier" particles in the Standard Model.
The Higgs boson is the most recently discovered particle in the standard model (Art by Levi Qışın).
Why is gravity different? The best description we have of gravity is Einstein's theory of general relativity, which says gravitational effects come from curvature in the fabric of spacetime. This is an excellent way to describe the behaviors of huge objects, such as stars and galaxies, over large distances. But at small distance scales (like atoms) or high energies (such as those seen in a particle accelerator or in the early universe), this description breaks down.
People have certainly tried to create a quantum theory of gravity. This would involve a force carrier particle called a graviton. But the theory of quantum physics and the theory of general relativity don't play well together. The problem is the different ways they treat energy. Quantum physics says that when you have enough energy in a system, force-carrier particles can be created. (The timing of their appearance is random, but it's easy to predict what will happen on average, just as we know that when we flip a coin over and over, we'll get tails about half the time.) General relativity says that the shape of spacetime itself contains energy. So why aren't we detecting random bursts of energy from outer space, as gravitons are created and destroyed?
String theory is one possible answer to this question. String theory says that the smallest things in the universe are not point particles. They extend in one dimension, like minuscule loops or squiggles; hence the name string. Strings with different amounts of energy correspond to the particles with different properties that we can detect in a lab.
A diagram of a string
The simplicity of this setup is compelling. Even better, it solves the infinite energy problem. Interactions that would occur at a particular moment in spacetime, in the point particle model, are smoothed out over a wider area of spacetime if we model those interactions with strings. But string theory does pose some conceptual complications. To explain them, let's look at the underlying mathematical ideas more carefully.
In general relativity, we think of space and time together as a multidimensional geometric object, four-dimensional spacetime. Abstractly, the evolution of a single particle in time is a curve in spacetime that we call its worldline. If we start with a string instead of a point particle, over time it will trace out something abstractly two-dimensional, like a piece of paper or a floppy cylinder. We call this the worldsheet. One can imagine embedding that worldsheet into higher-dimensional spacetime. From there, we have a standard procedure to create a quantum theory, called quantization.
A worldsheet
If we work with four-dimensional spacetime, we run into a problem at this point. In general relativity, the difference between time and the other, spatial dimensions is encoded by a negative sign. Combine that negative sign with the standard quantization procedure, and you end up predicting quantum states—potential states of our universe, in this model—whose probability of occurring is the square root of a negative number. That's unphysical, which is a nice way of saying "completely ridiculous."
Since every spatial dimension gives us a positive sign, we can potentially cancel out the negatives and remove the unphysical states if we allow our spacetime to have more than four dimensions. If we're trying to build a theory that is physically realistic, in the sense of having both bosonic and fermionic states (things like photons and things like electrons), it turns out that the magic number of spacetime dimensions is ten.
If there are ten dimensions in total, we have six extra dimensions! Since we see no evidence of these dimensions in everyday life, they must be tiny (on a scale comparable to the string length), and compact or curled up. Since this theory is supposed to be compatible with general relativity, they should be "flat" in a precise mathematical sense, so their curvature doesn't contribute extra gravitational energy. And to allow for both bosons and fermions, they should be highly symmetric. Such six-dimensional spaces do exist. They're called Calabi-Yau manifolds: Calabi for the mathematician who predicted their existence, Yau for the mathematician who proved they really are flat.
A slice of a Calabi-Yau manifold
String dualities
One of the surprising things about string theory, and one of the most interesting from a mathematical perspective, is that fundamentally different assumptions about the setup can produce models of the universe that look identical. These correspondences are called string dualities.
The simplest string duality is called T-duality (T is for torus, the mathematical name for doughnut shapes and their generalizations). Suppose the extra dimensions of the universe were just a circle (a one-dimensional torus). A string's energy is proportional to its length; we can't directly measure the length of a string, but we can measure the energy it has. However, a string wrapped once around a big circle and a string wrapped many times around a small circle can have the same length! So the universe where the extra circle has radius 2 and the universe where the radius is ½ look the same to us. The same holds for the universes of radius 3 and 1/3, 10 and 1/10, or generally $R$ and $1/R$.
Strings wrapped once or many times
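The $R \leftrightarrow 1/R$ symmetry can be illustrated with a toy computation. In a simplified model of the closed-string energy spectrum, a state with momentum number $n$ and winding number $w$ on a circle of radius $R$ contributes $(n/R)^2 + (wR)^2$; this toy formula, and the code below, are my own illustration and omit the oscillator terms of the real spectrum:

```python
def spectrum(R, nmax=3, wmax=3):
    """Toy closed-string energy levels on a circle of radius R:
    (momentum n/R)^2 + (winding w*R)^2, for small n and w."""
    return sorted((n / R) ** 2 + (w * R) ** 2
                  for n in range(nmax + 1) for w in range(wmax + 1))

# Swapping R <-> 1/R exchanges the roles of n and w,
# so the two sorted spectra coincide.
print(spectrum(2.0) == spectrum(0.5))  # True
```

Any observer who can only measure energies sees the same list of levels at radius 2 as at radius ½, which is exactly the indistinguishability the text describes.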
But what if we want a more physically realistic theory, where there are six extra dimensions of the universe? Well, we assume that the two-dimensional string worldsheet is mapping into these six extra dimensions. Our theory will have various physical fields, similar to the electromagnetic field.
To keep track of what a particular field is doing back on the worldsheet, we use coordinates $x$ and $y$. We can combine those coordinates into a single complex number $z = x + iy$. That $i$ there is an imaginary number. When I first learned about imaginary numbers, I was certain they were the best numbers, since they used the imagination; I know that "Why are you wasting my time with numbers that don't even exist?" is a more typical reaction. In this case, though, $i$ is standing in for a very concrete concept, direction: changing $x$ moves right or left, while changing $iy$ moves up or down. If we simultaneously increase $x$ a little bit and $y$ a little bit, we'll move diagonally right and up; we can think of that small diagonal shift as a little change in $z = x + iy$. If you want to be able to move all around the plane, just increasing or decreasing $z$ like this isn't enough. Mathematicians use $\bar{z} = x - iy$ to talk about motion that goes diagonally right and down.
$z$ and $\bar{z}$
Now, back to building our string theory. The fields depend on $x$ and $y$, but they're highly symmetric: to figure out how they act on the whole worldsheet, it's enough to know how they change either based on a little change in $z$, or based on a little change in $\bar{z}$ (so we don't have to measure right-and-up and left-and-down changes separately). If you have two fields like this, they might change in similar ways (both varying a lot due to small changes in $z$, say), or they might change in different ways (one depending on $z$ and the other on $\bar{z}$).
From the physics point of view, this choice is not a big deal. You're just choosing either two plus signs (this choice is called the B-model) or a plus and a minus sign (the A-model). Either way, you can carry on from there and start working out all the physical characteristics of these fields, trying to understand predictions about gravity, and so on and so forth. Because this choice really doesn't matter, it shouldn't make any difference to your eventual predictions. In particular, any type of universe you can describe by choosing two plus signs and working out the details should also be a type of universe you can describe by choosing one plus and one minus, then working out those details.
How do we match up those two types of universes? By choosing different shapes for the six extra dimensions. Using this logic, physicists predicted that if you picked a specific shape for the extra dimensions of the universe and worked out the details of the A-model, you should be able to find a different shape that would give you the same physical details once you worked out its B-model theory.
Now, I said the sign choice wasn't a big deal from the physical perspective. But it's a huge deal from the mathematical perspective. If you only choose plus signs, you can rewrite everything that happens in terms of just powers of $z$, and start doing algebra. Algebra is great! You can program your computer to do algebra, and find lots of information about your six-dimensional space really fast! On the other hand, if you choose one plus and one minus sign, you're stuck doing calculus (a very special kind of multivariable, multidimensional calculus, where experts engage in intense arguments about what sorts of simplifying assumptions are valid).
Thus, when physicists came along and said, "Hey, these two kinds of math give you the same kinds of physical predictions," that told mathematicians they could turn incredibly difficult calculus problems into algebra problems (and thereby relate two branches of mathematics that had previously seemed completely different). Mathematicians call this insight, and the research it inspired, "mirror symmetry."
Xenia de la Ossa lectures on mirror symmetry. (Photo by Joselen Pena, CC BY-SA 4.0.)
If you would like to learn more about current research in mirror symmetry, here are some resources!
Kevin Hartnett, Mathematicians Explore Mirror Link Between Two Geometric Worlds
Hartnett writes about recent developments in mirror symmetry for Quanta Magazine.
Evelyn Lamb, Why Mirror Symmetry Is Like Fancy Ramen
Lamb describes her interview with Kevin Knudson and me about mirror symmetry.
Timothy Perutz, The Mirror's Magic Sights: An Update on Mirror Symmetry
A more technical description of recent progress in mirror symmetry.
Proportion of dementia in Australia explained by common modifiable risk factors
Kimberly Ashby-Mitchell ORCID: orcid.org/0000-0002-4709-27371,
Richard Burns1,
Jonathan Shaw2 &
Kaarin J. Anstey1
At present, dementia has no known cure. Interventions to delay onset and reduce prevalence of the disease are therefore focused on risk factor reduction. Previous population attributable risk estimates for western countries may have been underestimated as a result of the relatively low rates of midlife obesity and the lower weighting given to that variable in statistical models.
Levin's attributable risk formula, which assumes independence of risk factors, was used to calculate the proportion of dementia attributable to seven modifiable risk factors (midlife obesity, physical inactivity, smoking, low educational attainment, diabetes mellitus, midlife hypertension and depression) in Australia. Using a recently published modified formula and survey data from the Australian Diabetes, Obesity and Lifestyle Study, a more realistic population attributable risk estimate which accounts for non-independence of risk factors was calculated. Finally, the effect of a 5–20% reduction in each risk factor per decade on future dementia prevalence was computed.
Taking into consideration that risk factors do not operate independently, a more conservative estimate of 48.4% of dementia cases (117,294 of 242,500 cases) was found to be attributable to the seven modifiable lifestyle factors under study. We calculated that if each risk factor was to be reduced by 5%, 10%, 15% and 20% per decade, dementia prevalence would be reduced by between 1.6 and 7.2% in 2020, 3.3–14.9% in 2030, 4.9–22.8% in 2040 and 6.6–30.7% in 2050.
Our largely theory-based findings suggest a strong case for greater investment in risk factor reduction programmes that target modifiable lifestyle factors, particularly increased engagement in physical activity. However, further data on risk factor treatment and dementia risk reduction from population-based studies are needed to investigate whether our estimates of potential dementia prevention are indeed realistic.
Dementia describes a collection of symptoms associated with impaired memory and is characterised by progressive declines in thinking ability, physical function and behaviour [1]. Dementia has no known cure, and as it progresses so too does the inability to perform tasks of daily living [2]. At present, more than 40 million people worldwide are estimated to have the condition, with over US$600 billion spent on treatment and management [3, 4]. This figure is projected to increase to well over 70 million people by 2030 [4]. Such worrying future prevalence estimates highlight the need for urgent intervention focused on risk reduction because even a modest delay in onset can result in significant public health gains.
Cognitive decline and dementia are multi-causal. Research has shown that lifestyle factors (e.g. smoking habits, diet and physical inactivity) increase the risk of late-life dementia and that interventions targeting these can significantly reduce the population prevalence of dementia [5–7]. The potential impact of possible interventions to delay the onset of dementia on future prevalence of the condition has been reported previously for the world and specific regions [8, 9]. It is estimated that any intervention which could delay the onset of dementia by 1 year could reduce worldwide cases by 11% [10], while a 2-year and 5-year delay in onset could reduce the cumulative number of people developing dementia by 13% and 30% respectively [8]. In Australia, published research highlights that as little as a 10% reduction in dementia cases attributable to key modifiable lifestyle factors could result in savings of $280 million [11]. Delaying dementia onset therefore not only lessens the average number of years spent living with the disease but also has significant public health resource allocation implications [12].
Using a method published previously [9], we estimated the proportion of dementia in the Australian setting attributable to seven modifiable risk factors shown to be associated with the disease in the literature (midlife obesity, physical inactivity, smoking, low educational attainment, diabetes mellitus, midlife hypertension and depression).
Our study is novel because we take into account non-independence of risk factors, thereby providing more realistic population attributable risk (PAR) estimates for Australia than those obtained using the traditional Levin formula. In addition, we aimed to examine the effect of reducing the relative prevalence of each risk factor by 5%, 10%, 15% and 20% per decade (compounding reductions) on the future prevalence of dementia. Finally, we wanted to compare our PAR estimates with those produced previously for the USA, the UK, Europe and Australia.
To the authors' knowledge, this is the only Australian study to provide estimates of PARs and future dementia prevalence taking into consideration non-independence of risk factors and utilising such a wide range of modifiable risk factors. This work will allow for comparison of Australia with other countries and other regions, and will inform dementia risk reduction policies.
PAR allows researchers and policymakers to estimate how much disease could be eliminated if there was a reduction in the prevalence of a causal factor or groups of interrelated factors [13]. For the calculation of PAR, relative risk and disease prevalence data are needed [14]. In the present study, the population prevalence of each risk factor was obtained from the 2011–2013 Australian Health Survey (the largest and most comprehensive health survey ever conducted in Australia) [15]. This survey represents a collation of the National Health Survey (n = 20,500 persons; one adult and one child from 15,500 households), the National Aboriginal and Torres Strait Islander Health Survey (n = 13,000), the National Nutrition and Physical Activity Survey (n = 12,000 persons; one adult and one child from 9500 households) and a National Health Measures Survey (11,000 survey participants aged 5+ years). The survey utilised a range of data collection methods including questionnaires, blood and urine tests and pedometers, and aimed to collect information about health status, risk factors, socio-economic circumstances, health-related actions, nutrition, physical activity and use of medical services. Table 1 presents the definitions for each of the risk factors included in the study.
Table 1 Risk factor definitions
Meta-analyses examining the association between dementia and the seven risk factors of interest were used to obtain relative risk data. Table 2 presents the relative risk and prevalence data utilised in this study and their sources.
Table 2 Prevalence and relative risk data sources
Levin's Population Attributable Risk formula was used to calculate the proportion of dementia cases attributable to each of the risk factors under investigation [16]:
$$ \mathrm{PAR} = \frac{P \times (RR - 1)}{1 + P \times (RR - 1)}, $$
where P = population prevalence and RR = relative risk.
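As a quick sketch of the calculation (the prevalence and relative risk values below are hypothetical, not the study's actual inputs):

```python
def levin_par(prevalence, relative_risk):
    """Levin's population attributable risk:
    PAR = P*(RR - 1) / (1 + P*(RR - 1))."""
    excess = prevalence * (relative_risk - 1.0)
    return excess / (1.0 + excess)

# Hypothetical example: a risk factor with 30% population prevalence
# and a relative risk of 1.8 for dementia.
print(round(levin_par(0.30, 1.8), 3))  # 0.24 / 1.24 ≈ 0.194
```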
Still assuming independence of risk factors, we estimated their combined effect [17]:
$$ \begin{array}{l}\mathrm{Combined\ PAR} = 1 - \left(1 - PAR_{midlife\ obesity}\right)\times \left(1 - PAR_{physical\ inactivity}\right)\times \\ {}\left(1 - PAR_{smoking}\right)\times \left(1 - PAR_{low\ educational\ attainment}\right)\times \left(1 - PAR_{diabetes\ mellitus}\right)\times \\ {}\left(1 - PAR_{midlife\ hypertension}\right)\times \left(1 - PAR_{depression}\right)\end{array} $$
We accounted for non-independence of risk factors by using a previously published modified formula which takes into account the unique contribution of each risk factor 'w' [9]:
$$ PAR_{Adjusted\ Combined} = 1 - \prod \left(1 - w \times PAR\right). $$
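Both combination rules can be sketched in a few lines; the individual PAR values and weights 'w' below are placeholders for illustration, not the estimates reported in this study:

```python
def combined_par(pars):
    """Combined PAR assuming the risk factors act independently."""
    product = 1.0
    for par in pars:
        product *= (1.0 - par)
    return 1.0 - product

def adjusted_combined_par(pars, weights):
    """Modified formula: each PAR is down-weighted by w, its unique
    (non-shared) contribution estimated from the factor analysis."""
    product = 1.0
    for par, w in zip(pars, weights):
        product *= (1.0 - w * par)
    return 1.0 - product

# Placeholder values for the seven risk factors, illustration only:
pars = [0.17, 0.18, 0.09, 0.15, 0.02, 0.14, 0.08]
weights = [0.85] * len(pars)  # 'w' would come from the factor analysis
print(round(combined_par(pars), 3))
print(round(adjusted_combined_par(pars, weights), 3))
```

Setting every weight to 1 recovers the unadjusted combined PAR, which is a useful sanity check.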
Factor analysis was used to estimate communality for each risk factor using data for adults aged 25 years and older from the Australian Diabetes, Obesity and Lifestyle Study—Wave 3 (AusDiab). The AusDiab is a population-based national survey of the general (non-institutionalised) Australian population aged 25 years and older.
The total number of dementia cases related to each of the seven risk factors was calculated as the product of their individual PARs and dementia prevalence.
The effect of reducing the relative prevalence of each risk factor by 5%, 10%, 15% or 20% per decade on the future prevalence of dementia in Australia was calculated using published dementia prevalence estimates for Australia [18, 19].
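The compounding of a fixed per-decade relative reduction can be illustrated as follows. This sketch only tracks the remaining fraction of risk-factor prevalence; the study itself recomputes PARs and dementia prevalence from the reduced risk-factor prevalences:

```python
def compounded_reduction(reduction_per_decade, decades):
    """Fraction of the original risk-factor prevalence remaining after
    applying the same relative reduction each decade."""
    return (1.0 - reduction_per_decade) ** decades

# Remaining fraction for a 10% per-decade reduction, projected over
# 2020, 2030, 2040 and 2050 relative to the 2010 baseline:
for decades in range(1, 5):
    print(2010 + 10 * decades, round(compounded_reduction(0.10, decades), 3))
```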
Table 3 presents the results of PAR calculations taking into consideration both independence and non-independence of risk factors. Confidence limits for PAR and the number of attributable cases were calculated using a published substitution method [20].
Table 3 PAR of dementia for each risk factor and number of cases attributable in 2010
Assuming independence of risk factors, we estimated that the seven risk factors examined contribute up to 57.0% of dementia cases in Australia. In order to account for interaction between risk factors, data for those aged 25 years and older from the AusDiab were used to estimate the shared variance for all seven risk factors (presented in Table 2). More specifically, to obtain communality we used STATA version 12 to generate a matrix of tetrachoric correlations and subsequently performed exploratory factor analysis using the correlation matrix as input. Similar to other studies, we used the Kaiser criterion for selecting the number of factors to retain [9]. Accounting for non-independence of risk factors, we estimated that the seven risk factors contributed 48.4% of dementia cases in Australia.
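The communality step can be sketched with a plain eigendecomposition (a simplification of the exploratory factor analysis performed in STATA); the toy correlation matrix below merely stands in for the tetrachoric matrix estimated from the AusDiab data:

```python
import numpy as np

def kaiser_communalities(corr):
    """Retain factors with eigenvalue > 1 (Kaiser criterion) and return
    each variable's communality as the sum of its squared loadings on
    the retained factors."""
    eigvals, eigvecs = np.linalg.eigh(np.asarray(corr, dtype=float))
    # eigh returns eigenvalues in ascending order; reverse to descending
    eigvals, eigvecs = eigvals[::-1], eigvecs[:, ::-1]
    keep = eigvals > 1.0
    loadings = eigvecs[:, keep] * np.sqrt(eigvals[keep])
    return (loadings ** 2).sum(axis=1)

# Toy 3x3 correlation matrix, illustration only:
corr = [[1.0, 0.6, 0.3],
        [0.6, 1.0, 0.2],
        [0.3, 0.2, 1.0]]
print(kaiser_communalities(corr))
```

For a valid correlation matrix each communality lies between 0 and 1; the shared-variance weight 'w' used in the adjusted PAR is then derived from these communalities.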
Effect of risk factor reduction
Table 4 and Fig. 1 show the effect of a 5%, 10%, 15% and 20% per decade reduction in each risk factor on future dementia prevalence estimates.
Table 4 Effect of a 5%, 10%, 15% and 20% per decade reduction in each risk factor on future dementia prevalence (2010–2050)
Fig. 1 Percentage change in dementia cases as a result of a 5%, 10%, 15% and 20% reduction in each risk factor per decade. Estimated reduction in dementia prevalence that could result from a 5–20% per decade reduction in the prevalence of the risk factors under study
Assuming independence of risk factors, 57.0% of dementia cases in Australia could be related to the seven modifiable risk factors under study. Taking into consideration the non-independence of risk factors, we estimated that approximately 48.4% of dementia cases could be attributed to these risk factors. A reduction of between 5 and 20% per decade would reduce future dementia prevalence by between 1.6 and 30.7% from 2020 to 2050. Our findings show that the percentage of cases explained by the risk factors under study is higher than that explained by APOE e4 (the main genetic risk factor for Alzheimer's disease). This highlights the importance of targeting modifiable risk factors in dementia reduction policies and programmes.
Similar to a previously published study which utilised data from the USA, Europe and the UK [14], we found that physical inactivity was related to the largest proportion of dementia cases. PAR estimates from our study and those published for the USA, Europe and the UK can also be compared because both studies examined midlife obesity, physical inactivity, smoking, low educational attainment, diabetes mellitus and midlife hypertension [9]. Although reported for AD cases, the authors of the international study suggest that their PAR estimates can be applied to the most common forms of dementia [9]. Overall, midlife obesity is related to a greater proportion of dementia cases in Australia (17.0%) than in the USA (7.3%), Europe (4.1%) and the UK (6.6%) [9], while physical inactivity is related to a relatively smaller proportion of cases in Australia (17.9%) than in the USA (21.0%), Europe (20.3%) and the UK (21.8%). Notably, a smaller proportion of dementia cases is attributable to smoking in Australia (8.7%) when compared with the USA (10.8%), Europe (13.6%) and the UK (10.6%). The proportion of dementia cases attributable to low educational attainment in the USA (7.3%), the UK (12.2%) and Europe (13.6%) is lower than that of Australia (14.7%). Our PAR estimate for diabetes mellitus is lower than those recorded for USA (4.5%) and Europe (3.1%) but higher than the UK estimate (1.9%). These differences may be predominantly due to the prevalence of these individual risk factors in each country and also to variations in risk factor definitions used between studies. For example, the midlife obesity prevalence for the USA utilised in Norton et al.'s study (13.1%) [14] is lower than our Australian estimate (32.0%) and also lower than the US Centers for Disease Control's 2015 estimate (40.2%) [2]. The contribution of this risk factor to dementia prevalence is therefore likely to be significantly higher than calculated and more in line with our Australian PAR estimate. 
In addition, our estimate for low educational attainment may be higher than that published for other countries because our definition included all those who had a primary and/or secondary school education only (i.e. up to Year 12). Prior studies have used a less inclusive definition for low education (i.e. up to lower secondary schooling). We noted that the small difference between the analysis assuming independence and accounting for non-independence in our study (57.0% vs 48.4%) is in contrast to those published for the USA (52.7% vs 30.6%), Europe (54.0% vs 31.4%) and the UK (52.0% vs 30.0%). While we are unable to fully account for this observed difference, one possible explanation may be that it could be due to the effect of the computed weight 'w', which represents the proportion of the variance shared with other risk factors and which was comparatively higher in our study for all variables. While the assumptions on which the models for the UK, the USA and Australia are based are the same, the comparatively high 'w' value in Australia indicates that communality is low among risk factors and is suggestive of low co-morbidity among the risk factors under study. Further analysis of the primary dataset employed would need to be conducted in order to elucidate the exact cause of this disparity. It should, however, be acknowledged that low co-morbidity may be a result of various policy measures such as tobacco control efforts. Another important factor to be taken into account is the competing risk of death as a result of increased age and co-morbidities of older participants.
Our study builds on the methodology used in a recent Alzheimer's Australia report based on 2014 ABS population projection data [11]. This report calculates PAR estimates assuming that risk factors operate independently. Here, we have accounted for non-independence of risk factors using a population-based sample and utilised compounding reductions each decade in order to determine the effect on future prevalence of disease. In both studies, however, physical activity explained the greatest proportion of dementia cases in the sample (24.8% and 17.9% respectively) [11]. A closer comparison reveals that the prior published estimates were higher for all commonly examined risk factors except midlife obesity, low education attainment and diabetes mellitus (midlife obesity: 13.9% vs 17.0%, physical inactivity: 24.8% vs 17.9%, smoking: 9.4% vs 4.3%, low educational attainment: 7.3% vs 14.7%, diabetes mellitus: 1.9% vs 2.4%, midlife hypertension: 16.3% vs 13.7% and depression: 8.9% vs 8.0%) [11]. These differences may be due to different sources of risk factor prevalence data and behaviour modification, for example smoking cessation. We were, however, unable to compare the combined effect of all examined risk factors because our study was the only one to take these into account.
Most of the effect size estimates used for relative risk in our study differ from those used in previous publications [9, 11]. While it would have been useful to utilise relative risks from Australian cohort studies, such data were not readily available for the risk factors being examined. As such, our relative risk estimates are taken from the World Alzheimer Report 2014 which included more recently published data [21]. The estimates used were lower for smoking and physical inactivity in our study but higher for midlife obesity and low educational attainment. Sensitivity analysis conducted using the relative risks used in Norton et al.'s study show that the proportions of dementia explained by midlife obesity (16.1%) and physical inactivity (31.5%) were higher in Australia than in the USA, Europe and the UK while that of smoking was lower (8.7%) (see Additional file 1). The Australian contribution of low educational attainment (12.4%) was higher than the USA and UK estimates but lower than that in Europe. PAR estimates for diabetes mellitus, midlife hypertension and depression remained unchanged because the same relative risks were used in both studies. Overall, both combined PAR and adjusted combined PAR were higher for our study when the relative risk estimates used in previous studies were utilised (Combined PAR:57.0% vs 64.3% and Adjusted Combined PAR: 48.4% vs 55.7% respectively).
Dementia may be delayed or prevented by targeting modifiable lifestyle factors [11, 22]. For example, Access Economics estimated that a 5-year delay in Alzheimer's disease (the most common form of dementia) onset from 2005 would decrease prevalence by 48.5% in 2040 [12]. Other studies have calculated that any intervention which could delay onset by 5 years could decrease prevalence by between 37.0 and 44.0% [23, 24]. These projection estimates are higher than those reported in our study and notably do not take into account the dynamic interplay between risk factors which have been considered in our study. Although imprecise, our estimates are more realistic and conservative.
The modifiable lifestyle factors considered in our study have also been recognised as risk factors for developing other conditions such as cardiovascular disease and certain cancers, all of which are leading causes of death in Australia [25]. Our findings present a strong case for greater investment in lifestyle interventions in preventing dementia because these have the potential to reap other health and well-being benefits as well. Reducing the prevalence of or delaying the onset of dementia has the potential to lessen the impact of the disease, both financially and on individuals [26]. Delaying dementia onset lessens the average number of years spent living with the disease [12]. Those living with dementia for longer periods tend to require considerably more health services per annum than newly diagnosed individuals and this has substantial public health resource allocation implications [12].
Because physical inactivity was shown to contribute to the greatest proportion of dementia cases, this suggests that targeted interventions aimed at those who are not meeting recommendations may have the effect of reducing dementia prevalence. Policymakers, however, must be cognisant of the fact that no singular government intervention/policy, operating on its own, can directly reduce dementia onset/prevalence and change lifestyle habits [27]. Further research is needed to examine the monetary investment and time needed to reduce risk factor prevalence (especially physical inactivity prevalence) to a level that will result in significant improvement in the overall prevalence of dementia and other chronic diseases. It is also worth noting that while dementia risk reduction has the effect of increasing longevity and delaying onset of disease, it may not necessarily prevent the disease. Previous research has pointed to the longevity paradox—age is the strongest predictor of diseases that affect cognition and risk reduction has the potential to increase life expectancy [28].
To the authors' knowledge, this is the only Australian study to provide estimates of population attributable risks (PARs) and future dementia prevalence taking into consideration non-independence of risk factors. This is also the only Australian study to examine the contribution of such a wide range of modifiable risk factors. The methodology utilised in this article has only been reported on once in the literature and represents a notable extension of the traditional Levin formula for calculating PAR, in that it accounts for non-independence of risk factors [9]. In our calculations, we have utilised the most recent effect size estimates and have used a more realistic estimate of midlife obesity than other published work [14]. Our study therefore makes a valuable contribution to the research.
Limitations of using this method have been presented in a previous study [9]. These include the use of adjusted relative risk estimates obtained from meta-analyses, which may be biased, and the fact that the method itself remains largely untested [9]. Although the methodology is still new, it is thought to provide a more realistic estimate of PAR, and as further studies adopt this adjusted PAR calculation there will be opportunities to compare results across geographic locations and to test its robustness. In interpreting our results, we note that it is beyond the scope of this study to prove that dementia can be prevented or that such major reductions in prevalence (up to 30.7%) are indeed achievable. The effects of risk factor reduction programmes also depend on the stage of the lifecycle they target, and many of our included risk factors focus on midlife; moreover, there is a dynamic interplay among risk factors throughout the life course. In addition, it is possible that our figures overestimate the potential gain of risk reduction because they do not take into account that risk reduction leads to increased longevity, which is itself a risk factor for dementia [28]. We are cognisant of our use of a relatively simple method and, in further work, have considered more informative models that examine the most likely future scenarios and allow each risk factor to follow its own trajectory. For example, educational attainment and smoking are both improving, while the prevalence of diabetes and obesity is increasing, so improvements from the 'base case' are likely to differ across risk factors. Finally, as noted in a prior study, our analyses report associations between risk factors and disease and do not attempt to determine causality; the true link between the risk factors examined and dementia may be accounted for by other risk factors.
Finally, while a substantial proportion of dementia cases was found to be attributable to the risk factors under study, we did not examine the contribution of other risk factors that have been examined in the literature for dementia such as fruit, vegetable, meat, fish and omega-3 intake and midlife serum cholesterol. Further studies are needed that take these into consideration. In addition, there is a need for more research into the effect of risk factor reduction on dementia from population-based studies in order to examine more nuanced issues such as effect size at various stages in the lifecycle and the costs and specific actions needed to have the greatest public health impact. Such data will no doubt prove useful to policymakers.
Assuming that risk factors do not operate independently, approximately 48.4% of dementia cases in Australia can potentially be attributed to the seven modifiable risk factors under study: midlife obesity, physical inactivity, smoking, low educational attainment, diabetes mellitus, midlife hypertension and depression. Any intervention that reduces the prevalence of these factors by 5%, 10%, 15% or 20% per decade can have a significant public health impact, especially with regard to lowering the direct and indirect costs incurred by both governments and those living with the disease. Further research is needed that aims to provide policymakers with a set plan of action for achieving dementia risk reduction goals.
AusDiab:
Australian Diabetes, Obesity and Lifestyle Study
PAR:
Population attributable risk
RR:
Relative risk
World Health Organisation. Dementia Fact Sheet. 2015. http://www.who.int/mediacentre/factsheets/fs362/en/. Accessed 19 Jan 2016.
U.S. Department of Health and Human Services Centers for Disease Control and Prevention. Cognitive Impairment: A Call for Action, Now! 2011. http://www.cdc.gov/aging/pdf/cognitive_impairment/cogimp_poilicy_final.pdf. Accessed 3 Feb 2016.
World Health Organisation. Dementia—Factsheet. 2015. http://www.who.int/mediacentre/factsheets/fs362/en/. Accessed 15 May 2015.
Alzheimer's Disease International. Dementia Statistics. 2013. http://www.alz.co.uk/research/statistics. Accessed 15 May 2015.
Anstey KJ, Mack HA, Cherbuin N. Alcohol consumption as a risk factor for dementia and cognitive decline: meta-analysis of prospective studies. Am J Geriatr Psychiatry. 2009;17(7):542–55.
Morris MC. The role of nutrition in Alzheimer's disease: epidemiological evidence. Eur J Neurol. 2009;16:1–7.
Scarmeas N, et al. Mediterranean diet and mild cognitive impairment. Arch Neurol. 2009;66(2):216–25.
Vickland V, Morris T, Draper B, Low LF, Brodaty H. Modelling the impact of interventions to delay the onset of dementia in Australia. A report for Alzheimer's Australia. 2012.
Norton S, Matthews FE, Barnes DE, Yaffe K, Brayne C. Potential for primary prevention of Alzheimer's disease: an analysis of population-based data. Lancet Neurol. 2014;13(8):788–94.
Brookmeyer R, et al. Forecasting the global burden of Alzheimer's disease. Alzheimers Dement. 2007;3(3):186–91.
Moore B, et al. Reducing the prevalence of Alzheimer's disease: modifiable risk factors or social determinants of health? Sydney: Alzheimer's Australia; 2015.
Access Economics PTY Limited. Delaying the Onset of Alzheimer's Disease: Projections and Issues. Canberra: Alzheimer's Australia; 2004.
Rockhill B, Newman B, Weinberg C. Use and misuse of population attributable fractions. Am J Public Health. 1998;88(1):15–9.
Norton S, et al. Potential for primary prevention of Alzheimer's disease: an analysis of population-based data. Lancet Neurol. 2014;13(8):788–94.
Australian Bureau of Statistics. Australian Health Survey 2011–2013. Canberra, Australia: Australian Bureau of Statistics; 2014.
Levin M. The occurrence of lung cancer in man. Acta-Unio Internationalis Contra Cancrum. 1952;9(3):531–41.
Barnes DE, Yaffe K. The projected effect of risk factor reduction on Alzheimer's disease prevalence. Lancet Neurol. 2011;10(9):819–28.
Access Economics PTY Limited. Dementia across Australia: 2011–2050. 2011. https://fightdementia.org.au/sites/default/files/20111014_Nat_Access_DemAcrossAust.pdf. Accessed 16 May 2015.
Access Economics PTY Limited. Keeping dementia front of mind: incidence and prevalence 2009–2050. 2009. https://fightdementia.org.au/sites/default/files/20090800_Nat__AE_FullKeepDemFrontMind.pdf. Accessed 15 May 2014.
Daly LE. Confidence limits made easy: interval estimation using a substitution method. Am J Epidemiol. 1998;147(8):783–90.
Prince M, Albanese E, Guerchet M. World Alzheimer Report 2014. Alzheimer's Disease International; 2014.
Cosentino S, et al. Physical activity, diet, and risk of Alzheimer disease. JAMA. 2009;302(6):627–37.
Jorm AF, Jolley D. The incidence of dementia: a meta-analysis. Neurology. 1998;51(3):728.
Vickland V, et al. Who pays and who benefits? How different models of shared responsibilities between formal and informal carers influence projections of costs of dementia management. BMC Public Health. 2011;11:793.
Australian Bureau of Statistics. Causes of Death, Australia, 2012. 2014. http://www.abs.gov.au/ausstats/[email protected]/Lookup/3303.0main+features100012012. Accessed 19 Mar 2015.
Alzheimer's Australia. Response to the Productivity Commission's Economic Implications of an Ageing Australia: Draft Report. Canberra, Australia: Alzheimer's Australia; 2005.
Ashby-Mitchell K. The Road to Reducing Dementia Onset and Prevalence—Are diet and physical activity interventions worth investing in? In: Issues Brief. Canberra: Deeble Policy Institute; 2015.
Anstey KJ, et al. The influence of smoking, sedentary lifestyle and obesity on cognitive impairment-free life expectancy. Int J Epidemiol. 2014;43(6):1874–83.
Australia Bureau of Statistics. Australian Health Survey 2011–2013. 2015. http://www.abs.gov.au/ausstats/[email protected]/Lookup/4364.0.55.001Chapter1202011-12. Accessed 7 Jul 2015.
Begg S, et al. The burden of disease and injury in Australia 2003. Canberra: AIHW; 2007.
Hamer M, Chida Y. Physical activity and risk of neurodegenerative disease: a systematic review of prospective evidence. Psychol Med. 2009;39(1):3–11.
Dunstan DW, et al. The Australian Diabetes, Obesity and Lifestyle Study (AusDiab)—methods and response rates. Diabetes Research and Clinical Practice. 2002;57(2):119-129.
Access Economics PTY Limited. Dementia Estimates and Projections: Queensland and its Regions. Canberra: Alzheimer's Australia; 2007.
The authors acknowledge support from the NHMRC Dementia Collaborative Research Centres. The AusDiab, co-coordinated by the Baker IDI Heart and Diabetes Institute, gratefully acknowledges the support and assistance given by K Anstey, B Atkins, B Balkau, E Barr, A Cameron, S Chadban, M de Courten, D Dunstan, A Kavanagh, D Magliano, S Murray, N Owen, K Polkinghorne, J Shaw, T Welborn, P Zimmet and all of the study participants.
Also, for funding or logistical support, the authors are grateful to the National Health and Medical Research Council (NHMRC grants 233200 and 1007544), Australian Government Department of Health and Ageing, Abbott Australasia Pty Ltd, Alphapharm Pty Ltd, Amgen Australia, AstraZeneca, Bristol-Myers Squibb, City Health Centre-Diabetes Service-Canberra, Department of Health and Community Services—Northern Territory, Department of Health and Human Services—Tasmania, Department of Health—New South Wales, Department of Health—Western Australia, Department of Health—South Australia, Department of Human Services—Victoria, Diabetes Australia, Diabetes Australia Northern Territory, Eli Lilly Australia, Estate of the Late Edward Wilson, GlaxoSmithKline, Jack Brockhoff Foundation, Janssen-Cilag, Kidney Health Australia, Marian & FH Flack Trust, Menzies Research Institute, Merck Sharp & Dohme, Novartis Pharmaceuticals, Novo Nordisk Pharmaceuticals, Pfizer Pty Ltd, Pratt Foundation, Queensland Health, Roche Diagnostics Australia, Royal Prince Alfred Hospital, Sydney, Sanofi Aventis, sanofi-synthelabo and the Victorian Government's OIS Program.
This work was supported by the Australian Research Council Centre of Excellence in Population Ageing Research (Project number CE110001029), the Australian Research Council (Fellowship #120100227) and the National Health and Medical Research Council (Fellowship #1002560 and APP1079438). RB is funded by the Australian Research Council Centre of Excellence in Population Ageing Research (Project #CE110001029). KJA is funded by the National Health and Medical Research Council (Fellowship #1002560).
AusDiab data which support the findings of this study are not publicly available but access can be requested from Baker IDI Heart and Diabetes Institute using the following link: https://www.bakeridi.edu.au/-/media/Documents/impact/AusDiab/AusDiab-data-access-form.ashx?la=en.
KA-M was responsible for conceptualising and designing the study, conducting analyses, interpreting the data and preparing the manuscript. RB assisted in data interpretation and clarifying statistical methods. JS interpreted the data and revised the manuscript for intellectual content. KJA interpreted the data and revised the manuscript for intellectual content. All authors read and approved the final manuscript.
The AusDiab was approved by the Ethics Committee of the International Diabetes Institute in Melbourne. Written informed consent was obtained from all participants at baseline and follow-up.
Centre for Research on Ageing, Health and Wellbeing, The Australian National University, Florey, Building 54, Mills Road, Acton, ACT 2601, Australia
Kimberly Ashby-Mitchell, Richard Burns & Kaarin J. Anstey
Baker IDI Heart and Diabetes Institute, 75 Commercial Road, Melbourne, VIC, 3004, Australia
Kimberly Ashby-Mitchell
Kaarin J. Anstey
Correspondence to Kimberly Ashby-Mitchell.
Results of Sensitivity Analysis – PAR of dementia for each risk factor and number of cases attributable in 2010. Table showing the PAR estimates obtained for Australia if the relative risks used in the Norton et al. (2014) paper had been utilised in the present study. (DOCX 14 kb)
Ashby-Mitchell, K., Burns, R., Shaw, J. et al. Proportion of dementia in Australia explained by common modifiable risk factors. Alz Res Therapy 9, 11 (2017). https://doi.org/10.1186/s13195-017-0238-x | CommonCrawl |
Earth, Planets and Space
Express Letter
Data assimilation with dispersive tsunami model: a test for the Nankai Trough
Yuchen Wang (ORCID: orcid.org/0000-0002-0262-1869)1,2,
Kenji Satake1,
Takuto Maeda1,3 &
Aditya Riadi Gusman1,4
Earth, Planets and Space volume 70, Article number: 131 (2018)
We present a method of tsunami data assimilation using a linear dispersive model in order to provide an accurate tsunami early warning. To speed up the assimilation process, we use the Green's function-based tsunami data assimilation, in which the Green's functions are calculated in advance with linear dispersive tsunami propagation models. We demonstrate a test case in the Nankai Trough off southwest Japan, with a source model similar to the main shock of the 2004 off the Kii Peninsula earthquake (M7.4) which generated tsunamis with dispersive characteristics. We show that assimilation of existing ocean bottom pressure gauge data can rapidly forecast the tsunami arrival time and the maximum height of the first tsunami peak along the coast of Shikoku and Kyushu Islands. Both the linear long-wave model and the linear dispersive model can accurately forecast the tsunami height, but the linear dispersive model can predict the tsunami arrival time more accurately for the tested earthquake.
Tsunami data assimilation is a promising approach for tsunami forecasting. It predicts the tsunami waveform by assimilating offshore observed data into a numerical simulation, without calculating the initial sea surface height at the source (Maeda et al. 2015). An optimum interpolation method (Kalnay 2003) is adopted in data assimilation to compute the tsunami wavefield and to forecast the tsunami arrival time and maximum amplitude along the coast. This method has been successfully applied to observed tsunami waveforms of the 2012 Haida Gwaii Earthquake (Gusman et al. 2016) using bottom pressure gauge array data in the Cascadia subduction zone and a hypothetical tsunami in the Tohoku region (Maeda et al. 2015) using synthetic data based on the Seafloor Observation Network for Earthquakes and Tsunamis (S-net). If the observations are located in source regions, a new assimilation method developed by Tanioka (2018) can be used to solve the problem of non-hydrostatic effects and reproduce tsunami height distribution accurately.
In previous applications of the tsunami data assimilation method, the linear long-wave (LLW) tsunami propagation model was used (Maeda et al. 2015; Gusman et al. 2016; Wang et al. 2017). The LLW model is based on the long-wave approximation (Satake 2015). When the horizontal scale of motion, or the wavelength of the tsunami, is much larger than the water depth, the vertical acceleration of water is negligible compared with gravity. The horizontal motion of the water mass is then, to a good approximation, uniform from the ocean bottom to the surface, and the phase velocity depends only on the depth of the ocean.
However, the long-wave approximation breaks down when the wavelength of the water height distribution is not much greater than the water depth (Saito et al. 2010). For example, if an outer-rise earthquake fault has a large dip angle, the initial sea surface distribution will be enriched in short-wavelength components, which could not be simulated properly with the LLW model (Saito and Furumura 2009; Zhou et al. 2012). Moreover, landslide tsunamis have smaller horizontal scales than tsunamis generated by earthquakes, and they present more evident dispersion effects (Marchenko et al. 2012). The LLW model may forecast the arrival time of a tsunami peak to be earlier than the real tsunami and may overestimate the maximum height of the tsunami waveform (Watada et al. 2014; Gusman et al. 2015; Ho et al. 2017). Therefore, a dispersive (DSP) tsunami model based on the Boussinesq equations should be used to compute tsunami waveforms with dispersive characteristics. The DSP tsunami model has been successfully used in forward tsunami simulation (Furumura and Saito 2009; Zhou et al. 2012) and in tsunami waveform inversion analysis to estimate the initial water height distribution (Saito et al. 2010, 2011).
Until now, there has been no application of the DSP model in tsunami data assimilation. Because the tsunami data assimilation method proposed by Maeda et al. (2015) calculates the wavefield at every time step, an application of the DSP model with this method would lead to an extremely high computational cost and therefore fail to provide timely tsunami early warnings. To speed up the data assimilation process, Green's function-based tsunami data assimilation (GFTDA) was proposed to reduce the computational time for tsunami early warning (Wang et al. 2017). The Green's functions for data assimilation can be calculated and stored in advance. During the assimilation process, the waveforms at points of interest (PoIs) can be calculated by a simple matrix manipulation. The GFTDA enables us to conduct tsunami data assimilation with a linear dispersive model.
Great earthquakes have recurred along the megathrust fault between the Philippine Sea Plate and the Eurasian Plate along the Nankai Trough (Saito et al. 2010; Furumura et al. 2011). Such great earthquakes include the 1944 Tonankai (M7.9) and 1946 Nankai (M8.0) earthquakes. In addition, a large (M 7.4) earthquake occurred off the Kii Peninsula within the subducting Philippine Sea Plate in 2004. The tsunami generated by the 2004 off the Kii Peninsula earthquake exhibited evident dispersion characteristics at offshore stations such as MPG1 and MPG2 (Saito et al. 2010). One of the important features of tsunami generation is that the dispersive waves have a strong directional dependence with respect to the fault strike (Saito et al. 2010). Therefore, the dispersive tsunami developed efficiently toward the direction of the above offshore stations.
In order to monitor the earthquakes and tsunamis in the Nankai region, Japan Agency for Marine-Earth Science and Technology (JAMSTEC) set up the Dense Oceanfloor Network System for Earthquakes and Tsunamis (DONET). It is an elaborate system of cables and various devices and consists of 52 observation points in DONET1 and DONET2 (Kaneda 2010). Such a dense observation network enables us to forecast a tsunami along the Nankai coast by the method of tsunami data assimilation.
In this paper, we propose to use GFTDA to forecast the tsunami based on the linear DSP model. We compute synthetic tsunami observations in the Nankai region using the source parameters of the 2004 earthquake and forecast the tsunami along the coast of Shikoku Island and Kyushu Island. We also compare the forecasted results of arrival time and maximum height by assimilation using the LLW and DSP models.
Linear DSP tsunami model
Numerical simulations of 2-D DSP tsunami equations have been conducted on high-performance computers and personal computers to simulate dispersive tsunami waves (Saito et al. 2010). The equations are derived from the continuity equation and the equation of motion for water waves. In Cartesian coordinate, they are:
$$\begin{aligned} & \frac{\partial h}{\partial t} = - \frac{\partial M}{\partial x} - \frac{\partial N}{\partial y}, \\ & \frac{\partial M}{\partial t} + gd\frac{\partial h}{\partial x} = \frac{1}{3}d^{2} \frac{{\partial^{2} }}{\partial x\partial t}\left( {\frac{\partial M}{\partial x} + \frac{\partial N}{\partial y}} \right), \\ & \frac{\partial N}{\partial t} + gd\frac{\partial h}{\partial y} = \frac{1}{3}d^{2} \frac{{\partial^{2} }}{\partial y\partial t}\left( {\frac{\partial M}{\partial x} + \frac{\partial N}{\partial y}} \right), \\ \end{aligned}$$
where M and N are the x and y components of flux, or velocity integrated along the vertical direction from the sea bottom to the sea surface, h is the tsunami height at the sea surface, d is the water depth, and g is the gravitational acceleration.
The right-hand sides of the second and third equations are linear dispersive terms, which cause the dispersion effect (Saito and Furumura 2009; Saito et al. 2010; Maeda et al. 2016). In the LLW model, we neglect the dispersion terms, so the right-hand sides of these two equations become zero. Although the linear DSP model is more complicated, the linearity still enables us to use GFTDA in our research.
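For a plane wave \(e^{i(kx-\omega t)}\), the linear DSP equations above give the phase speed \(c(k) = \sqrt{gd}/\sqrt{1 + (kd)^2/3}\), whereas the LLW model gives the constant \(c = \sqrt{gd}\). The following minimal sketch (not part of the original study; the depth and wavelengths are illustrative values) shows numerically how short-wavelength components travel slower in the dispersive model, which is why the LLW model tends to forecast arrivals too early:

```python
import numpy as np

def phase_speed_llw(d, g=9.81):
    """Non-dispersive long-wave phase speed c = sqrt(g*d)."""
    return np.sqrt(g * d)

def phase_speed_boussinesq(k, d, g=9.81):
    """Phase speed of the linear Boussinesq (dispersive) model:
    c(k) = sqrt(g*d) / sqrt(1 + (k*d)**2 / 3)."""
    return np.sqrt(g * d) / np.sqrt(1.0 + (k * d) ** 2 / 3.0)

d = 4000.0  # illustrative water depth (m)
for wavelength in (400e3, 40e3, 20e3):
    k = 2.0 * np.pi / wavelength
    print(wavelength / 1e3, "km:",
          phase_speed_llw(d), "m/s (LLW) vs",
          phase_speed_boussinesq(k, d), "m/s (DSP)")
```

For a 400-km wave the two speeds are nearly identical, while for a 20-km wave the dispersive speed is markedly lower, consistent with the arrival-time differences discussed later in the paper.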
Optimal interpolation method
We adopt the optimal interpolation method for tsunami data assimilation, as in the previous studies of Kalnay (2003) and Maeda et al. (2015). The details of the method are described in these two papers. In this method, we assume the total number of computational grid points is L, and the total observation number is m. The tsunami wavefield at the nth time step is represented as a \(\left( {3L \times 1} \right)\) column vector \(\varvec{x}_{\varvec{n}} = \left( {h\left( {n\Delta t,x,y} \right),M\left( {n\Delta t,x,y} \right),N\left( {n\Delta t,x,y} \right)} \right)^{\text{T}}\).
The data assimilation process consists of two steps: a propagation step and an assimilation step. The propagation step is expressed as
$$\varvec{x}_{n}^{\text{f}} = \varvec{Fx}_{n - 1}^{a} ,$$
which means that at every time step, the forecasted tsunami wavefield \(\varvec{x}_{n}^{\text{f}}\) is simulated by solving the tsunami propagation equations using the assimilated wavefield in the last time step \(\varvec{x}_{n - 1}^{a}\). Here, \(\varvec{F}\) refers to the tsunami propagation matrix (\(3L \times 3L\)) that corresponds to the tsunami propagation model. The assimilation step is expressed as
$$\varvec{x}_{n}^{a} = \varvec{x}_{n}^{\text{f}} + \varvec{W}\left( {\varvec{y}_{n} - \varvec{Hx}_{n}^{\text{f}} } \right),$$
where the observation matrix \(\varvec{H}\) (\(m \times 3L\)) is a sparse matrix that extracts the forecasted tsunami heights at the m points from the entire simulated tsunami wavefield. The residual is calculated by comparing it with the real observed tsunami height \(\varvec{y}_{n}\). The weight matrix \(\varvec{W}\) (\(3L \times m\)) is an important controlling factor for the quality of tsunami assimilation (Maeda et al. 2015). It is calculated by minimizing the covariance matrix as a solution of the linear system
$$\varvec{W}\left( {\varvec{R} + \varvec{HP}^{\text{f}} \varvec{H}^{\text{T}} } \right) = \varvec{P}^{\text{f}} \varvec{H}^{\text{T}} ,$$
where \(\varvec{P}^{\text{f}} = \left\langle {\varepsilon^{\text{f}} \varepsilon^{\text{fT}} } \right\rangle\) and \(\varvec{R} = \left\langle {\varepsilon^{\text{O}} \varepsilon^{\text{OT}} } \right\rangle\) are the covariance matrices of the forward numerical simulation and the observations, respectively. Here, ɛf represents the errors in numerical forecasts between two computational grids and ɛO represents the observational errors (Kalnay 2003; Maeda et al. 2015). The weight matrix is then multiplied by the residual to bring the assimilated tsunami wavefield closer to the observed wavefield. By alternatively repeating the propagation and assimilation steps, the tsunami wavefield is assimilated.
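The propagation and assimilation steps can be sketched numerically. The following toy example (with a hypothetical Gaussian forecast covariance, a diagonal observation covariance, and arbitrary station positions, none of which are from the original study) solves \(\varvec{W}(\varvec{R} + \varvec{HP}^{\text{f}}\varvec{H}^{\text{T}}) = \varvec{P}^{\text{f}}\varvec{H}^{\text{T}}\) for the weight matrix and applies one assimilation step \(\varvec{x}^{a} = \varvec{x}^{\text{f}} + \varvec{W}(\varvec{y} - \varvec{Hx}^{\text{f}})\):

```python
import numpy as np

L = 6   # grid points (heights only, for illustration)
m = 2   # observation stations

# Hypothetical covariances: P^f from a Gaussian spatial correlation, R diagonal.
x_grid = np.arange(L, dtype=float)
Pf = np.exp(-0.5 * (x_grid[:, None] - x_grid[None, :]) ** 2 / 2.0 ** 2)
R = 0.01 * np.eye(m)

# H extracts the observed grid points (stations at grid indices 1 and 4).
H = np.zeros((m, L))
H[0, 1] = H[1, 4] = 1.0

# Weight matrix: solve W (R + H Pf H^T) = Pf H^T for W.
A = R + H @ Pf @ H.T
W = np.linalg.solve(A.T, (Pf @ H.T).T).T

# One assimilation step: x^a = x^f + W (y - H x^f).
x_f = np.zeros(L)             # forecast (flat sea surface)
y = np.array([0.5, 0.3])      # observed heights at the two stations
x_a = x_f + W @ (y - H @ x_f)
print(x_a)
```

Because the observation errors in R are small, the assimilated heights at the observed grid points are pulled close to the observations, and the Gaussian covariance spreads the correction to neighboring grid points.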
Green's function-based tsunami data assimilation (GFTDA)
In order to speed up the data assimilation process and use the linear DSP model, we use the GFTDA proposed by Wang et al. (2017). The Green's function \(G_{i,j}\) is defined as the waveform at the jth grid point resulting from the propagation of the ith station's assimilation response. We compute the Green's functions between the fixed observation stations and PoIs with both the LLW model and the linear DSP model. The computation of Green's functions might be quite time-consuming, but it is done in advance and therefore does not affect the efficiency of the data assimilation process. During the assimilation process, we can directly synthesize the forecasted tsunami waveforms by multiplying the residuals by the corresponding Green's functions.
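One plausible reading of this synthesis step, sketched below with hypothetical arrays (the real GFTDA of Wang et al. 2017 operates on the full station network and stored Green's function tables), is a sum over stations of time-domain convolutions of the assimilation residuals with the precomputed Green's functions:

```python
import numpy as np

def gftda_forecast(residuals, greens):
    """Sketch of GFTDA waveform synthesis at one PoI.

    residuals : (m, T) assimilation residuals at the m stations
    greens    : (m, T) Green's function of each station at the PoI
    Returns a length-T forecast: the sum over stations of the
    convolution of each residual with its Green's function.
    """
    m, T = residuals.shape
    out = np.zeros(T)
    for i in range(m):
        out += np.convolve(residuals[i], greens[i])[:T]
    return out

# Toy usage with one station: an impulse residual at t=0 simply
# reproduces that station's Green's function at the PoI.
g = np.array([[0.0, 1.0, 0.5, 0.25]])
r = np.array([[1.0, 0.0, 0.0, 0.0]])
print(gftda_forecast(r, g))
```

Since the Green's functions are precomputed, the online cost is only this small matrix/convolution work, which is why the forecast is nearly instantaneous once the residuals are available.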
Green's function
The bathymetry and topography dataset is derived from the General Bathymetric Chart of the Oceans (GEBCO). The grid data released in 2014 (GEBCO_2014) with a grid spacing of 30 arc s (Weatherall et al. 2015) are used.
The finite difference method (FDM) with an implicit scheme is used for the numerical simulation. We use a grid spacing of 30 arc s and a time step of 1 s. The target area is 30°N–35°N, 130°E–140°E, so the total number of grid points is 600 × 1200 = 720,000.
The JAGURS tsunami code (Baba et al. 2015) is used to compute the Green's functions between the observation points and PoIs and also between different observation points. We use 15 observation stations and nine PoIs, as described in the next section. Hence, the total number of Green's functions is 15 × (9 + 15) = 360. The parameters for optimal interpolation are similar to those used by Maeda et al. (2015).
Application and results
Observation stations and forecast points
We applied the proposed method to simulated data for the Nankai Trough. DONET has 13 science nodes, and each node is linked with several bottom pressure observation points. In order to build an evenly distributed observation network for data assimilation, we take one point for each node except for Node C, for which we take two observation points. In addition, we use the submarine cable stations PG1 and PG2, also maintained by JAMSTEC. In total, 15 observation stations are used for data assimilation (Fig. 1). However, because DONET1 was completed in 2011 and DONET2 started operation in 2015, we lack a real tsunami record for the 2004 off the Kii Peninsula earthquake. To assess the ability of our data assimilation approach, we use synthetic tsunami data from the 2004 earthquake source model. In real practice, tsunami data could be obtained by removing the tide signal and high-frequency seismic signal from the observation record. The method of Tanioka (2018) can also be applied to the stations in or around the source area.
The observation network for data assimilation and near-shore PoIs. The observation points contain 13 DONET stations, PG1 and PG2, which are marked with red circles. Nine PoIs near Shikoku Island and Kyushu Island are marked with blue triangles. The focal mechanism is plotted according to Yamanaka (2004)
The nine points of interest (PoIs) are selected near population centers on the coast of Shikoku Island and Kyushu Island (Fig. 1), which are under the potential threat of tsunami disaster. They are used to compare simulated waveforms and waveforms predicted by data assimilation.
Source models
In our numerical simulation, we use the mainshock source model of the 2004 off the Kii Peninsula earthquake. The fault parameters are set according to the unpublished results of Yamanaka (data available at http://wwweic.eri.u-tokyo.ac.jp/sanchu/Seismo_Note/2004/EIC153.html) by analysis of the teleseismic body waves. The epicenter is 137.142°E, 33.143°N, and the depth is 10.0 km. The fault direction is perpendicular to the trough axis with a strike of 135°. The dip angle is 40°, and the rake angle is 123°. The length and width of the rectangular fault are 50.0 km and 30.0 km, respectively. The fault slip is 6.5 m, which is consistent with the magnitude of the main shock (M7.4).
We use Okada's model to calculate the initial sea surface elevation in an elastic half-space (Okada 1985), which can be used as the initial condition for numerical simulation. Here, we only consider the vertical displacement. If the tsunami source is on a steep seafloor and the horizontal motion is much larger than the vertical motion, horizontal displacement will become important for tsunami generation (Tanioka and Satake 1996).
Assimilation setting
We use the JAGURS tsunami code to calculate the tsunami propagation. We consider only linear terms and assume a reflective boundary at the coastline. To make the tsunami propagation closer to the real situation, we apply the linear DSP model, similar to that used to compute the dispersive Green's functions.
We set the earthquake origin time as t = 0. The observation stations of the assimilation network are not far from the epicenter of the Kii Peninsula earthquakes. Hence, the tsunami arrives at the nearby stations of KMC09, KMC21, and KMD13 soon after the earthquake (Fig. 2). We set the data assimilation process to begin at t = 0. The assimilation time window is defined as the period during which we use synthetic observations for assimilation. During the assimilation time window, the waveforms at PoIs are synthesized with the Green's functions and the simulated observations.
Distribution of 15 observation points and the waveforms of synthetic tsunamis at each point. The tsunami arrives at the stations of KMC09, KMC21, and KMD13 soon after the earthquake. The data assimilation process begins at the time of earthquake
We apply the GFTDA algorithm with both the LLW and linear DSP models. The length of the time windows is set to range from 2 to 24 min, with an interval of 2 min. The calculation time for the data assimilation process is less than 10 s, almost negligible on the EIC computer system at the Earthquake Research Institute, the University of Tokyo.
Waveform comparison
Figure 3 compares the simulated waveforms with the waveforms predicted using an assimilation time window of 20 min. The waveforms predicted with both the LLW model and the linear DSP model agree well with the simulated waveforms, which confirms the validity of data assimilation based on the observation network of DONET, PG1 and PG2. For forecasting the amplitude of the first tsunami peak, the LLW model and the linear DSP model perform similarly. At the coastal PoIs of Murotomisaki, Awaji, and Abuyuki, the maximum amplitude forecasted by the linear DSP model tends to be closer to the simulation than that forecasted by the LLW model, though the differences are quite small. The main difference between the two models lies in the arrival time. At almost every PoI, especially Tosashimizu and Murotomisaki, the waveform predicted with the linear DSP model has a more accurate arrival time of the first tsunami peak.
Simulated waveforms (black lines) and waveforms forecasted by data assimilation at nine near-shore PoIs. The water depth of each PoI is provided. The forecasted waveforms are calculated by GFTDA using the LLW model (blue lines) and the linear DSP model (red lines). The time window of data assimilation is 20 min
Quantitative evaluation
To quantitatively evaluate the performance of the two models in data assimilation, we calculate the tsunami forecast accuracy (Gusman et al. 2016) for different assimilation time windows. It is based on the geometric mean ratio (K) of the observed (\(O_i\)) and simulated (\(S_i\)) maximum amplitudes of the first tsunami peak for the ith station (Aida 1978),
$$\log \left( K \right) = \frac{1}{N}\mathop \sum \limits_{i = 1}^{N} { \log }\left( {\frac{{O_{i} }}{{S_{i} }}} \right),$$
$${\text{Accuracy}}\,\left( \% \right) = \frac{1}{K} \times 100\% \,\left( {K \ge 1} \right) \;{\text{or}}\; K \times 100\% \,(K < 1),$$
where N is the number of stations. Generally, a high accuracy value could indicate accurate forecasting of the tsunami data assimilation.
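As a minimal numerical sketch (the peak amplitudes below are hypothetical, not values from this study), the two formulas above can be implemented directly:

```python
import numpy as np

def forecast_accuracy(observed, simulated):
    """Aida (1978) geometric-mean ratio K and the accuracy measure of
    Gusman et al. (2016): accuracy = 100/K if K >= 1, else 100*K."""
    observed = np.asarray(observed, dtype=float)
    simulated = np.asarray(simulated, dtype=float)
    log_k = np.mean(np.log10(observed / simulated))
    k = 10.0 ** log_k
    return (100.0 / k) if k >= 1.0 else (100.0 * k)

# Hypothetical first-peak amplitudes (m) at three PoIs:
obs = [0.50, 0.80, 0.30]
sim = [0.45, 0.85, 0.33]
print(forecast_accuracy(obs, sim))
```

Because K is a geometric mean of amplitude ratios, over- and underestimates partially cancel, and an accuracy of 100% corresponds to K = 1, i.e., forecasts matching the observations on average.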
The accuracy for various assimilation time windows is plotted in Fig. 4a. The shapes of the forecast accuracy curves for the LLW model and the linear DSP model are quite similar. At the beginning, when the time window is only 2 min, the accuracy of both models is very low: the first tsunami peak has not yet passed any observation station of our network, so the data length used for assimilation is too short to provide accurate forecasts. Then, at 4 min, there is a sharp increase in the accuracy of both models, with the forecast accuracy exceeding 85% (Fig. 4). After that, the forecast accuracy varies slightly but exhibits a rising trend in general. Although there is not a large difference in forecast accuracy between the LLW model and the linear DSP model, the forecast accuracy of the LLW model is slightly higher. After the time window of 20 min, the first tsunami peak has already passed all of the observation stations, and the forecast accuracy saturates. Here, the accuracy values of the two models are also similar, but the linear DSP model performs slightly better.
Forecast accuracy a and time lag b of the two models for various assimilation time windows. The forecast accuracy is used to evaluate the forecasted maximum amplitude of the first tsunami peak (Aida 1978). The time lag is used to examine the forecasted arrival time (Tsushima et al. 2012)
The difference in arrival time of the first tsunami peak between the two models is more evident. To quantitatively analyze the accuracy of the forecasted arrival time, we use the method of calculating time lag proposed by Tsushima et al. (2012). The time lag of the ith coastal station is defined as:
$$\Delta T_{i} = t_{i}^{S} - t_{i}^{O} ,$$
where \(t_i^O\) is the arrival time of the first tsunami peak observed at the ith station and \(t_i^S\) is the arrival time forecasted by data assimilation. A negative time lag indicates that the forecasted arrival time is earlier than the observation. A small absolute value of the time lag indicates accurate forecasting of the arrival time. We calculate the time lag at all PoIs and take the average value.
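The averaged time-lag measure can be sketched as follows (the arrival times are hypothetical, chosen only to illustrate the sign convention):

```python
import numpy as np

def mean_time_lag(t_forecast, t_observed):
    """Average time lag of Tsushima et al. (2012):
    Delta T_i = t_i^S - t_i^O, averaged over the PoIs.
    Negative values mean the forecast arrival is too early."""
    return float(np.mean(np.asarray(t_forecast) - np.asarray(t_observed)))

# Hypothetical first-peak arrival times (s) at three PoIs:
t_s = [2100.0, 2400.0, 2650.0]   # forecast
t_o = [2130.0, 2425.0, 2685.0]   # simulated "observation"
print(mean_time_lag(t_s, t_o))   # negative: forecast is too early
```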
Figure 4b shows that the time lags calculated by both models are clearly negative, which means that both the LLW model and the linear DSP model forecast the tsunami arrival times earlier than observed. Moreover, as the assimilation time window increases, the time lag becomes closer to zero. The variation of the time lag follows the same tendency as that of the forecast accuracy. As more observed data are used in data assimilation, the absolute value of the time lag decreases quickly from the 2-min to the 4-min time window. Then, it decreases slowly. After the time window of 20 min, the variation finally becomes very small. It is important to note that the difference between the LLW model and the linear DSP model is noticeable in the figure. The linear DSP model leads to a much smaller time lag than the LLW model, indicating that the linear DSP model performs better in forecasting the tsunami arrival time.
In our application, we use two linear models, the LLW model and the linear DSP model. The accuracy of the forecasted maximum amplitude and arrival time depends on the length of the assimilation time window. It is important to choose a proper assimilation time window for both models. In our application to the 2004 off the Kii Peninsula earthquake, a time window of 14 min is a practical choice in order to produce a reliable and early tsunami forecast. At this time, the forecast accuracy of both the LLW and linear DSP models exceeds 96%, and the average time lag of tsunami arrival time is − 58.1 s (LLW) and − 25.7 s (DSP). Figure 2 shows that the first tsunami peak reaches the nearest PoI (Murotomisaki) around 35 min after the earthquake and reaches other PoIs more than 40 min after the earthquake. Using the tsunami height and arrival time of PoIs as input parameters, the shoreline tsunami height or inundation forecasts will be possible by applying the tsunami run-up models (Liu et al. 2009).
Because the calculation time in the GFTDA process is negligible, the tsunami forecast can be made 14 min after the earthquake based on the data assimilation. On the other hand, it is not practical to use a time window of more than 20 min. Though the forecasting accuracy becomes even higher and the time lag becomes even smaller, the tsunami forecast may not be useful if it is made shortly before the tsunami arrival.
The results also suggest that the tsunami propagation model affects the accuracy of tsunami forecasting by data assimilation. For the maximum amplitude of the first tsunami peak, the two models perform similarly. However, with respect to the arrival time, the linear DSP model has a better accuracy than the LLW model: the average time lag calculated by the linear DSP model is evidently smaller. For individual stations, if the station is located near Shikoku Island, which is close to the observation network, the difference in time lag is not so large. However, if the station is located near Kyushu Island, approximately 200 km from the observation network, the difference in time lag becomes noticeable. This is caused by the limitation of the long-wave approximation, which may not be evident when the wavelength of the tsunami is sufficiently long. For the Kii Peninsula earthquake considered in this paper, however, the large dip angle results in short-wavelength components of the tsunami, and the approximation overestimates the velocity of tsunami propagation (Saito et al. 2010). Thus, the arrival time forecasted by the LLW model is considerably earlier than the DSP model result, especially for PoIs far from the observation network, because a longer propagation distance increases such errors.
Compared with the previous data assimilation method, the simple assimilation process of GFTDA enables us to apply more complicated models, apart from the LLW model. As long as linearity is assumed, the GFTDA is proven to be mathematically equivalent to the previous data assimilation approach (Wang et al. 2017). For potential tsunamis with more dispersion characteristics, or tsunamis that propagate over a longer distance from the observation network, the difference between dispersive and non-dispersive models is more important. Therefore, the linear DSP model would be able to make more improvements in forecasting the arrival time accurately and to mitigate the disasters of potential destructive tsunamis generated by outer-rise earthquakes.
DSP:
dispersive
DONET:
Dense Oceanfloor Network System for Earthquakes and Tsunamis
EIC:
Earthquake Information Center
FDM:
finite difference method
GFTDA:
Green's function-based tsunami data assimilation
JAMSTEC:
Japan Agency for Marine-Earth Science and Technology
LLW:
linear long-wave
PoI:
point of interest
S-net:
Seafloor Observation Network for Earthquakes and Tsunamis
Aida I (1978) Reliability of a tsunami source model derived from fault parameter. J Phys Earth 26(1):57–73
Baba T, Takahashi N, Kaneda Y et al (2015) Parallel implementation of dispersive tsunami wave modeling with a nesting algorithm for the 2011 Tohoku tsunami. Pure Appl Geophys 172(12):3455–3472. https://doi.org/10.1007/s00024-015-1049-2
Furumura T, Saito T (2009) Integrated ground motion and tsunami simulation for the 1944 Tonankai earthquake using high-performance supercomputers. J Disaster Res 4:118–126
Furumura T, Imai K, Maeda T (2011) A revised tsunami source model for the 1707 Hoei earthquake and simulation of tsunami inundation of Ryujin Lake, Kyushu, Japan. J Geophys Res 116:B02308. https://doi.org/10.1029/2010JB007918
Gusman AR, Murotani S, Satake K et al (2015) Fault slip distribution of the 2014 Iquique, Chile, earthquake estimated from ocean-wide tsunami waveforms and GPS data. Geophys Res Lett 42:1053–1060. https://doi.org/10.1002/2014GL062604
Gusman AR, Sheehan AF, Satake K et al (2016) Tsunami data assimilation of high-density offshore pressure gauges off Cascade from the 2012 Haida Gwaii earthquake. Geophys Res Lett 43(9):4189–4196. https://doi.org/10.1002/2016GL068368
Ho TC, Satake K, Watada S (2017) Improved phase corrections for transoceanic tsunami data in spatial and temporal source estimation: application to the 2011 Tohoku earthquake. J Geophys Res Solid Earth 122:10155–10175. https://doi.org/10.1002/2017JB015070
Kalnay E (2003) Atmospheric modeling, data assimilation and predictability. Cambridge University Press, Cambridge
Kaneda Y (2010) The advanced ocean floor real time monitoring system for mega thrust earthquakes and tsunamis-application of DONET and DONET2 data to seismological research and disaster mitigation. OCEANS 2010. https://doi.org/10.1109/OCEANS.2010.5664309
Liu PLF, Wang X, Salisbury AJ (2009) Tsunami hazard and early warning system in South China Sea. J Asian Earth Sci 36:2–12. https://doi.org/10.1016/j.jseaes.2008.12.010
Maeda T, Obara K, Shinohara M et al (2015) Successive estimation of a tsunami wavefield without earthquake source data: a data assimilation approach toward real-time tsunami forecasting. Geophys Res Lett 42:7923–7932. https://doi.org/10.1002/2015GL065588
Maeda T, Tsushima H, Furumura T (2016) An effective absorbing boundary condition for linear long-wave and linear dispersive-wave tsunami simulations. Earth Planets Space 68:63. https://doi.org/10.1186/s40623-016-0436-y
Marchenko AV, Morozov EG, Muzylev SV (2012) A tsunami wave recorded near a glacier front. Nat Hazard Earth Syst Sci 12:415–419. https://doi.org/10.5194/nhess-12-415-2012
Okada Y (1985) Surface deformation due to shear and tensile faults in a half-space. Bull Seismol Soc Am 75(4):1135–1154
Saito T, Furumura T (2009) Three-dimensional simulation of tsunami generation and propagation: application to intraplate events. J Geophys Res 114:B02307. https://doi.org/10.1029/2007JB005523
Saito T, Satake K, Furumura T (2010) Tsunami waveform inversion including dispersive waves: the 2004 earthquake off Kii Peninsula, Japan. J Geophys Res 115:B06363. https://doi.org/10.1029/2009JB006884
Saito T, Ito Y, Inazu D, Hino R (2011) Tsunami source of the 2011 Tohoku-Oki earthquake, Japan: inversion analysis based on dispersive tsunami simulations. Geophys Res Lett 38:L00G19. https://doi.org/10.1029/2011gl049089
Satake K (2015) Tsunamis. In: Schubert G (ed) Treatise on geophysics, vol 4, 2nd edn. Elsevier, Oxford, pp 477–504
Tanioka Y (2018) Tsunami simulation method assimilating ocean bottom pressure data near a tsunami source region. Pure Appl Geophys 175:721–729. https://doi.org/10.1007/s00024-017-1697-5
Tanioka Y, Satake K (1996) Tsunami generation by horizontal displacement of ocean bottom. Geophys Res Lett 23:861–864. https://doi.org/10.1029/96GL00736
Tsushima H, Hino R, Tanioka Y et al (2012) Tsunami waveform inversion incorporating permanent seafloor deformation and its application to tsunami forecasting. J Geophys Res 117:B03311. https://doi.org/10.1029/2011JB008877
Wang Y, Satake K, Maeda T, Gusman AR (2017) Green's function-based tsunami data assimilation (GFTDA): a fast data assimilation approach toward tsunami early warning. Geophys Res Lett 44:10282–10289. https://doi.org/10.1002/2017GL075307
Watada S, Kusumoto S, Satake K (2014) Traveltime delay and initial phase reversal of distant tsunamis coupled with the self-gravitating elastic Earth. J Geophys Res Solid Earth 119:4287–4310. https://doi.org/10.1002/2013JB010841
Weatherall P, Marks KM, Jakobsson M et al (2015) A new digital bathymetric model of the world's oceans. Earth Space Sci 2:331–345. https://doi.org/10.1002/2015EA000107
Yamanaka Y (2004) EIC seismology note. http://wwweic.eri.u-tokyo.ac.jp/sanchu/Seismo_Note/2004/EIC153.html. Accessed 22 Oct 2004
Zhou H, Wei Y, Titov VV (2012) Dispersive modeling of the 2009 Samoa tsunami. Geophys Res Lett 39:L16603. https://doi.org/10.1029/2012GL053068
YW conducted GFTDA and analyzed the different performance of two models. KS contributed to the tsunami propagation model. TM and AG contributed to the tsunami data assimilation code. All authors read and approved the final manuscript.
We thank the General Bathymetric Chart of the Ocean (GEBCO) for the bathymetric data. We would also like to thank Prof. Takashi Furumura for his suggestions regarding the dispersive tsunami model. We used the JAGURS tsunami simulation code (Baba et al. 2015; available at https://github.com/jagurs-admin/jagurs) and the TDAC data assimilation code (Maeda et al. 2015; Gusman et al. 2016; Wang et al. 2017; available at https://github.com/takuto-maeda/tdac).
The TDAC code for tsunami data assimilation (Maeda et al. 2015; Gusman et al. 2016; Wang et al. 2017) is available at https://github.com/takuto-maeda/tdac. The JAGURS code for computing Green's functions and synthetic tsunami (Baba et al. 2015) is available at https://github.com/jagurs-admin/jagurs.
This work was partially supported by KAKENHI 16H01838 (K. Satake) and KAKENHI 15K16306 (T. Maeda). Y. Wang thanks the Earthquake Research Institute for providing the funding for his Master's Program (Global Science Graduate Course).
Earthquake Research Institute, The University of Tokyo, Tokyo, Japan
Yuchen Wang, Kenji Satake, Takuto Maeda & Aditya Riadi Gusman
Department of Earth and Planetary Science, Graduate School of Science, The University of Tokyo, Tokyo, Japan
Graduate School of Science and Technology, Hirosaki University, Aomori, Japan
Takuto Maeda
GNS Science, Lower Hutt, 5040, New Zealand
Aditya Riadi Gusman
Correspondence to Yuchen Wang.
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
Wang, Y., Satake, K., Maeda, T. et al. Data assimilation with dispersive tsunami model: a test for the Nankai Trough. Earth Planets Space 70, 131 (2018) doi:10.1186/s40623-018-0905-6
Tsunami forecasting
Linear dispersive model
Linear long-wave model
Seismology
\begin{definition}[Definition:Direct Image Mapping/Mapping]
Let $S$ and $T$ be sets.
Let $\powerset S$ and $\powerset T$ be their power sets.
Let $f \subseteq S \times T$ be a mapping from $S$ to $T$.
The '''direct image mapping''' of $f$ is the mapping $f^\to: \powerset S \to \powerset T$ that sends a subset $X \subseteq S$ to its image under $f$:
:$\forall X \in \powerset S: \map {f^\to} X = \begin {cases} \set {t \in T: \exists s \in X: \map f s = t} & : X \ne \O \\ \O & : X = \O \end {cases}$
\end{definition}
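The case split in the definition can be illustrated with a short Python sketch; the mapping $f$ and the sets below are illustrative choices, not part of the definition:

```python
def direct_image(f, X):
    """The direct image mapping f^-> sends a subset X of S to its image {f(s) : s in X}.
    A set comprehension over the empty set yields the empty set, so both cases of the
    definition are covered uniformly."""
    return {f(s) for s in X}

# Illustrative mapping f: s -> s mod 2 on S = {1, 2, 3, 4}
f = lambda s: s % 2
assert direct_image(f, {1, 2, 3}) == {0, 1}
assert direct_image(f, set()) == set()   # the empty set maps to the empty set
```

Note that the explicit case split for $X = \O$ in the definition is automatic here, since the image of the empty set is empty.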
\begin{document}
\maketitle
\blfootnote{Most of the notations in this work have a link to their definitions. For example, if you click or tap on any instance of $\xast$, you will jump to the place where it is defined as the minimizer of the function we consider in this work.}
\begin{abstract}
It has recently been shown that \ISTA{}, an unaccelerated optimization method, presents sparse updates for the $\ell_1$-regularized personalized PageRank problem, leading to cheap iteration complexity
and providing the same guarantees as the approximate personalized PageRank algorithm ({\textnormal{\texttt{APPR}}}\xspace{}) \citep{fountoulakis2019variational}.
In this work, we design an accelerated optimization algorithm for this problem that also performs sparse updates, providing an affirmative answer to the COLT 2022 open question of \citet{fountoulakis2022open}.
Acceleration provides a reduced dependence on the condition number,
while the dependence on the sparsity in our updates differs from the \ISTA{} approach.
Further, we design another algorithm by using conjugate directions to achieve an exact solution while exploiting sparsity. Both algorithms lead to faster convergence for certain parameter regimes. Our findings apply beyond PageRank and work for any quadratic objective whose Hessian is a positive-definite $M$-matrix. \end{abstract}
\section{Introduction}\label{sec:introduction}
\emph{Graph clustering}, the process of dividing a graph into subclusters that are internally similar or connected in some application-specific sense \citep{schaeffer2007graph}, has been widely applied in various domains, including technical \citep{virtanen2003clustering, andersen2006local}, biological \citep{xu2002clustering, bader2003automated, boyer2005syntons}, and sociological \citep{newman2003properties, traud2012social} settings. With the advent of large-scale networks, traditional approaches that require access to the entire graph have become infeasible \citep{jeub2015think, leskovec2009community, fortunato2016community}. This trend has led to the development of \emph{local graph clustering algorithms}, which only visit a small subset of vertices of the graph \citep{andersen2006local, andersen2008algorithm, mahoney2012local, spielman2013local, kloster2014heat, orecchia2014flow, veldt2016simple, wang2017capacity, yin2017local, fountoulakis2019variational}.
At the heart of the study of these algorithms lies the \emph{approximate personalized PageRank algorithm} (\newtarget{def:acronym_approximate_personalized_page_rank}{{\textnormal{\texttt{APPR}}}\xspace{}}) \citep{andersen2006local}, which approximates the solution of the PageRank linear system \citep{page1999page} and rounds the approximate solution to find local partitions in a graph. The {\textnormal{\texttt{APPR}}}\xspace{} algorithm was introduced only from an algorithmic perspective, that is, its output is determined only algorithmically and not formulated as the solution to an optimization problem. Thus, quantifying the impact of heuristic modifications on the method is difficult, see, for example, \citep{gleich2014anti}.
Recently, \citet{fountoulakis2019variational} proposed a variational formulation of the local graph clustering problem as an $\ell_1$-regularized convex optimization problem, which they solved using the \emph{iterative shrinkage-thresholding algorithm} (\newtarget{def:acronym_ista}{{\textnormal{\texttt{ISTA}}}\xspace{}}) \citep{parikh2014proximal}. In this problem, {\textnormal{\texttt{ISTA}}}\xspace{} was shown to exhibit local behaviour, which leads to a running time that only depends on the nodes that are part of the solution and its neighbors, and is independent of the size of the graph. \citet{fountoulakis2022open} raised the open question of whether accelerated versions of the {\textnormal{\texttt{ISTA}}}\xspace{}-based approach or other acceleration techniques, for example, the \emph{fast iterative shrinkage-thresholding algorithm} (\newtarget{def:acronym_fista}{{\textnormal{\texttt{FISTA}}}\xspace{}}) \citep{parikh2014proximal}, or \emph{linear coupling} \citep{allenzhu2019nearly}, could lead to faster local graph clustering algorithms. In particular, {\textnormal{\texttt{ISTA}}}\xspace{} enjoys low per-iteration complexity since its iterates are at least as sparse as the solution, and the question is whether we can attain acceleration and reduce the dependence on the condition number on the computational complexity, while keeping sparse per-iteration updates.
\paragraph{Sparse Algorithms and Acceleration.} In this work, we answer the question in the affirmative. We first study the problem beyond acceleration and propose a method based on conjugate directions that optimizes exactly and is faster than {\textnormal{\texttt{ISTA}}}\xspace{} and our accelerated algorithm in some parameter regimes. Then, we show that we can implement an approximate version of the previous method by means of acceleration while performing sparse updates, which leads to faster convergence for ill-conditioned problems, among others. See \cref{table:comparisons:riemannian} for a summary of the complexities of our algorithms and of prior work, and see \cref{sec:algorithmic_comparisons} for a discussion comparing these complexities. Our algorithms sequentially determine the coordinates in the support of the solution. The main differences between the two approaches are that the conjugate-directions-based approach solves the problem in increasing subspaces exactly and requires to incorporate new coordinates one by one, while the accelerated algorithm solves this approximately and can add any number of new coordinates at a time. Beyond the PageRank problem, our algorithms apply generally to the quadratic problem $\min_{x\in\mathbb{R}_{\geq 0}^{\n}}\{\g({\mathbf{x}})\defi\innp{{\mathbf{x}}, \mathbb{Q}{\mathbf{x}}} - \innp{{\mathbf{b}}, {\mathbf{x}}}\}$, where $\mathbb{Q}$ is a symmetric positive-definite $M$-matrix.
\paragraph{Problem Structure.} The rates achieved with our two methods exploit improved geometric understanding of the $\ell_1$-regularized PageRank problem structure that we present. In particular, the $\ell_1$-regularized problem can be posed as a problem constrained to the positive orthant $\mathbb{R}^{\n}_{\geq 0}$. Based on this formulation, we characterize a region of points for which a negative gradient coordinate $i$ indicates $i$ is in the support $\suppast$ of the optimal solution $\xast$, provide sufficient conditions for finding points in this region with negative gradient coordinates, and show coordinatewise monotonicity of minimizers restricted to some relevant increasing subspaces, among other things.
\begin{table}[ht!]
\centering
\caption{Convergence rates of different algorithms exploiting sparsity for the $\ell_1$-regularized PageRank problem and other more general quadratic optimization problems with Hessian $\mathbb{Q}$, condition number $\L/\alpha$, $\suppast \defi \supp(\xast)$, $\vol(\suppast) \defi \nnz(\mathbb{Q}_{:,\suppast})$ and $\intvol(\suppast) \defi \nnz(\mathbb{Q}_{\suppast,\suppast})$. }
\label{table:comparisons:riemannian} \begin{tabular}{llc}
\toprule
\textbf{Method} & \textbf{Time complexity} & \textbf{Space complexity} \\
\midrule
\midrule
{\textnormal{\texttt{ISTA}}}\xspace{} \citep{fountoulakis2019variational} & $\bigotilde{\volast\frac{\L}{\alpha}}$ & $\bigo{\sparsity}$ \\
\midrule
{\textnormal{\texttt{CDPR}}}\xspace{} (\cref{alg:sparse_conjugate_directions}) & $\bigo{\sparsity^3 + \sparsity \vol(\suppast)}$ & $\bigo{\sparsity^2}$\\
\midrule
\aspr{} (\cref{alg:sparse_acceleration}) & $\bigotilde{\sparsity\intvol(\suppast)\sqrt{\frac{\L}{\alpha}} + \sparsity \volast}$ & $\bigo{\sparsity}$\\
\bottomrule \end{tabular} \end{table}
\subsection{Other Related Works}\label{sec:related_works}
Our solutions make use of first-order methods: accelerated projected gradient descent \citep{nesterov1998introductory} and the method of conjugate directions \citep{nocedal1999numerical}. First-order optimization methods are attractive in the high-dimensional regime, due to their fast per-iteration complexity in comparison to higher order methods. In the strongly convex and smooth case, accelerated gradient descent is an optimal first-order method \citep{nesterov1998introductory} and it improves over gradient descent by reducing the dependence on the condition number. For this reason, accelerated gradient descent is especially useful for ill-conditioned problems. A method related to the conjugate directions method is the conjugate gradients algorithm \citep{nocedal1999numerical}. Both of these conjugate methods can work in affine subspaces \citep{gower2014conjugate}, but to the best of our knowledge, it is not known how to provably use these algorithms with other kinds of constraints, see \citep{vollebregt2014bound} and references therein. For quadratic objectives, the conjugate gradient algorithm is also an accelerated method, and it belongs to the family of Krylov subspace methods, of which the generalized minimal residual method is an important example \citep{saad1986gmres}. In fact, the conjugate gradient algorithm was the inspiration for the first nearly-accelerated method for smooth convex optimization by \citet{nemirovski_bubeck}. Conjugate methods have been used to solve linear systems \citep{saad2003iterative} and although these methods are known to exploit the sparsity of the matrix, to the best of our knowledge there are no analyses of conjugate methods that exploit the sparsity of the solution.
For the $\ell_1$-regularized PageRank problem, \citet{hu2020local} demonstrated through numerical experiments that the updates generated by {\textnormal{\texttt{FISTA}}}\xspace{} do not exhibit the same level of sparsity as those produced by {\textnormal{\texttt{ISTA}}}\xspace{} for this type of problem. To the best of our knowledge, no other works have studied the open question raised by \citet{fountoulakis2022open}.
\subsection{Preliminaries}\label{sec:preliminaries}
In this section, we introduce some definitions and notation to be used in the rest of this work.
Throughout, let $\n\in\mathbb{N}$. We use $[\n] = \{1, 2, \dots, \n\}$. We use the big-$\mathcal{O}$ notation $\newtarget{def:big_o_tilde}{\bigotilde{\cdot}}$ to omit logarithmic factors. Let $\ensuremath{\mathbb{1}} \in\mathbb{R}^{\n}$ denote the all-ones vector. Denote the support of a vector ${\mathbf{x}}\in\mathbb{R}^{\n}$ by $\supp( {\mathbf{x}} )= \mathopen{}\mathclose\bgroup\originalleft\{i\in[\n] \mid x_i \neq 0 \aftergroup\egroup\originalright\}$ and define the projection of ${\mathbf{x}}\in \mathbb{R}^{\n}$ onto a convex subset $C\subseteq \mathbb{R}^{\n}$ by $\newtarget{def:projection_operator}{\proj{C}}({\mathbf{x}})=\argmin_{{\mathbf{y}}\in C} \norm{{\mathbf{x}} - {\mathbf{y}}}_2$. For $i\in[\n]$, we use $\newtarget{def:vector_of_canonical_basis}{\canonical[i]}\in\mathbb{R}^{\n}$ to denote the $i$-th unit vector and $\newtarget{def:simplex}{\simplex{\n}}$ to denote the $\n$-dimensional simplex. For $S\subseteq [\n]$, and a function $f\colon \mathbb{R}^{\n} \to \mathbb{R}$, let $\nabla_S f({\mathbf{x}})$ be the vector containing $(\nabla_i f({\mathbf{x}}))_{i\in S}$ sorted by index. Throughout, $\newtarget{def:symm_pos_def_M_matrix_Q}{\mathbb{Q}} \in \mathcal{M}_{\newtarget{def:dimension}{\n}\times \n}(\mathbb{R})$ is always a positive-definite matrix with non-positive off-diagonal entries, that is, an $M$-matrix such that $\mathbb{Q} \succ 0$.
In this work, for one matrix $\mathbb{Q}$ of the form above and a vector ${\mathbf{b}}\in\mathbb{R}^{\n}$, we study the optimization of a quadratic of the form $\newtarget{def:function_g_constrained_version_of_l1_reg_PageRank}{\g}({\mathbf{x}}) \defi \innp{{\mathbf{x}}, \mathbb{Q}{\mathbf{x}}} - \innp{{\mathbf{b}}, {\mathbf{x}}}$ constrained to the positive orthant $\mathbb{R}_{\geq 0}^{\n}$. Without loss of generality, we can thus assume that $\mathbb{Q}$ is symmetric. By strong convexity, the solution is unique. In the sequel, we focus on optimization algorithms for this problem whose iterates always have support contained in the support of the optimal solution $\newtarget{def:optimizer}{\xast}=\argmin_{{\mathbf{x}}\in\mathbb{R}^n_{\geq 0}} \g({\mathbf{x}})$. We define $\newtarget{def:support_of_the_solution}{\suppast} \defi \supp(\xast)$. We refer to coordinates $i \in [\n]$ as good if $i \in \suppast$, and as bad otherwise. We denote by $\newtarget{def:smoothness_constant}{\L}$ and $\newtarget{def:strong_convexity_of_g}{\alpha}$ upper and lower bounds on the eigenvalues of $\mathbb{Q}$, that is, smoothness and strong convexity constants of $\g$ defined as above, respectively. In short, we have $0 \prec \alpha \I \preccurlyeq \nabla^2 \g({\mathbf{x}}) \preccurlyeq \L \I$, for ${\mathbf{x}} \in \mathbb{R}^{\n}$.
Throughout, $\newtarget{def:graph_G}{\G} = (\mathbb{V}, \edges)$ is a graph with vertex and edge sets $\newtarget{def:vertices_of_graph}{\mathbb{V}}$ and $\newtarget{def:edges_of_graph}{\edges}$, respectively. We assume that $\card{\mathbb{V}} = \n$, that is, $\G$ consists of $\n$ vertices. Given two vertices $i, j\in [\n]$, $i\newtarget{def:neighbor_in_graph}{\neigh} j$ denotes that they are neighbours. For $S\subseteq \mathbb{V}$, $i\neigh S$ indicates that $i$ is the neighbour of at least one node in $S$. As we describe in the next section, in PageRank problems, the matrix $\mathbb{Q}$ corresponds to a combination of the Laplacian of a graph and the identity matrix $\newtarget{def:identity_matrix}{\I}$. For a subset of vertices $S\subseteq \mathbb{V}$, we formally define the volume of $S$ as $\newtarget{def:volume}{\vol}(S) = \sum_{i\in S} d_i + \card{S}$, that is, as the sum of the degrees of the vertices in $S$, plus $\card{S}$ to account for the regularization, which has a similar effect to lazifying the walk given by the graph. Similarly, we formally define the internal volume of $S$ as $\newtarget{def:internal_volume}{\intvol}(S) \defi \card{S} + \sum_{(i,j)\in \edges} \mathbf{1}_{\{i,j\in S\}}$, that is, as the number of edges of the subgraph induced by $S$, plus $\card{S}$ to account for the regularization. These definitions correspond to $\vol(S) = \nnz(\mathbb{Q}_{:,S})$ and $\intvol(S) = \nnz(\mathbb{Q}_{S,S})$, where $\newtarget{def:number_of_non_zeros}{\nnz}(\cdot)$ refers to the number of non-zeros of a matrix, $\mathbb{Q}_{:,S}$ refers to the columns of $\mathbb{Q}$ indexed by $S$, and $\mathbb{Q}_{S,S}$ to the submatrix with entries $\mathbb{Q}_{i,j}$ for $i,j\in S$. This is the formal definition of $\vol(\cdot)$ and $\intvol(\cdot)$ that we use when working with a general $M$-matrix $\mathbb{Q}$. The complexity of our results depends on $\vol(\suppast)$ and $\intvol(\suppast)$.
\citet{fountoulakis2019variational} showed that for the $\ell_1$-regularized PageRank problem we have $\sum_{i\in \suppast} d_i \leq \frac{1}{\rho}$ and therefore $\vol(\suppast) \leq \frac{1}{\rho} + \sparsity$, where $\rho$ is the regularization parameter of the problem, see for example \eqref{eq:old_opt}.
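As a sanity check of these formal definitions, the following Python sketch computes $\vol(S)$ and $\intvol(S)$ directly as non-zero counts; the small path-graph $M$-matrix below is a hypothetical example, not an object from this paper:

```python
def nnz(rows):
    """Number of non-zero entries of a matrix given as a list of rows."""
    return sum(1 for row in rows for v in row if v != 0)

def vol(Q, S):
    # vol(S) = nnz(Q_{:,S}): non-zeros of the columns of Q indexed by S
    return nnz([[Q[i][j] for j in S] for i in range(len(Q))])

def intvol(Q, S):
    # intvol(S) = nnz(Q_{S,S}): non-zeros of the principal submatrix on S
    return nnz([[Q[i][j] for j in S] for i in S])

# Hypothetical M-matrix of a path graph on 3 nodes (0 - 1 - 2)
Q = [[2, -1, 0],
     [-1, 2, -1],
     [0, -1, 2]]
S = [0, 1]
assert vol(Q, S) == 5      # degrees 1 + 2, plus |S| = 2
assert intvol(Q, S) == 4   # non-zeros of the 2x2 principal submatrix
```

For this path graph, $\sum_{i\in S} d_i + \card{S} = 3 + 2 = 5$ indeed matches $\nnz(\mathbb{Q}_{:,S})$.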
\section[Personalized PageRank with l1-Regularization]{Personalized PageRank with $\ell_1$-Regularization}\label{sec:personalizedpagerank}
In this section, we introduce the PageRank problem that we study in this work, and we recall the variational formulation due to \citet{fountoulakis2019variational}. Let $\G = (\mathbb{V}, \edges)$ be a connected undirected graph with $\n$ vertices. We note that there are techniques to reduce an unconnected PageRank problem to a connected one, see for example \citet{eiron2004ranking}. Denote the adjacency matrix of $\G$ by $\newtarget{def:adjacency_matrix}{\A}$, that is, $\A_{i,j} = 1$ if $i\neigh j$ and $0$ otherwise. Let $\newtarget{def:diagonal_degree_matrix}{\mathcal{D}} \defi\operatorname{diag}(d_1, \dots, d_n)$ be the matrix with the degrees $\{d_i\}_{i=1}^{\n}$ in its diagonal. For $\alpha \in ]0, 1[$, consider the matrix \begin{align}\label{eq:Q}
\mathbb{Q} = \mathcal{D}^{-1/2} \left(\mathcal{D} - \frac{1-\alpha}{2} (\mathcal{D} + \A)\right)\mathcal{D}^{-1/2} = \alpha \I + \frac{1-\alpha}{2}\lapl \succ 0, \end{align} where $\newtarget{def:laplacian_matrix}{\lapl} \defi \I - \mathcal{D}^{-1/2}\A\mathcal{D}^{-1/2}$ is the symmetric normalized Laplacian matrix, which is symmetric and satisfies $0 \preccurlyeq \lapl \preccurlyeq 2 \I$ \citep{butler2006spectral}; the positive definiteness of $\mathbb{Q}$ then follows from the $\alpha\I$ term. In fact, by construction, $0 \prec \alpha \I \preccurlyeq \mathbb{Q} \preccurlyeq \L \I$, for $\L = 1$. Note that $\mathbb{Q}_{i,j} \leq 0$ for $i\neq j$, so indeed $\mathbb{Q}$ is a positive-definite $M$-matrix, which is what our algorithms require.
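As a numerical sanity check of \eqref{eq:Q}, the following Python sketch builds $\mathbb{Q}$ entrywise for a hypothetical $3$-node path graph and verifies both the identity $\mathbb{Q} = \alpha\I + \frac{1-\alpha}{2}\lapl$ and the $M$-matrix sign pattern; the graph and the value of $\alpha$ are illustrative choices:

```python
import math

# Hypothetical 3-node path graph: 0 - 1 - 2
A = [[0, 1, 0], [1, 0, 1], [0, 1, 0]]
d = [sum(row) for row in A]          # degrees
alpha = 0.15
n = len(A)

def Qij(i, j):
    # Q = D^{-1/2} (D - (1-alpha)/2 (D + A)) D^{-1/2}, computed entrywise
    Dij = d[i] if i == j else 0
    M = Dij - (1 - alpha) / 2 * (Dij + A[i][j])
    return M / math.sqrt(d[i] * d[j])

Q = [[Qij(i, j) for j in range(n)] for i in range(n)]

# Entrywise, Q equals alpha*I + (1-alpha)/2 * L with L = I - D^{-1/2} A D^{-1/2}
for i in range(n):
    for j in range(n):
        L_ij = (1 if i == j else 0) - A[i][j] / math.sqrt(d[i] * d[j])
        assert abs(Q[i][j] - ((alpha if i == j else 0) + (1 - alpha) / 2 * L_ij)) < 1e-12

# M-matrix sign pattern: non-positive off-diagonal entries
assert all(Q[i][j] <= 0 for i in range(n) for j in range(n) if i != j)
```

In particular, each diagonal entry equals $(1+\alpha)/2 \leq \L = 1$, consistent with the bound $\mathbb{Q} \preccurlyeq \I$.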
Next, given a distribution $\newtarget{def:personalized_distribution}{{\mathbf{s}}} \in \simplex{\n}$ over the nodes of the graph $\G$, called teleportation distribution, the personalized PageRank problem consists of optimizing the objective $
\newtarget{def:function_f_PageRank_objective}{\f}({\mathbf{x}}) \defi \frac{1}{2} \innp{ {\mathbf{x}} , \mathbb{Q} {\mathbf{x}} } - \alpha \innp{ {\mathbf{s}}, \mathcal{D}^{-1/2} {\mathbf{x}} }. $ It holds that $
\nabla \f({\mathbf{x}}) = \mathbb{Q} {\mathbf{x}} - \alpha \mathcal{D}^{-1/2}{\mathbf{s}}, $ $\nabla^2\f({\mathbf{x}}) = \mathbb{Q}$, and, thus, $\f$ is $\alpha$-strongly convex and $\L$-smooth. For $\newtarget{def:weight_in_l1_penalty}{\rho} > 0$, we are interested in the optimization of the $\ell_1$-regularized problem \begin{align}\label{eq:old_opt}
\min_{{\mathbf{x}} \in \mathbb{R}^{\n}} \f({\mathbf{x}}) + \alpha\rho \norm{\mathcal{D}^{1/2} {\mathbf{x}}}_1. \end{align} Solving \eqref{eq:old_opt} yields the same guarantees as {\textnormal{\texttt{APPR}}}\xspace{}, see \citet{fountoulakis2019variational}. The advantage of the variational formulation \eqref{eq:old_opt} is that it allows us to address the problem from an optimization perspective, as opposed to the algorithmic one of {\textnormal{\texttt{APPR}}}\xspace{}, see \citet{andersen2006local}. Due to the strong convexity of the objective, \eqref{eq:old_opt} has a unique minimizer $\xast$. \citet{fountoulakis2019variational} proved that $\xast \geq \ensuremath{\mathbb{0}}$, which implies the following optimality conditions for \eqref{eq:old_opt} and $i\in [n]$: \begin{align}\label{eq:old_optimality_conditions}
\nabla_i \f(\xast) = -\alpha\rho d_i^{1/2} \ \ \text{ if } \ \ \xast[i] > 0 \quad\quad\text{ and }\quad\quad \nabla_i \f(\xast)\in [-\alpha\rho d_i^{1/2}, 0]\ \ \text{ if } \ \ \xast[i] = 0. \end{align} Letting \begin{align} \label{eq:g}
\g({\mathbf{x}}) \defi \f({\mathbf{x}}) + \alpha\rho \innp{ \ensuremath{\mathbb{1}}, \mathcal{D}^{1/2}{\mathbf{x}}} = \frac{1}{2}\innp{{\mathbf{x}}, \mathbb{Q}{\mathbf{x}}} + \alpha \innp{ \rho\mathcal{D}\ensuremath{\mathbb{1}} - {\mathbf{s}}, \mathcal{D}^{-1/2} {\mathbf{x}} }, \end{align} the optimality conditions for $
\argmin_{{\mathbf{x}}\in\mathbb{R}^n_{\geq 0}}\g({\mathbf{x}}) $ are equivalent to \eqref{eq:old_optimality_conditions}, that is, to the optimality conditions of Problem \eqref{eq:old_opt} and we have \begin{align}\label{eq:opt_equivalent}
\min_{{\mathbf{x}}\in \mathbb{R}^{\n}}\f({\mathbf{x}}) + \alpha\rho \norm{\mathcal{D}^{1/2} {\mathbf{x}}}_1
= \min_{{\mathbf{x}}\in\mathbb{R}^n_{\geq 0}}\g({\mathbf{x}}). \end{align} Put differently, at $\xast=\argmin_{{\mathbf{x}}\in\mathbb{R}^n_{\geq 0}}\g({\mathbf{x}})$, the following optimality conditions hold for $i\in[n]$: \begin{align}\label{eq:new_optimality_conditions}
\quad\quad \nabla_i \g (\xast) = 0\ \ \text{ if } \ \ \xast[i] > 0 \quad\quad \text{ and }\quad\quad \nabla_i \g (\xast) \in [0, \alpha\rho d_i^{1/2} ], \ \ \text{ if } \ \ \xast[i] = 0. \end{align} The algorithms presented in this work apply in particular to the minimization of $\g$ defined in \eqref{eq:g}.
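To make the conditions \eqref{eq:new_optimality_conditions} concrete, here is a small Python sketch that checks them for a candidate point. The two-node graph, the teleportation vector, and the candidate minimizer below are hypothetical values worked out by hand, not data from this paper:

```python
import math

def check_optimality(Q, s, d, alpha, rho, x, tol=1e-9):
    """Check the conditions: for x >= 0,
       grad_i g(x) = 0                            if x_i > 0,
       grad_i g(x) in [0, alpha*rho*sqrt(d_i)]    if x_i = 0,
    where grad g(x) = Qx - alpha*D^{-1/2} s + alpha*rho*D^{1/2} 1."""
    n = len(x)
    for i in range(n):
        g_i = (sum(Q[i][j] * x[j] for j in range(n))
               - alpha * s[i] / math.sqrt(d[i]) + alpha * rho * math.sqrt(d[i]))
        if x[i] > tol:
            if abs(g_i) > tol:
                return False
        elif not (-tol <= g_i <= alpha * rho * math.sqrt(d[i]) + tol):
            return False
    return True

# Two nodes joined by one edge (d = [1, 1]), alpha = 0.5, rho = 0.1:
Q = [[0.75, -0.25], [-0.25, 0.75]]    # alpha*I + (1-alpha)/2 * Laplacian
s, d, alpha, rho = [1.0, 0.0], [1, 1], 0.5, 0.1
assert check_optimality(Q, s, d, alpha, rho, [0.65, 0.15])    # minimizer, by hand
assert not check_optimality(Q, s, d, alpha, rho, [0.0, 0.0])  # origin is not optimal
```

At the candidate $(0.65, 0.15)$ both gradient coordinates vanish, matching the first case of \eqref{eq:new_optimality_conditions}, while at the origin the first gradient coordinate is negative, so the second case is violated.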
\subsection[Projected Gradient Descent (PGD)]{Projected Gradient Descent (\pgd{})}\label{sec:pgd}
\begin{algorithm}
\caption{Projected gradient descent (\pgd{})}\label{alg:pgd} \begin{algorithmic}[1]
\REQUIRE Closed and convex set $ C\subseteq \mathbb{R}^{\n}$, initial point $\xt[0] \in C$, $f\colon C \to \mathbb{R}$ an $\alpha$-strongly convex and $\L$-smooth function, and $\mathbb{T}\in \mathbb{N}$.
\ENSURE $\xt[\mathbb{T}]\in C$.
\hrule
\FOR {$t= 0, 1, \ldots, \mathbb{T}-1$}
\STATE $\xt[t+1] \gets \proj{C}\left(\xt[t] - \frac{1}{\L}\nabla f(\xt[t])\right)$
\ENDFOR \end{algorithmic} \end{algorithm}
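A minimal Python sketch of \cref{alg:pgd} for $C = \mathbb{R}^{\n}_{\geq 0}$, where the projection is the coordinatewise positive part; the $2\times 2$ quadratic instance below is a hypothetical example, not one of the PageRank objectives of this paper:

```python
def pgd_nonneg(grad, L, x0, T):
    """Projected gradient descent on the nonnegative orthant:
    x^{t+1} = proj_{R^n_{>=0}}(x^t - (1/L) * grad(x^t))."""
    x = list(x0)
    for _ in range(T):
        x = [max(0.0, xi - gi / L) for xi, gi in zip(x, grad(x))]
    return x

# Hypothetical instance: g(x) = 1/2 <x, Qx> - <b, x> with a 2x2 M-matrix Q
Q = [[2.0, -1.0], [-1.0, 2.0]]       # eigenvalues 1 and 3, so L = 3 is valid
b = [1.0, -1.0]
grad = lambda x: [sum(Q[i][j] * x[j] for j in range(2)) - b[i] for i in range(2)]

x = pgd_nonneg(grad, L=3.0, x0=[0.0, 0.0], T=200)
# Converges to the constrained minimizer (1/2, 0), where grad = (0, 1/2) >= 0
```

Note that the second coordinate stays at zero throughout, since its gradient coordinate remains nonnegative along the trajectory, a behaviour we exploit later when arguing about supports.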
\citet{fountoulakis2019variational} tackled Problem \eqref{eq:old_opt} by applying {\textnormal{\texttt{ISTA}}}\xspace{} to it, initialized at $\ensuremath{\mathbb{0}}$, and they showed that each iterate $\x[t]$ of the algorithm satisfies $\x[t] \geq \ensuremath{\mathbb{0}}$. Given $\x[t-1]$, the update rule of {\textnormal{\texttt{ISTA}}}\xspace{} defines the next iterate as $
\x[t] \defi \argmin_{{\mathbf{x}} \in\mathbb{R}^{\n}} \rho\alpha\norm{\mathcal{D}^{1/2}{\mathbf{x}}}_1 + \frac{1}{2}\norm{ {\mathbf{x}}- (\x[t-1]-\nabla \f (\x[t-1]))}_2^2 = \argmin_{{\mathbf{x}}\in\mathbb{R}_{\geq 0}^{\n}} \frac{1}{2}\norm{{\mathbf{x}}- (\x[t-1]-\nabla \g (\x[t-1]))}_2^2, $ where the equality follows directly by checking each coordinate, since the problems are separable. We note that the right-hand side is the optimization problem that defines \pgd{} for $\g$ in $\mathbb{R}_{\geq 0}^{\n}$. We present projected gradient descent (\newtarget{def:acronym_projected_gradient_descent}{\pgd{}}) in \cref{alg:pgd}, which will be useful in our analysis. None of our algorithms for addressing \eqref{eq:new_optimality_conditions} run \pgd{} as a subroutine. The application of \pgd{} to the set $C\subseteq \mathbb{R}^{\n}$, initial point $\xt[0]\in C$, objective $f\colon C \to \mathbb{R}$, and number of iterations $\mathbb{T}\in\mathbb{N}$ is denoted by $\xt[\mathbb{T}] = \pgd{}( C, \xt[0], f, \mathbb{T})$.
\begin{fact}[Convergence rate of \pgd{}]\label{thm:pgd} Let $ C\subseteq \mathbb{R}^{\n}$ be a closed convex set, $\xt[0] \in C$, and $f\colon C\to\mathbb{R}$ an $\alpha$-strongly convex and $\L$-smooth function with minimizer $\xxast$. Then, for the iterates of \cref{alg:pgd}, it holds that $
\norm{\x[t] - \xxast}_2^2\leq \left(1 - \frac{1}{\kappa}\right)^t \norm{\x[0] - \xxast}_2^2, $ where $\kappa\defi\frac{\L}{\alpha}$. See \citet[Theorem~2.2.8]{nesterov1998introductory} for a proof. \end{fact}
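The contraction factor in \cref{thm:pgd} can be observed numerically. The sketch below uses a hypothetical $2\times 2$ instance with $\kappa = \L/\alpha = 3$ (an illustrative example, not an object from this paper) and checks the bound $\norm{\x[t]-\xxast}_2^2 \leq (1-1/\kappa)^t\norm{\x[0]-\xxast}_2^2$ along the whole trajectory:

```python
def pgd_trajectory(grad, L, x0, T):
    """Run projected gradient descent on R^n_{>=0}, keeping all iterates."""
    xs = [list(x0)]
    for _ in range(T):
        x = xs[-1]
        xs.append([max(0.0, xi - gi / L) for xi, gi in zip(x, grad(x))])
    return xs

def dist2(x, y):
    return sum((a - c) ** 2 for a, c in zip(x, y))

# Hypothetical instance: Q has eigenvalues 1 and 3, so alpha = 1, L = 3, kappa = 3
Q = [[2.0, -1.0], [-1.0, 2.0]]
b = [1.0, -1.0]
grad = lambda x: [sum(Q[i][j] * x[j] for j in range(2)) - b[i] for i in range(2)]
xstar = [0.5, 0.0]                   # constrained minimizer, computed by hand

xs = pgd_trajectory(grad, 3.0, [0.0, 0.0], 50)
d0 = dist2(xs[0], xstar)
for t, x in enumerate(xs):
    assert dist2(x, xstar) <= (1 - 1 / 3.0) ** t * d0 + 1e-15
```

On this instance the observed per-step contraction is in fact considerably faster than the worst-case factor $1 - 1/\kappa$.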
\subsection{Geometrical Understanding of the Problem Setting}\label{sec:geometry}
\citet{fountoulakis2019variational} proved that for their method the iterates $\x[t]$ never decrease coordinatewise, and they concluded $\xast \in \mathbb{R}_{\geq 0}^{\n}$ as a consequence of this fact and the convergence guarantees of {\textnormal{\texttt{ISTA}}}\xspace{}: $\ensuremath{\mathbb{0}} \leq \x[1] \leq \dots \leq \x[t] \leq \x[t+1]\to \xast$. We generalize this result, proven for the iterates of {\textnormal{\texttt{ISTA}}}\xspace{}, to several geometric statements about the problem. These statements hold in a more general setting, namely for a quadratic with a positive-definite $M$-matrix as Hessian. The proof illustrates the geometry of the problem and we include it below. Among other consequences, for any point ${\mathbf{x}}$ such that $x_i = 0$ if $i\not\in\suppast$ and $\nabla_{i} \g({\mathbf{x}}) \leq 0$ if $i\in\suppast$, we have ${\mathbf{x}} \leq \xast$.
\begin{proposition}\label{proposition:pgd_helper}
Let $\g$ be as in \eqref{eq:g} and let $S \subseteq [\n]$ be a set of indices such that we have a point $\xt[0]\in\mathbb{R}_{\geq 0}^{\n}$ with $\xt[0][i] = 0$ if $i\not\in S$ and $\nabla_i \g(\xt[0])\leq 0$ if $i\in S$. Let $C\defi\spann{\{\canonical[i] \mid i \in S \}} \cap \mathbb{R}_{\geq 0}^{\n}$, $\xtast[C] \defi \argmin_{{\mathbf{x}}\in C} \g({\mathbf{x}})$ and $\xast \defi \argmin_{{\mathbf{x}}\in\mathbb{R}_{\geq 0}^{\n}} \g({\mathbf{x}})$. Then: \begin{enumerate}
\item \label{property:pgd_helper_monotone} It holds that $\xt[0]\leq \xtast[C]$ and $\nabla_i \g(\xtast[C]) = 0$ for all $i \in S$.
\item \label{property:pgd_helper_positivity} If for $i\in S$, we have $x^{(0)}_i > 0$ or $\nabla_i \g(\xt[0]) < 0$, then $\xtast[C][i] > 0$.
\item \label{property:pgd_helper_subset} If $\xtast[C][i] > 0$ for all $i \in S$, we have $\xtast[C] \leq \xast$ and therefore $S \subseteq \suppast$. \end{enumerate} \end{proposition}
\begin{proof} First, by definition of $C$, for all ${\mathbf{x}} \in C$, we have $x_i = 0$ if $i\not\in S$.
Let $\{\xt[t]\}_{t=0}^\infty$ be the sequence of iterates created by \pgd{}$(C,\xt[0], \g, \cdot)$ when the algorithm is run for infinitely many iterations. We first prove that for all $t \geq 0$ and for all $i \in S$, we have $\nabla_i \g(\xt[t])\leq 0$. It holds for $t=0$ by assumption. If we assume it holds for some $t\geq 0$, then we have
\begin{equation}\label{eq:aux:pgd_update_computation}
x^{(t+1)}_i = x^{(t)}_i - \frac{1}{\L}\nabla_i \g (\xt[t]) \geq x^{(t)}_i
\end{equation}
for all $i \in S$, that is, the points do not decrease coordinatewise. Let the function $\newtarget{def:function_g_restricted_to_subspace}{\gbar}$ be $\g$ restricted to $\spann{\{\canonical[i] \mid i \in S\}}$ and note that $\nabla_i \g({\mathbf{x}}) = \nabla_i \gbar({\mathbf{x}})$ for $i \in S$. The function $\gbar$ is a quadratic with Hessian $\mathbb{Q}_{S,S}$, that is, the submatrix formed by the entries $\mathbb{Q}_{i,j}$ for $i,j \in S$. Quadratics have affine gradients, so by \eqref{eq:aux:pgd_update_computation} we have $\nabla \gbar(\xt[t+1]) = \nabla \gbar(\xt[t]) - \frac{1}{\L}\mathbb{Q}_{S,S}\nabla \gbar(\xt[t]) \leq 0$, where the last inequality is due to the assumption $\nabla \gbar(\xt[t]) \leq 0$ and the fact that $(\I-\frac{1}{\L}\mathbb{Q}_{S,S})_{i,j} \geq 0$ for all $i, j \in S$. The latter holds because for $i, j \in S$ with $i\neq j$ we have $\mathbb{Q}_{i,j} \leq 0$, and due to smoothness we have $\mathbb{Q}_{i, i} = e_i^\intercal \mathbb{Q} e_i \leq \L$.
Thus, by induction, for all $t\in \mathbb{N}$ and $i \in S$, we have $\nabla_i\g(\xt[t])\leq 0$. This has two consequences. Firstly, $\xt[0]\leq \xt[1] \leq \dots$, and so $\xt[0] \leq \xtast[C]$, since the iterates of \pgd{} converge to $\xtast[C]$ by \cref{thm:pgd}. Secondly, passing to the limit and using the continuity of $\nabla \g(\cdot)$, we get $\nabla_i \g(\xtast[C]) \leq 0$ for $i\in S$. This fact and the optimality of $\xtast[C]$ imply $\nabla_i\g(\xtast[C]) = 0$ for all $i\in S$, proving the first statement.
For the second statement, fix $i \in S$. Note that by the assumption and the update rule $x^{(t+1)}_i = x^{(t)}_i - \frac{1}{\L}\nabla_i \g (\xt[t])$, it holds that $x^{(1)}_i > 0 $, and thus, since $\xtast[C][i] \geq \xt[1][i]$, we have $\xtast[C][i] > 0$.
For the third statement, we sequentially apply the first one to obtain optimizers in increasing subspaces, until we reach $\xast$, while showing they do not decrease coordinatewise. Suppose that $\xtast[C][i] > 0$ for all $i\in S$. If $\xtast[C]= \xast$, the statement holds. Thus, we assume that $\xtast[C]\neq \xast$.
In that case, for $k \in \mathbb{N}$, define the optimizer ${\mathbf{y}}^{(*, k)} \defi \argmin_{{\mathbf{y}}\in B^{(k-1)}}\g({\mathbf{y}})$ with respect to the set $B^{(k-1)} \defi \spann{\{\canonical[i] \mid i \in R^{(k-1)}\}} \cap \mathbb{R}^n_{\geq 0}$, where $R^{(k-1)} \defi R^{(k-2)} \cup N^{(k-1)}$ for $k>0$ and $R^{(-1)} \defi S$, and where $N^{(k-1)}\defi \{i\in[\n] \mid y^{(*,k-1)}_i = 0, \nabla_i \g({\mathbf{y}}^{(*,k-1)}) < 0\}$.
By \cref{property:pgd_helper_monotone}, it holds that $\xtast[C] = {\mathbf{y}}^{(*,0)} \leq \ldots \leq {\mathbf{y}}^{(*,k)}$ and $\nabla_ig({\mathbf{y}}^{(*,k)}) = 0$ for all $i\in R^{(k-1)}$ and $k\in\mathbb{N}$.
Let $K \in \mathbb{N}$ denote the first iteration for which $R^{(K)} = R^{(K-1)}$, or, equivalently, $N^{(K)} = \emptyset$. Such a $K$ exists because otherwise $R^{(k)}\subset R^{(k+1)}$ for all $k\in \mathbb{N}$, contradicting $|R^{(k)}| \leq \n$.
Thus, $\nabla_ig({\mathbf{y}}^{(*,K)}) = 0$ for all $i\in R^{(K-1)}$ and $\nabla_ig({\mathbf{y}}^{(*,K)}) \geq 0$ for all $i\not\in R^{(K-1)}$. In summary, ${\mathbf{y}}^{(*, K)} $ satisfies the optimality conditions of the problem $\min_{{\mathbf{x}}\in\mathbb{R}_{\geq 0}^{\n}} \g({\mathbf{x}})$, implying that ${\mathbf{y}}^{(*,K)} = \xast$. Since $S= R^{(-1)} \subseteq R^{(K)}$, \cref{property:pgd_helper_subset} holds. \end{proof}
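The monotonicity in \cref{proposition:pgd_helper} is easy to observe numerically. The sketch below is purely illustrative: the matrix $Q$, the vector $c$, the quadratic $g({\mathbf{x}})=\frac{1}{2}{\mathbf{x}}^\intercal Q{\mathbf{x}}+c^\intercal {\mathbf{x}}$, and the step size are made-up toy data, not the PageRank instance of the paper.

```python
import numpy as np

# Toy instance: Q is a symmetric positive-definite M-matrix
# (off-diagonals <= 0), g(x) = 0.5 x^T Q x + c^T x, grad g(x) = Q x + c.
Q = np.array([[2., -1., 0.], [-1., 2., -1.], [0., -1., 2.]])
c = np.array([-1., 0.6, 2.])
grad = lambda x: Q @ x + c

def pgd_on_subspace(S, x0, L, iters):
    """PGD with step 1/L, projected onto span{e_i : i in S} cap R^n_{>=0}."""
    mask = np.zeros_like(x0)
    mask[list(S)] = 1.0
    xs = [x0]
    for _ in range(iters):
        x = xs[-1] - grad(xs[-1]) / L
        xs.append(np.maximum(x, 0.0) * mask)  # projection onto C
    return xs

# S = {0} satisfies the hypothesis: grad_0 g(0) = c_0 = -1 < 0.
xs = pgd_on_subspace({0}, np.zeros(3), L=4.0, iters=200)
print(xs[-1], grad(xs[-1]))
```

On this instance the iterates increase monotonically toward the subspace optimizer, whose gradient vanishes on $S$ and whose $S$-entries are strictly positive, matching the first two statements of the proposition.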
\subsection{Algorithmic Intuition}\label{sec:intuition}
In this section, we present the high-level idea of our algorithms for addressing \eqref{eq:opt_equivalent}. The core idea behind them is to start with the set of known good indices $ \Sinit=\emptyset$ and iteratively expand it, $\Sinit \subsetneq \St[0] \subsetneq \ldots \subsetneq \St[\mathbb{T}]$, until we have $\St[\mathbb{T}]=\suppast$ or we find an $\newtarget{def:accuracy_epsilon}{\epsilon}$-minimizer of \eqref{eq:opt_equivalent}. For $t\in\{0,1,\ldots, \mathbb{T}\}$, to determine elements $i\in \suppast\setminus \St[t-1]$, we let \begin{align}\label{eq:algs_solve_this}
\newtarget{def:optimizer_in_subspace}{\xtast[t]} =\argmin_{{\mathbf{x}}\in \Ct[t-1]} \g({\mathbf{x}}), \end{align} where $\Ct[t-1] \defi \spann{\{\canonical[i] \mid i \in \St[t-1]\}} \cap \mathbb{R}^n_{\geq 0}$. By an argument following \cref{proposition:pgd_helper} that we will detail later, $\nabla_i\g(\xtast[t]) < 0$ for at least one $i\in \suppast\setminus \St[t-1]$ and $\nabla_j\g(\xtast[t]) \geq 0$ for all $j \not\in \suppast$. This observation motivates the following procedure: At iteration $t\in\{0,1,\ldots, \mathbb{T}-1\}$, construct $\xtast[t]$, check whether $N^{(t)} = \{i\in[\n]\mid \nabla_i \g(\xtast[t]) < 0\}$ is non-empty, and, in that case, choose $\St[t]$ with $\St[t-1] \subsetneq \St[t] \subseteq \St[t-1] \cup N^{(t)}$ and repeat the procedure. Should it ever happen that $N^{(t)}=\emptyset$, then we have $\xtast[t] = \xast$, that is, we have found the optimal solution to \eqref{eq:opt_equivalent} and the algorithm can be terminated. When using conjugate directions as the optimization algorithm for constructing \eqref{eq:algs_solve_this}, and when incorporating good coordinates only one at a time, we obtain \cref{alg:sparse_conjugate_directions} (\newtarget{def:acronym_conjugate_directions_pagerank}{{\textnormal{\texttt{CDPR}}}\xspace{}}), see \cref{sec:cdappr}. For our second algorithm, \cref{alg:sparse_acceleration} (\aspr{}), we use accelerated projected gradient descent to construct only an approximation of \eqref{eq:algs_solve_this}, and we show that this method still allows us to proceed. We discuss the subtleties arising from using an approximation algorithm in \cref{sec:apgdappr}.
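This blueprint can be condensed into a short numerical sketch. Everything below is a hedged toy example: $Q$, $c$, and the quadratic $g$ are assumptions for illustration, and each subspace problem is solved by a direct linear solve rather than by the solvers developed in the next sections.

```python
import numpy as np

# Toy data: symmetric positive-definite M-matrix Q, g(x) = 0.5 x^T Q x + c^T x.
Q = np.array([[2., -1., 0.], [-1., 2., -1.], [0., -1., 2.]])
c = np.array([-2., 0.2, 1.])
grad = lambda x: Q @ x + c

S, x = [], np.zeros(3)
while True:
    # Coordinates with negative gradient; the tolerance guards against
    # floating-point noise in the exact subspace solves below.
    neg = np.where(grad(x) < -1e-10)[0]
    if neg.size == 0:
        break                       # x is the optimizer over the orthant
    S = sorted(set(S) | set(neg.tolist()))
    x = np.zeros(3)
    # Exact subspace optimizer: solve Q_{S,S} x_S = -c_S.
    x[S] = np.linalg.solve(Q[np.ix_(S, S)], -c[S])
print(S, x)
```

On this instance the loop discovers the support in two rounds and terminates with the orthant-constrained optimizer, whose gradient is entrywise non-negative.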
\section{Conjugate Directions for PageRank}\label{sec:cdappr} { \renewcommand\mathbb{T}{\newlink{def:final_iteration_T_of_conjugate_directions}{T}}
\let\oldxtast\xtast \let\oldxt\xt \let\oldSt\St \let\oldCt\Ct \let\oldSinit\Sinit
\let\xtast\undefined \let\xt\undefined \let\St\undefined \let\Ct\undefined \let\Sinit\undefined
\NewDocumentCommand{\xtast}{oo}{
\newlink{def:optimizer_in_subspace_for_CDPR}{
\IfNoValueTF{#1}
{{\mathbf{x}}^{(\ast, t)}}
{
\IfNoValueTF{#2}
{{\mathbf{x}}^{(\ast, #1)}}
{x^{(\ast, #1)}_{#2}}
}
} }
\NewDocumentCommand{\xt}{oo}{
\newlink{def:iterate_xt_of_CDPR}{
\IfNoValueTF{#1}{
{{\mathbf{x}}^{(t)}}
}
{
\IfNoValueTF{#2}
{{\mathbf{x}}^{(#1)}}
{x^{(#1)}_{#2}}
}
} }
\newcommand\Sinit{\newlink{def:initial_set_of_known_good_coordinates_CDPR}{S^{( -1 )}}} \newcommandx*\St[1][1=t, usedefault]{\newlink{def:set_of_known_good_coordinates_CDPR}{S^{( #1 )}}} \newcommandx*\Ct[1][1=t, usedefault]{\newlink{def:span_of_known_good_coordinates_in_Rp_CDPR}{C^{( #1 )}}}
\begin{algorithm}
\caption{Conjugate directions PageRank algorithm ({\textnormal{\texttt{CDPR}}}\xspace{})} \label{alg:sparse_conjugate_directions} \begin{algorithmic}[1]
\REQUIRE Quadratic function $g:\mathbb{R}^{\n}\to \mathbb{R}$ with Hessian $\mathbb{Q} \succ 0$ being a symmetric $M$-matrix. The $\ell_1$-regularized PageRank problem corresponds to choosing $\g$ as in \eqref{eq:g}.
\ENSURE $\xt[\mathbb{T}] =\argmin_{{\mathbf{x}}\in\mathbb{R}_{\geq 0}^{\n}} \g({\mathbf{x}})$, where $\newtarget{def:final_iteration_T_of_conjugate_directions}{\mathbb{T}} \in \mathbb{N}$ is the first iteration for which $N^{(\mathbb{T})} = \emptyset$.
\hrule
\State $t\gets 0$
\State $\xt[t]\gets \ensuremath{\mathbb{0}}$
\State $N^{(t)} \gets \left\{i\in[\n] \mid \nabla_i \g(\xt[t]) < 0\right\}$
\WHILE{$N^{(t)} \neq \emptyset$}
\State Pick any $i^{(t)} \in N^{(t)}$
\State $\newtarget{def:initial_basis_ut_in_algorithm}{\ut[t]} \gets \nabla_{i^{(t)}}\g(\xt[t])\cdot \canonical[i^{(t)}]$
\State $\beta_k^{(t)} \gets -\nabla_{i^{(t)}}\g(\xt[t])\innp{\mathbb{Q}_{i^{(t)},:}, \dtbar[k]} $ for all $k = 0, \dots, t-1$ \Comment{equal to $-\frac{\innp{\ut[t],\mathbb{Q}\dt[k]}}{\innp{ \dt[k], \mathbb{Q}\dt[k]}}$}
\State $\newtarget{def:directions_dt_in_conjugate_directions}{\dt[t]} \gets \ut[t] + \sum_{k=0}^{t-1} \beta_k^{(t)} \dt[k]$\Comment{store this sparse vector}
\State $\newtarget{def:stored_normalized_directions_dt_in_conjugate_directions}{\dtbar[t]} \gets \frac{ \dt[t]}{\innp{ \dt[t], \mathbb{Q} \dt[t]}} $ \Comment{store $\innp{ \dt[t], \mathbb{Q} \dt[t]}$}
\State $\eta^{(t)}\gets -\innp{\nabla \g(\xt[t]), \dtbar[t]}$
\State $\newtarget{def:iterate_xt_of_CDPR}{\xt[t+1]}\gets \xt[t] + \eta^{(t)}\dt[t]$
\State $N^{(t+1)} \gets \left\{i\in[\n] \mid \nabla_i \g(\xt[t+1]) < 0\right\}$
\State $t\gets t +1$
\ENDWHILE \end{algorithmic} \end{algorithm}
With the geometric properties of the problem we established in \cref{sec:personalizedpagerank}, we are ready to introduce the conjugate directions PageRank algorithm ({\textnormal{\texttt{CDPR}}}\xspace{}), \cref{alg:sparse_conjugate_directions}, a \emph{conjugate-directions}-based approach for addressing \eqref{eq:opt_equivalent}, which outperforms the {\textnormal{\texttt{ISTA}}}\xspace{}-solver due to \citet{fountoulakis2019variational} in certain parameter regimes. {\textnormal{\texttt{CDPR}}}\xspace{} is based on the algorithmic blueprint outlined in \cref{sec:intuition} and constructs $\xtast[\mathbb{T}]$ as in \eqref{eq:algs_solve_this} using conjugate directions. As we will prove formally, we have $\ensuremath{\mathbb{0}} \leq \xtast[t]$ for all $t\in\{0,1,\ldots, \mathbb{T}\}$, allowing us to solve the constrained problem \eqref{eq:algs_solve_this} by dropping the non-negativity constraints and using the method of conjugate directions. This is an important point, since this method is designed for affine spaces only and, to the best of our knowledge, cannot deal with other constraints. Conjugate directions are an attractive mechanism for finding \eqref{eq:algs_solve_this}: they exploit the sparsity of the solution, are exact, and do not rely on the strong convexity of the objective, leading to a time complexity independent of $\alpha$. Note that, even though we may learn about several new good coordinates at the end of an iteration, in order to maintain the invariants required by {\textnormal{\texttt{CDPR}}}\xspace{}, we can add at most one new coordinate to $\St[t]$ at a time. This algorithm requires more memory than the {\textnormal{\texttt{ISTA}}}\xspace{}-solver of \citet{fountoulakis2019variational} and \aspr{}, since it stores a growing $\mathbb{Q}$-orthogonal basis, obtained via Gram-Schmidt with respect to $\mathbb{Q}$, in order to perform exact optimization over $\Ct[t]$.
\cref{alg:sparse_conjugate_directions} works in the following way. Initialize with $\newtarget{def:initial_set_of_known_good_coordinates_CDPR}{\Sinit} \defi \emptyset$, and $\xtast[0]\defi \ensuremath{\mathbb{0}}$. For $t \in \{0, 1,\ldots, \mathbb{T}\}$, let the set of known good coordinates be $\newtarget{def:set_of_known_good_coordinates_CDPR}{\St[t]}\defi \St[t-1] \cup \{i^{(t)}\}$, and define $\newtarget{def:span_of_known_good_coordinates_in_Rp_CDPR}{\Ct[t]} \defi \spann{\{\canonical[i] \mid i \in \St[t] \}} \cap \mathbb{R}^n_{\geq 0}$, and $\newtarget{def:optimizer_in_subspace_for_CDPR}{\xtast[t]} \defi\argmin_{{\mathbf{x}}\in C^{(t-1)}} \g({\mathbf{x}})$. At each iteration $t \in \{0, 1, \ldots, \mathbb{T}-1\}$, we start at $\xtast[t] \geq 0$, for which it holds that $\nabla_i \g(\xtast[t]) = 0$ for $i\in\St[t-1]$; moreover, unless we are already at the optimal solution, that is, $\xtast[t] = \xast$, there exists at least one $i^{(t)} \not \in \St[t-1]$ such that $\nabla_{i^{(t)}} \g(\xtast[t]) < 0$. We arbitrarily select one such index, and then perform Gram-Schmidt with respect to $\mathbb{Q}$ in order to obtain $\dt[t]$ that is $\mathbb{Q}$-orthogonal to all $\dt[k]$ for $k < t$. Next, one can see that minimizing along the line $\xt[t] +\eta^{(t)} \dt[t]$ yields the optimizer $\xt[t+1]$ for the subspace $\spann{\{\canonical[i] \mid i \in \St[t] \}}$, which is $\xtast[t+1] \geq 0$. After at most $\sparsity$ iterations, we obtain $\xast$. We formalize and prove the claims of this overview below.
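The overview can be made concrete with a small numpy transcription of the conjugate-directions loop. This is a sketch on made-up data: $Q$, $c$, and $g({\mathbf{x}})=\frac{1}{2}{\mathbf{x}}^\intercal Q{\mathbf{x}}+c^\intercal{\mathbf{x}}$ are illustrative assumptions, and the quantities that \cref{alg:sparse_conjugate_directions} caches are recomputed here for brevity.

```python
import numpy as np

# Toy instance: symmetric positive-definite M-matrix Q, grad g(x) = Q x + c.
Q = np.array([[2., -1., 0.], [-1., 2., -1.], [0., -1., 2.]])
c = np.array([-2., 0.2, 1.])
grad = lambda x: Q @ x + c

x, dirs = np.zeros(3), []
while True:
    # Tolerance guards against floating-point noise near the optimum.
    neg = np.where(grad(x) < -1e-10)[0]
    if neg.size == 0:
        break
    i = neg[0]
    u = np.zeros(3)
    u[i] = grad(x)[i]
    # Gram-Schmidt against previous directions, in the Q-inner product.
    d = u - sum((u @ Q @ dk) / (dk @ Q @ dk) * dk for dk in dirs)
    eta = -(grad(x) @ d) / (d @ Q @ d)   # exact line search along d
    x = x + eta * d
    dirs.append(d)
print(x)   # optimizer of min_{x >= 0} g(x) on this toy instance
```

Here the iterates stay non-negative and nondecreasing, the stored directions are pairwise $\mathbb{Q}$-orthogonal, and the loop stops at the orthant-constrained optimizer.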
\begin{theorem}\linktoproof{thm:cd_approach}\label{thm:cd_approach}
For all $t\in \{0, 1, \ldots, \mathbb{T}\}$ and $k\in \{0,1,\ldots, t-1\}$, the following properties are satisfied for \cref{alg:sparse_conjugate_directions}:
\begin{enumerate}
\item \label{property:cd_approach_orthogonality} It holds that $\innp{ \dt[t], \mathbb{Q} \dt[k]} = 0$.
\item \label{property:cd_approach_orthogonality_gradient} We have that
$
\innp{\nabla \g (\xt[t]), \dt[k] }= 0
$
and
$\nabla_i \g(\xt[t]) = 0$ for all $i\in \St[t-1]$.
\item \label{property:cd_approach_x_monotone}
We have $\xt[t][i] > 0$ for all $i\in \St[t-1]$, and $\ensuremath{\mathbb{0}} = \xt[0]=\xtast[0] \leq \xt[1]=\xtast[1]\leq \ldots \leq \xt[\mathbb{T}]= \xtast[\mathbb{T}]$.
\item \label{property:cd_approach_optimality} It holds that $\xt[\mathbb{T}] = \xast$.
\end{enumerate} \end{theorem}
Unlike that of our next algorithm, \aspr{}, the time complexity of \cref{alg:sparse_conjugate_directions} does not depend on $\alpha$, $\L$, or $\epsilon$, and we optimize exactly. We detail the computational complexities of our algorithm below.
\begin{theorem}[Computational complexities]\label{thm:cd_approach_complexity}\linktoproof{thm:cd_approach_complexity}
The time complexity of \cref{alg:sparse_conjugate_directions}
is
$\bigo{| \suppast |^3 + | \suppast | \vol(\suppast)}$ and its space complexity is $\bigo{| \suppast |^2}$. \end{theorem}
\let\xtast\oldxtast \let\xt\oldxt \let\St\oldSt \let\Ct\oldCt \let\Sinit\oldSinit
}
\section{Accelerated Sparse PageRank}\label{sec:apgdappr} { \renewcommand\mathbb{T}{\newlink{def:final_iteration_T_of_ASPR}{T}}
\let\oldxtast\xtast \let\oldxt\xt \let\oldSt\St \let\oldCt\Ct \let\oldSinit\Sinit
\let\xtast\undefined \let\xt\undefined \let\St\undefined \let\Ct\undefined \let\Sinit\undefined
\NewDocumentCommand{\xtast}{oo}{
\newlink{def:optimizer_in_subspace_for_ASPR}{
\IfNoValueTF{#1}
{{\mathbf{x}}^{(\ast, t)}}
{
\IfNoValueTF{#2}
{{\mathbf{x}}^{(\ast, #1)}}
{x^{(\ast, #1)}_{#2}}
}
} }
\NewDocumentCommand{\xt}{oo}{
\newlink{def:iterate_xt_of_ASPR}{
\IfNoValueTF{#1}{
{{\mathbf{x}}^{(t)}}
}
{
\IfNoValueTF{#2}
{{\mathbf{x}}^{(#1)}}
{x^{(#1)}_{#2}}
}
} }
\newcommand\Sinit{\newlink{def:initial_set_of_known_good_coordinates_ASPR}{S^{( -1 )}}} \newcommandx*\St[1][1=t, usedefault]{\newlink{def:set_of_known_good_coordinates_ASPR}{S^{( #1 )}}} \newcommandx*\Ct[1][1=t, usedefault]{\newlink{def:span_of_known_good_coordinates_in_Rp_ASPR}{C^{( #1 )}}}
\begin{algorithm}
\caption{Accelerated projected gradient descent (\apgd{})} \label{alg:apgd} \begin{algorithmic}[1]
\REQUIRE Closed and convex set $ C\subseteq \mathbb{R}^{\n}$, initial point $\xt[0] \in C$, $f\colon C \to \mathbb{R}$ an $\alpha$-strongly convex and $\L$-smooth function, condition number $\kappa \defi \L/\alpha$, and $T\in \mathbb{N}$.
\ENSURE $\y[T]\in C$.
\hrule
\State $\z[0] \gets\y[0] \gets \x[0]$; \ \ $A_0 \gets 0$; \ \ $a_0\gets 1$
\FOR{$t= 0, 1, \ldots, T-1$}
\State $A_{t+1} \gets A_{t} + a_{t}$ \Comment{equal to $A_t (\frac{2\kappa}{2\kappa +1 - \sqrt{1+4\kappa}}) \geq A_t(1-\frac{1}{2\sqrt{\kappa}})^{-1}$ if $t\geq 1$}
\State $\x[t+1] \gets \frac{A_{t}}{A_{t+1}}\y[t] + \frac{a_{t}}{A_{t+1}} \z[t]$
\State $\z[t+1] \gets \proj{C}\left( \frac{\kappa-1+ A_{t}}{\kappa-1+A_{t+1}} \z[t] + \frac{a_{t}}{\kappa-1+ A_{t+1}} \left(\x[t+1] - \frac{1}{\alpha}\nabla f(\x[t+1])\right) \right)$
\State $\y[t+1] \gets \frac{A_{t}}{A_{t+1}}\y[t] + \frac{a_{t}}{A_{t+1}} \z[t+1]$
\State $a_{t+1} \gets A_{t+1}(\frac{2\kappa}{2\kappa +1 - \sqrt{1+4\kappa}}-1)$
\ENDFOR \end{algorithmic} \end{algorithm}
In this section, we introduce the accelerated sparse PageRank algorithm (\newtarget{def:acronym_accelerated_sparse_pagerank}{\aspr{}}) in \cref{alg:sparse_acceleration}, which is an approach based on accelerated projected gradient descent (\newtarget{def:acronym_accelerated_gradient_descent}{\apgd{}}) for addressing \eqref{eq:opt_equivalent}. Let $\xtast[0] = \ensuremath{\mathbb{0}}$, let $\Sinit = \emptyset$, and for $t\in[\mathbb{T}]$, let $\xtast[t] =\argmin_{{\mathbf{x}}\in \Ct[t-1]} \g({\mathbf{x}})$. We now explain the necessary modifications to the exact algorithm outlined in \cref{sec:intuition} such that an approximate solver of \eqref{eq:algs_solve_this} can be incorporated. First, we recall the convergence of accelerated projected gradient descent (\apgd{}) \citep{nesterov1998introductory} in \cref{alg:apgd}, which is used as a subroutine in \cref{alg:sparse_acceleration}. \apgd{} applied to the set $C\subseteq \mathbb{R}^{\n}$, initial point $\xt[0]\in C$, objective $f\colon C \to \mathbb{R}$, and number of iterations $\mathbb{T}\in\mathbb{N}$ is denoted by ${\mathbf{x}}\gets\apgd{}( C, \xt[0], f, \mathbb{T})$. For strongly convex objectives, \apgd{} enjoys the following convergence rate.
\begin{proposition}[Convergence rate of \apgd{}]\linktoproof{prop:apgd}\label{prop:apgd} Let $ C\subseteq \mathbb{R}^{\n}$ be a closed convex set, $\xt[0] \in C$, and $f\colon C\to\mathbb{R}$ an $\alpha$-strongly convex and $\L$-smooth function with minimizer $\xxast$. Then, for the iterates of \cref{alg:apgd}, it holds that $
f(\y[t]) - f(\xxast) \leq (1 - \frac{1}{2\sqrt{\kappa}})^{t-1} \frac{(\L-\alpha)\norm{\xt[0]-\xxast}^2}{2}, $ for $\kappa\defi\frac{\L}{\alpha}$.
We thus obtain an $\epsilon$-minimizer in $\mathbb{T} = 1+\ceil{2\sqrt{\kappa}\log(\frac{(\L-\alpha)\norm{\x[0]-\xxast}^2}{2\epsilon})} $
$\leq 1+\ceil{2\sqrt{\kappa}\log(\frac{(\L-\alpha)\norm{\nabla f(\xt[0])}_2^2}{2\epsilon\alpha^2})}$ iterations. \end{proposition}
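For concreteness, the recursion of \cref{alg:apgd} can be transcribed directly into numpy. The instance below is a made-up toy quadratic over $C=\mathbb{R}^{n}_{\geq 0}$; the matrix $Q$, the vector $c$, and the iteration budget are illustrative assumptions, not the paper's PageRank data.

```python
import numpy as np

# Toy quadratic g(x) = 0.5 x^T Q x + c^T x over the non-negative orthant.
Q = np.array([[2., -1., 0.], [-1., 2., -1.], [0., -1., 2.]])
c = np.array([-2., 0.2, 1.])
g = lambda x: 0.5 * x @ Q @ x + c @ x
grad = lambda x: Q @ x + c
alpha, L = np.linalg.eigvalsh(Q)[[0, -1]]       # strong convexity / smoothness
kappa = L / alpha
r = 2 * kappa / (2 * kappa + 1 - np.sqrt(1 + 4 * kappa))  # growth of A_t

proj = lambda v: np.maximum(v, 0.0)             # projection onto R^n_{>=0}
z = y = np.zeros(3)
A, a = 0.0, 1.0
for t in range(500):
    A_next = A + a
    x = (A * y + a * z) / A_next
    z = proj((kappa - 1 + A) / (kappa - 1 + A_next) * z
             + a / (kappa - 1 + A_next) * (x - grad(x) / alpha))
    y = (A * y + a * z) / A_next
    A, a = A_next, A_next * (r - 1)
print(g(y))
```

On this toy instance the iterates remain feasible and $g(\y[t])$ converges to the constrained optimum at the advertised geometric rate.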
As with \cref{alg:sparse_conjugate_directions} in the previous section, our \cref{alg:sparse_acceleration} constructs a sequence of subsets $\St[t]$ of the support of $\xast$.
In contrast to {\textnormal{\texttt{CDPR}}}\xspace{}, \cref{alg:sparse_acceleration} does not compute $\xtast[t+1]$ for $t\in \{0,1,\ldots, \mathbb{T}-1\}$ exactly, but instead employs \apgd{} as a subroutine to construct a point $\xtbar[t+1]$ that is close enough to $\xtast[t+1]$, and then reduces all positive entries of $\xtbar[t+1]$ slightly, obtaining $\xt[t+1] \leq \xtast[t+1]$. The following \cref{lemma:A_positive} establishes that if a coordinate of a point is decreased, the gradient of $\g$ at all other coordinates does not decrease, implying that for all points ${\mathbf{x}}\in\mathbb{R}^n_{\geq 0}$ satisfying ${\mathbf{x}}\leq \xtast[t+1]$, no bad coordinate has a negative gradient.
\begin{algorithm}
\caption{Accelerated sparse PageRank algorithm (\aspr{})} \label{alg:sparse_acceleration} \begin{algorithmic}[1]
\REQUIRE Quadratic function $g:\mathbb{R}^{\n}\to \mathbb{R}$ with Hessian $\mathbb{Q} \succ 0$ being a symmetric $M$-matrix, accuracy $\epsilon > 0$. The $\ell_1$-regularized PageRank problem corresponds to choosing $\g$ as in \eqref{eq:g}.
\ENSURE $\xt[\mathbb{T}]$, where $\newtarget{def:final_iteration_T_of_ASPR}{\mathbb{T}} \in \mathbb{N}$ is the first iteration for which $\St[\mathbb{T}] = \St[\mathbb{T}-1]$.
\hrule
\State $t\gets 0$
\State $\xt[0] \gets \ensuremath{\mathbb{0}}$
\State $\St[t] \gets \{i\in[\n] \mid \nabla_i \g(\xt[t]) < 0\}$
\WHILE{$\St[t] \neq \St[t-1]$}
\State $\newtarget{def:retraction_parameter_delta_t}{\delta_t} \gets \sqrt{\frac{\epsilon\alpha}{(1+\card{\St[t]})\L^2}}$
\State $\newtarget{def:accuracy_parameter_of_APGD_subproblem}{\hatepsilon}_t \gets \frac{\delta_t^2 \alpha }{2} = \frac{\epsilon\alpha^2}{2(1+\card{\St[t]})\L^2}$
\State $\newtarget{def:span_of_known_good_coordinates_in_Rp_ASPR}{\Ct[t]} \gets \spann{\{\canonical[i] \mid i \in \St[t]\} } \cap \mathbb{R}^n_{\geq 0}$
\State $\newtarget{def:iterate_before_pulling_towards_zero}{\xtbar[t+1]} \gets \apgd{}\left(\Ct[t], \xt[t], \g, 1+\Big\lceil 2\sqrt{\kappa}\log\left(\frac{(\L-\alpha)\norm{\nabla_{\St[t]} \g(\xt[t])}_2^2}{2\hatepsilon_t\alpha^2}\right)\Big\rceil\right)$
\State $\newtarget{def:iterate_xt_of_ASPR}{\xt[t+1]} \gets \max\{\ensuremath{\mathbb{0}}, \xtbar[t+1]-\delta_t \ensuremath{\mathbb{1}}\}$ \Comment{coordinatewise $\max$, only needed for $i\in \St[t]$}
\State $\newtarget{def:set_of_known_good_coordinates_ASPR}{\St[t+1]} \gets \St[t] \cup\{i\in[\n] \mid \nabla_i \g(\xt[t+1]) < 0\}$ \label{line:expanding_S_t}
\State $t\gets t + 1$
\ENDWHILE \end{algorithmic} \end{algorithm}
\begin{lemma}\linktoproof{lemma:A_positive}\label{lemma:A_positive}
Let ${\mathbf{x}} \in \mathbb{R}^{\n}$, and let ${\mathbf{y}} = {\mathbf{x}} - \epsilon \canonical[i]$, for some $\epsilon > 0$, $i\in[\n]$. Then, for all $j\in [\n]\setminus\{i\}$, it holds that $\nabla_j \g({\mathbf{y}}) \geq \nabla_j \g({\mathbf{x}})$. If instead $\epsilon < 0$, then $\nabla_j \g({\mathbf{y}}) \leq \nabla_j \g({\mathbf{x}})$. \end{lemma} The second part of \cref{lemma:A_positive} implies that $\nabla_i \g(\xt[t+1]) < 0$ can only hold for coordinates $i$ for which $\nabla_i \g(\xtast[t+1]) < 0$, but it leaves open the possibility that no coordinate satisfies the former. To address this issue, \apgd{} is run to sufficient accuracy to guarantee that $\g(\xt[t+1]) - \g(\xtast[t+1]) \leq \frac{\epsilon \alpha}{\L}$. Then, we show that either $\g(\xt[t+1]) -\g(\xast) \leq \epsilon$ or one step of \pgd{} from $\xt[t+1]$ would make more progress than is possible within the current space $\Ct[t]$, of which $\xtast[t+1]$ is the minimizer, and so the gradient contains a negative entry. All such entries correspond to good coordinates $i\in\suppast\setminus \St[t]$, similarly to what we had at $\xtast[t+1]$ in {\textnormal{\texttt{CDPR}}}\xspace{}. We note that unlike for {\textnormal{\texttt{CDPR}}}\xspace{}, this time we can incorporate all of these coordinates into $\St[t+1]$ at once. In \cref{prop:apgd_approach} below, we address all these challenges associated with computing $\xt[t+1]$ in \cref{alg:sparse_acceleration} in lieu of $\xtast[t+1]$, and we prove that \cref{alg:sparse_acceleration} indeed finds an $\epsilon$-minimizer of $\g$, while all the iterates are sparse if the solution $\xast$ is sparse.
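The mechanism behind \cref{lemma:A_positive} is that for a quadratic with $M$-matrix Hessian, $\nabla \g({\mathbf{y}})-\nabla \g({\mathbf{x}})=\mathbb{Q}({\mathbf{y}}-{\mathbf{x}})$ and the off-diagonal entries of the Hessian are non-positive. A tiny numeric check (toy $Q$ and $c$, with the assumed gradient $Q{\mathbf{x}}+c$):

```python
import numpy as np

# Toy M-matrix Q (off-diagonals <= 0) and assumed gradient Q x + c.
Q = np.array([[2., -1., 0.], [-1., 2., -1.], [0., -1., 2.]])
c = np.array([-2., 0.2, 1.])
grad = lambda x: Q @ x + c

x = np.array([0.7, 0.3, 0.1])
eps, i = 0.25, 1
y = x - eps * np.eye(3)[i]          # decrease coordinate i only
diff = grad(y) - grad(x)            # equals -eps * Q[:, i]
others = [j for j in range(3) if j != i]
print(diff[others])                 # entrywise >= 0, as the lemma states
```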
\begin{theorem}\linktoproof{prop:apgd_approach}\label{prop:apgd_approach}
Let $\newtarget{def:initial_set_of_known_good_coordinates_ASPR}{\Sinit} \defi \emptyset$, $\xtast[-1]\defi \ensuremath{\mathbb{0}}$, $\xtast[0] \defi \ensuremath{\mathbb{0}}$, and define $\newtarget{def:optimizer_in_subspace_for_ASPR}{\xtast[t]} \defi \argmin_{{\mathbf{x}}\in \Ct[t-1]} \g({\mathbf{x}})$ for $t \in[\mathbb{T}]$, where $\Ct[t-1]$ is defined in \cref{alg:sparse_acceleration}. For all $t \in \{0, 1, \dots, \mathbb{T}\}$, the following properties are satisfied for \cref{alg:sparse_acceleration}:
\begin{enumerate}
\item \label{property:apgd_approach_positivity_grad} It holds $\xtast[t][i] > 0$ if and only if $i \in \St[t-1]$. We also have $\nabla_i \g(\xtast[t]) = 0$ if $i\in \St[t-1]$.
\item \label{property:apgd_approach_x_monotone} It is $ \xt[t] \leq \xtast[t] \leq \xast$ and $\xtast[t-1] \leq \xtast[t]$.
\item \label{property:apgd_approach_S_monotone} Our set of known good indices expands $\St[t-1] \subsetneq \St[t] \defi \St[t-1] \cup \left\{i\in [\n] \mid \nabla_i \g(\xt[t])<0 \right\} \subseteq \suppast$, or $\xt[t]$ is an $\epsilon$-minimizer of $\g$. In particular, $\g(\xt[\mathbb{T}]) - \g(\xast) \leq \epsilon$.
\end{enumerate} \end{theorem}
Note that by the previous theorem, we have the chain $\ensuremath{\mathbb{0}} = \xtast[0] \leq \xtast[ 1] \leq \ldots \leq \xtast[ \mathbb{T}] \leq \xast$ and $\Sinit \subsetneq \St[0] \subsetneq \ldots \subsetneq \St[\mathbb{T}-1] = \St[\mathbb{T}] \subseteq \suppast$. This implies that every iterate of \cref{alg:sparse_acceleration} only updates coordinates in $\suppast$. Thus, the final computational complexity of this accelerated method, specified below, depends on the sparsity of the solution and related quantities, answering the question posed by \citet{fountoulakis2022open} in the affirmative. \begin{theorem}[Computational complexities]\linktoproof{thm:apgd_approach_complexity}\label{thm:apgd_approach_complexity}
The time complexity of \cref{alg:sparse_acceleration}
is \begin{align*}
\bigotildel{\sparsity\intvol(\suppast)\sqrt{\frac{\L}{\alpha}} + \sparsity\vol(\suppast)}, \end{align*}
and its space complexity is $\bigo{\sparsity}$. \end{theorem}
The question of \citet{fountoulakis2022open} suggested that one possibly has to trade off lower dependence on the condition number for greater dependence on the sparsity. Surprisingly, the term $\sparsity \intvol(\suppast)$ multiplying the condition-number term can be smaller than the corresponding term $\vol(\suppast)$ of {\textnormal{\texttt{ISTA}}}\xspace{}, so in such a case the accelerated method also improves on the dependence on the sparsity, and it enjoys an overall lower running time if $\sparsity < \L/\alpha$, see \cref{sec:algorithmic_comparisons}.
\subsection[Variants of Algorithm~\ref{alg:sparse_acceleration}]{Variants of \cref{alg:sparse_acceleration}} An attractive property of \cref{alg:sparse_acceleration} is that by performing minor modifications to it, we can exploit the geometry to stop the \apgd{} subroutine earlier, or we can naturally incorporate new lower bounds on the coordinates of $\xast$ into the algorithm.
\subsubsection[Early Termination of APGD in Algorithm \ref{alg:sparse_acceleration}]{Early Termination of \apgd{} in \aspr{}} We first present another lemma about the geometry of Problem \eqref{eq:opt_equivalent}.
\begin{lemma}\linktoproof{lemma:modification}\label{lemma:modification}
Let $S$ be a set of indices such that $\xtast[C] \defi \argmin_{{\mathbf{x}} \in C} \g({\mathbf{x}})$ satisfies $\xtast[C][j] > 0 $ if and only if $j \in S$, where $C \defi \spann{\{\canonical[j] \mid j \in S \}} \cap \mathbb{R}_{\geq 0}^{\n}$. Let ${\mathbf{x}}\in\mathbb{R}^n_{\geq 0}$ be such that $x_j = 0$ if $j\not\in S$ and $\nabla_j \g({\mathbf{x}}) \leq 0$ if $j\in S$. Then, for any coordinate $i\not\in S$ such that $\nabla_i \g({\mathbf{x}}) < 0$, we have $i\in\suppast$. \end{lemma}
By Statement~\ref{property:apgd_approach_positivity_grad} in \cref{prop:apgd_approach}, we can apply \cref{lemma:modification} with $\St[t]$ in \cref{alg:sparse_acceleration}, for any $t\in\{0,1,\ldots, \mathbb{T}-1\}$. This motivates the following modification to \cref{alg:sparse_acceleration}: In \aspr{}, if we compute the full gradient at each iteration (or every few iterations) in the \apgd{} subroutine, then, for an iterate ${\mathbf{x}}$ in \apgd{}, if we have $\nabla_i \g({\mathbf{x}}) \leq 0$ for all $i\in \St[t]$ and we observe some $j \not\in \St[t]$ such that $\nabla_j \g({\mathbf{x}}) < 0$, then we can stop the \apgd{} subroutine, incorporate all such coordinates to $\St[t+1]$ and continue with the next iteration of \cref{alg:sparse_acceleration}.
This modification does not come without drawbacks, since we need to compute full gradients in order to discover new coordinates early, instead of just gradients restricted to $\St[t]$. Interestingly, we can show that if we were to compute one full gradient for each iteration of the \apgd{} subroutine, then the complexity of the conjugate directions method is no worse than the upper bound on the complexity of this variant of \cref{alg:sparse_acceleration}, in the regime in which we prefer to use these algorithms over the {\textnormal{\texttt{ISTA}}}\xspace{}-approach of \citet{fountoulakis2019variational}. Indeed, the complexity of this variant is $\bigotilde{\sparsity\volast \sqrt{\L/\alpha}}$, and the complexity of \cref{alg:sparse_conjugate_directions}, which is $\bigo{\sparsity^3 + \sparsity \vol(\suppast)}$, can be upper bounded by $\bigo{\sparsity^2 \volast}$. If the complexity of the variant were better, up to constants and log factors, then we could replace another $\sparsity$ factor with $\sqrt{\L/\alpha}$ to conclude that this complexity is no better than the complexity $\bigotilde{\volast \frac{\L}{\alpha}}$ of the {\textnormal{\texttt{ISTA}}}\xspace{} approach. Nonetheless, one can always compute the full gradient only sporadically to discover new good coordinates earlier, and we expect the empirical performance of \cref{alg:sparse_acceleration} to improve by implementing this modification. In future work, we will extensively test our algorithms with this and other variants to assess their practical performance.
\subsubsection{Updating Constraints}\label{sec:update_constraints}
In \cref{alg:sparse_acceleration}, every time we observe $\nabla_i \g({\mathbf{x}}) \leq 0$ for all $i \in \St[t]$, whether for the iterates of the \apgd{} subroutine or for $\xt[t+1]$, we have by Statement \ref{property:pgd_helper_monotone} of \cref{proposition:pgd_helper} and Statement \ref{property:apgd_approach_x_monotone} of \cref{prop:apgd_approach} that ${\mathbf{x}} \leq \xtast[t+1] \leq \xast$. Using this new lower bound on the coordinates of $\xast$, we can update our constraints. If we initialize the constraints to $\bar{C} \gets \mathbb{R}_{\geq 0}^{\n}$, we can update them to $\bar{C}\gets\bar{C} \cap \{{\mathbf{y}} \in \mathbb{R}_{\geq 0}^{\n} \ |\ {\mathbf{y}} \geq {\mathbf{x}}\}$ every time we find one such point ${\mathbf{x}}$. This can prevent the momentum of \apgd{} from unnecessarily taking us far away. We note that these constraints are isomorphic to the positive orthant, and only require storing up to $\sparsity$ numbers.
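As a minimal illustration (hypothetical vectors and variable names of our choosing), projecting onto the tightened set $\bar{C}$ is still a coordinatewise maximum, just as for the orthant:

```python
import numpy as np

x_lb = np.zeros(4)                        # start with C-bar = R^n_{>=0}
proj = lambda v: np.maximum(v, x_lb)      # projection onto {y >= x_lb}

x_found = np.array([0.3, 0.0, 0.5, 0.0])  # a certified lower bound on x*
x_lb = np.maximum(x_lb, x_found)          # tighten C-bar with the new bound

print(proj(np.array([0.1, -0.2, 0.9, 0.4])))
```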
\let\xtast\oldxtast \let\xt\oldxt \let\St\oldSt \let\Ct\oldCt \let\Sinit\oldSinit
}
\section{Conclusion}
We successfully integrated accelerated optimization techniques into the field of graph clustering, thereby answering the open question raised by \citet{fountoulakis2022open}. Our results provide evidence of the efficacy of this approach, demonstrating that optimization-based algorithms can be effectively employed to address graph-based learning tasks with great efficiency at scale. This work holds the potential to inspire the development of new algorithms that leverage the power of advanced optimization techniques to tackle other graph-based challenges in a scalable manner.
\acks{ This research was partially funded by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) under Germany's Excellence Strategy – The Berlin Mathematics Research Center MATH$^+$ (EXC-2046/1, project ID 390685689, BMS Stipend). }
\printbibliography[heading=bibintoc]
\appendix
\section{Missing Proofs} \begin{remark} We recall that, as we pointed out in \cref{sec:pgd}, \ISTA{} on $\f$ is equivalent to projected gradient descent in $\mathbb{R}_{\geq 0}^{\n}$ on $\g$. \cref{proposition:pgd_helper} allows us to recover quite simply the result in \citep{fountoulakis2019variational} that \ISTA{} initialized at $\ensuremath{\mathbb{0}}$ has iterates with support in $\suppast$. Moreover, our argument below applies to the optimization of the more general problem where $\mathbb{Q}$ is an arbitrary symmetric positive-definite $M$-matrix.
Indeed, we satisfy the assumptions of \cref{proposition:pgd_helper} for initial point $\x[0]$ and set of indices $S \gets \{i\ |\ \nabla_i\g(\x[0]) < 0\}$, which is non-empty unless $\x[0]$ is the solution $\xast$. Let the corresponding feasible set be $C\defi\mathspan(\{\canonical[i] \ | \ i \in S \}) \cap \mathbb{R}_{\geq 0}^{\n}$. Now, while the iterates of $\pgd{}(\mathbb{R}_{\geq 0}^{\n}, \x[0], \g, \cdot)$ remain in $C$, that is, for $t$ such that $\x[t] \in C$, they behave exactly like those of $\pgd{}(C, \x[0], \g, \cdot)$ and so, we have the following invariant by the proof of \cref{proposition:pgd_helper}: $\x[t] \leq \xtast[C]$ and $\nabla_{S} \g(\x[t]) \leq 0$ and $\x[t] \leq \x[t+1]$. If this algorithm leaves $C$ at step $t$, that is, $\x[t] \in C$ and $\x[t+1] \not \in C$, we have $\nabla_{\supp(\x[t+1])} \g(\x[t]) \leq 0$, since the invariant guarantees $\nabla_{S} \g(\x[t]) \leq 0$, and if $i \in \supp(\x[t+1])\setminus S$, it must be that $\nabla_i \g(\x[t]) < 0$ by definition of the \pgd{} update rule. In particular, we can apply \cref{proposition:pgd_helper} again with initial point $\x[t]$ and the larger set of indices $\supp(\x[t+1])$, and so on, proving that the invariant $\nabla_{\supp(\x[t])} \g(\x[t]) \leq 0$ and $\x[t] \leq \x[t+1]$ holds for all $t \geq 0$. By the global convergence of $\pgd{}(\mathbb{R}_{\geq 0}^{\n}, \x[0], \g, \cdot)$, we have $\x[t] \leq \xast$ for all $t \geq 0$, so it is always $\supp(\x[t])\subseteq \suppast$. \end{remark}
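The remark can be checked on a small instance. The sketch below is illustrative only ($Q$, $c$, and $g({\mathbf{x}})=\frac{1}{2}{\mathbf{x}}^\intercal Q{\mathbf{x}}+c^\intercal{\mathbf{x}}$ are made up): PGD on the full orthant, started at $\ensuremath{\mathbb{0}}$, never touches a coordinate outside the support of the optimizer.

```python
import numpy as np

# Toy M-matrix quadratic; its orthant-constrained optimizer has support {0, 1}.
Q = np.array([[2., -1., 0.], [-1., 2., -1.], [0., -1., 2.]])
c = np.array([-2., 0.2, 1.])
grad = lambda x: Q @ x + c

x, L = np.zeros(3), 4.0           # step 1/L with L >= lambda_max(Q)
supports = set()
for _ in range(300):
    x = np.maximum(x - grad(x) / L, 0.0)   # PGD on the full orthant
    supports.add(tuple(np.nonzero(x > 0)[0]))
print(sorted(supports), x)
```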
In the rest of this section, we present proofs not found in the main text.
{ \renewcommand\mathbb{T}{\newlink{def:final_iteration_T_of_conjugate_directions}{T}}
\let\oldxtast\xtast \let\oldxt\xt \let\oldSt\St \let\oldCt\Ct \let\oldSinit\Sinit
\let\xtast\undefined \let\xt\undefined \let\St\undefined \let\Ct\undefined \let\Sinit\undefined
\NewDocumentCommand{\xtast}{oo}{
\newlink{def:optimizer_in_subspace_for_CDPR}{
\IfNoValueTF{#1}
{{\mathbf{x}}^{(\ast, t)}}
{
\IfNoValueTF{#2}
{{\mathbf{x}}^{(\ast, #1)}}
{x^{(\ast, #1)}_{#2}}
}
} }
\NewDocumentCommand{\xt}{oo}{
\newlink{def:iterate_xt_of_CDPR}{
\IfNoValueTF{#1}{
{{\mathbf{x}}^{(t)}}
}
{
\IfNoValueTF{#2}
{{\mathbf{x}}^{(#1)}}
{x^{(#1)}_{#2}}
}
} }
\newcommand\Sinit{\newlink{def:initial_set_of_known_good_coordinates_CDPR}{S^{( -1 )}}} \newcommandx*\St[1][1=t, usedefault]{\newlink{def:set_of_known_good_coordinates_CDPR}{S^{( #1 )}}} \newcommandx*\Ct[1][1=t, usedefault]{\newlink{def:span_of_known_good_coordinates_in_Rp_CDPR}{C^{( #1 )}}}
\begin{proof}\linkofproof{thm:cd_approach}
We prove the properties in order:
\begin{enumerate}
\item [\ref{property:cd_approach_orthogonality}.] For $t = 0$, the statement is trivial. Let $t\in \{0,1,\ldots, \mathbb{T}-1\}$ and assume that $\innp{ \dt[j], \mathbb{Q} \dt[k]} = 0$ for all $j,k\in \{0,1,\ldots, t\}$ such that $j>k$. Then,
\begin{align*}
\innp{ \dt[t+1], \mathbb{Q} \dt[k]} & {=} \innp{ \ut[t+1] {+} \sum_{k=0}^{t} \beta_k^{(t+1)} \dt[k], \mathbb{Q} \dt[k]} = \innp{\ut[t+1], \mathbb{Q}\dt[k] } + \beta_k^{(t+1)} \innp{\dt[k], \mathbb{Q}\dt[k]} = 0,
\end{align*}
where the second and third equalities follow from the induction hypothesis and the definition of $\beta^{(t+1)}_k$, respectively.
\item
[\ref{property:cd_approach_orthogonality_gradient}.]
By induction.
For $t=0$, there is nothing to prove. For some $t\in \{0,1,\ldots, \mathbb{T}-1\}$ suppose that for all $k\in \{0,1,\ldots, t\}$, it holds that
$
\innp{\nabla \g (\xt[t]), \dt[k] }= 0.
$
Then, since $\xt[t+1]= \xt[t] + \eta^{(t)}\dt[t]$, we have $\nabla \g(\xt[t+1]) = \nabla \g (\xt[t]) + \eta^{(t)} \mathbb{Q} \dt[t]$. Thus, by \cref{property:cd_approach_orthogonality} and the induction hypothesis, we have
\begin{align*}
\innp{\nabla \g (\xt[t+1]), \dt[k] } & = \innp{\nabla \g (\xt[t]), \dt[k] } + \eta^{(t)}\innp{ \mathbb{Q} \dt[t], \dt[k] } = 0
\end{align*}
for all $k < t$. By the definition of $\eta^{(t)}$, we also have $\innp{\nabla \g (\xt[t+1]), \dt[t] } = 0$. Thus,
\begin{align}\label{eq:orthogonality}
\innp{\nabla \g (\xt[t+1]), \dt[k] }= 0 \qquad \text{for all $t\in \{0,1,\ldots, \mathbb{T}-1\}$ and $k\in \{0,1,\ldots, t\}$.}
\end{align}
Thus, since for $t\in\{0, 1,\ldots, \mathbb{T}\}$ we have $\spann{\{\dt[0], \dt[1], \ldots, \dt[t-1]\}} = \spann{\{\canonical[i] \mid i \in \St[t-1]\}}$, it holds that $\nabla_i \g(\xt[t]) = 0$ for all $i\in \St[t-1]$ by \eqref{eq:orthogonality}.
\item [\ref{property:cd_approach_x_monotone}.]
By induction. For $t=0$, it holds that ${\mathbf{x}}_i^{(0)} > 0$ for all $i\in \Sinit = \emptyset$ and $\ensuremath{\mathbb{0}} = \xt[0]=\xtast[0]$.
Suppose that the statement holds for some $t\in\{0,1,\ldots, \mathbb{T}-1\}$, that is,
${\mathbf{x}}_i^{(t)} > 0$ for all $i\in \St[t-1]$ and
$\ensuremath{\mathbb{0}} = \xt[0]=\xtast[0] \leq \xt[1]=\xtast[1]\leq \ldots \leq \xt[t]= \xtast[t]$.
By \cref{proposition:pgd_helper} applied to $\g$, $\St[t]$, and $\xt[t] = \xtast[t]$, we have that $\xtast[t+1][i] > 0$ for all $i\in \St[t]$ and $\nabla_i \g(\xtast[t+1]) = 0$ for all $i\in \St[t]$, that is,
$\xtast[t+1]=\argmin_{{\mathbf{x}} \in \spann{\{\canonical[i] \mid i\in \St[t]\}}} \g({\mathbf{x}})$.
By \cref{property:cd_approach_orthogonality_gradient}, $\nabla_i \g(\xt[t+1]) = 0$ for all $i\in \St[t]$, that is,
$\xt[t+1]=\argmin_{{\mathbf{x}} \in \spann{\{\canonical[i] \mid i\in \St[t]\}}} \g({\mathbf{x}})$. By strong convexity of $\g$ restricted to $\spann{\{\canonical[i] \mid i\in \St[t]\}}$, $\xtast[t+1]=\xt[t+1]$.
\item [\ref{property:cd_approach_optimality}.] By \cref{property:cd_approach_x_monotone}, $ \ensuremath{\mathbb{0}} \leq \xt[\mathbb{T}]$, that is, $\xt[\mathbb{T}]$ is a feasible solution to the optimization problem \eqref{eq:opt_equivalent}. By \cref{property:cd_approach_orthogonality_gradient}, $\nabla_i \g(\xt[\mathbb{T}]) = 0$ for all $i\in \St[\mathbb{T}-1]$. Since $\mathbb{T}\in \mathbb{N}$ is the first iteration for which $N^{(\mathbb{T})} = \mathopen{}\mathclose\bgroup\originalleft\{i\in[\n] \mid \nabla_i \g(\xt[\mathbb{T}]) < 0\aftergroup\egroup\originalright\} = \emptyset$, $\xt[\mathbb{T}]$ satisfies the optimality conditions \eqref{eq:new_optimality_conditions} and $\xt[\mathbb{T}] = \xast$.
\end{enumerate} \end{proof}
\begin{proof}\linkofproof{thm:cd_approach_complexity}
We run \cref{alg:sparse_conjugate_directions} for $| \suppast |$ iterations. We summarize the cost of the operations performed during one iteration $t\in\{0, 1, \ldots, \mathbb{T}\}$. The cost of computing $N^{(t+1)}$ is $\bigo{\vol(\suppast)}$. Note that we do not need to store the gradient: at most we would store $\nabla_{\St[t+1]} \g(\xt[t+1])$, and even this is not necessary. The vectors $\xt[t+1]$, $\dt[t]$, and $\dtbar[t]$ are sparse, with support contained in $\suppast$. Thus, computing $\dtbar[t]$ takes $\bigo{\intvol(\suppast)}$ operations, and computing $\eta^{(t)}$ and $\xt[t+1]$ takes $\bigo{\sparsity}$. Finally, we discuss the complexity of computing $\beta^{(t)}_k$ for $k < t$. In order to compute these values efficiently throughout the algorithm's execution, we store the normalized $\mathbb{Q}$-orthogonal partial basis consisting of the vectors $\dtbar[k]$ for all $k \in \{0, 1, \ldots, \mathbb{T}\}$. Since $\card{\supp( \ut[t] )} = 1$, the cost of computing one $\beta^{(t)}_k$ is only $\bigo{\sparsity}$, and thus computing all of them for $k < t$ and computing $\dt[t]$ takes $\bigo{\sparsity^2}$ operations.
In summary, the time complexity of \cref{alg:sparse_conjugate_directions} is $\bigo{\sparsity^3 +\sparsity\vol(\suppast)}$. The space complexity of \cref{alg:sparse_conjugate_directions} is dominated by the cost of storing $\dt[k]$ for $k \in \{0, 1, \ldots, t-1\}$, which is $\bigo{\sparsity^2}$. \end{proof}
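The procedure analyzed in the two proofs above can be sketched in a few lines of plain Python (a hypothetical dense toy instance for clarity; a real implementation would use the sparse representations accounted for in the complexity analysis): whenever some coordinate has negative gradient, $\mathbb{Q}$-orthogonalize the corresponding canonical vector against the stored directions, take the exact line-search step, and stop once no coordinate has negative gradient.

```python
def conjugate_directions(Q, b, tol=1e-12):
    """Sparse conjugate directions for min (1/2) x^T Q x + b^T x over x >= 0,
    started at 0; dense arithmetic, for illustration only."""
    n = len(b)
    x = [0.0] * n
    dirs = []                           # stored Q-orthogonal directions d^(k)
    while True:
        g = [sum(Q[i][j] * x[j] for j in range(n)) + b[i] for i in range(n)]
        neg = [i for i in range(n) if g[i] < -tol]
        if not neg:
            return x                    # optimality conditions hold
        u = [0.0] * n
        u[neg[0]] = 1.0                 # new canonical direction e_i
        d = u[:]
        for dk in dirs:                 # Gram-Schmidt in the Q-inner product
            Qdk = [sum(Q[i][j] * dk[j] for j in range(n)) for i in range(n)]
            beta = -sum(u[i] * Qdk[i] for i in range(n)) \
                 / sum(dk[i] * Qdk[i] for i in range(n))
            d = [d[i] + beta * dk[i] for i in range(n)]
        Qd = [sum(Q[i][j] * d[j] for j in range(n)) for i in range(n)]
        eta = -sum(g[i] * d[i] for i in range(n)) \
            / sum(d[i] * Qd[i] for i in range(n))   # exact line search
        x = [x[i] + eta * d[i] for i in range(n)]
        dirs.append(d)

Q = [[2.0, -1.0, 0.0], [-1.0, 2.0, -1.0], [0.0, -1.0, 2.0]]
b = [-1.0, -1.0, 1.0]
x = conjugate_directions(Q, b)   # terminates after two directions
```

On this instance the method explores coordinates $0$ and $1$ only, returning the constrained minimizer $(1, 1, 0)$ after exactly two exact line searches, in line with the termination bound $\mathbb{T} = |\suppast|$.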
\let\xtast\oldxtast \let\xt\oldxt \let\St\oldSt \let\Ct\oldCt \let\Sinit\oldSinit
}
\begin{proof}\linkofproof{prop:apgd}
The proof of the first part is derived from \citet[Theorem 4.10]{diakonikolas2019approximate}. The second part is a straightforward corollary. For any $\mathbb{T} \geq 1+\ceil{2\sqrt{\kappa}\log(\frac{(\L-\alpha)\norm{\x[0]-\xxast}^2}{2\epsilon})}$ we have \begin{align*}
f(\y[\mathbb{T}]) - f(\xast) & \leq \mathopen{}\mathclose\bgroup\originalleft(1 - \frac{1}{2\sqrt{\kappa}}\aftergroup\egroup\originalright)^{\mathbb{T}-1} \frac{(\L-\alpha)\norm{\xt[0]-\xxast}^2}{2} & \text{$\triangleright$ by the first part of \cref{prop:apgd}}\\
& \leq \exp\mathopen{}\mathclose\bgroup\originalleft(-\frac{1}{2\sqrt{\kappa}}(\mathbb{T}-1)\aftergroup\egroup\originalright) \frac{(\L-\alpha)\norm{\xt[0]-\xxast}^2}{2} & \text{$\triangleright$ since $(1 + x) \leq e^x$ for all $x\in \mathbb{R}$}\\
& \leq \epsilon \end{align*}
In particular, by $\alpha$-strong convexity of $f$, we have $\norm{\xt[0]-\xxast}^2 \leq \frac{\norm{\nabla f(\xt[0])}_2^2}{\alpha^2} $ so we obtain an $\epsilon$-minimizer after $1+\ceil{2\sqrt{\kappa}\log(\frac{(\L-\alpha)\norm{\nabla f(\xt[0])}_2^2}{2\epsilon\alpha^2})}$ iterations. \end{proof}
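The iteration bound just derived is easy to evaluate numerically. The snippet below is a direct transcription of the bound (the function name is ours) and illustrates its $\sqrt{\kappa}$ and $\log(1/\epsilon)$ dependence on sample parameter values.

```python
import math

def apgd_iterations(L, alpha, grad0_norm, eps):
    """1 + ceil(2*sqrt(kappa) * log((L - alpha) * ||grad f(x0)||^2
    / (2 * eps * alpha^2))), with kappa = L / alpha."""
    kappa = L / alpha
    return 1 + math.ceil(2 * math.sqrt(kappa)
                         * math.log((L - alpha) * grad0_norm ** 2
                                    / (2 * eps * alpha ** 2)))

T1 = apgd_iterations(L=100.0, alpha=1.0, grad0_norm=1.0, eps=1e-6)
T2 = apgd_iterations(L=100.0, alpha=1.0, grad0_norm=1.0, eps=1e-12)
```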
\begin{proof}\linkofproof{lemma:A_positive}
Let $i, j \in[\n]$ with $i\neq j$. Geometrically, since the gradient of $\g$ is an affine function, the set of points ${\mathbf{y}}$ for which $\nabla_j \g({\mathbf{y}}) \geq c$, for some value $c$, forms a halfspace. Because the off-diagonal entries of $\mathbb{Q}$ are non-positive, decreasing the coordinate $x_i$ cannot decrease $\nabla_j \g$; that is, the corresponding $(\n-1)$-dimensional halfspace is defined by a packing constraint \citep{allenzhu2019nearly, criado2021fast}. Formally, we have \begin{align*}
\nabla_jg({\mathbf{y}})-\nabla_jg({\mathbf{x}}) & = (\mathbb{Q}{\mathbf{y}})_j - (\mathbb{Q}{\mathbf{x}})_j = -\epsilon(\mathbb{Q}\canonical[i])_j = - \epsilon \mathbb{Q}_{j,i} \geq 0, \end{align*}
where the last inequality uses the symmetry of $\mathbb{Q}$ and $\mathbb{Q}_{i,j} = -\frac{(1-\alpha)\A_{i,j}}{2d_i d_j} \leq 0$. The second statement is analogous. \end{proof}
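A quick numerical sanity check of the lemma (toy data of our choosing, not from the paper): for a symmetric, diagonally dominant matrix with non-positive off-diagonal entries, decreasing a single coordinate never decreases the gradient entries at the other coordinates, since $\nabla_j \g({\mathbf{x}} - \epsilon \canonical[i]) - \nabla_j \g({\mathbf{x}}) = -\epsilon \mathbb{Q}_{j,i} \geq 0$.

```python
def gradient(Q, b, x):
    """Gradient of (1/2) x^T Q x + b^T x."""
    n = len(x)
    return [sum(Q[i][j] * x[j] for j in range(n)) + b[i] for i in range(n)]

# Symmetric, diagonally dominant, non-positive off-diagonal => an M-matrix.
Q = [[2.0, -1.0, -0.5], [-1.0, 3.0, -1.0], [-0.5, -1.0, 2.0]]
b = [0.3, -0.2, 0.1]
x = [1.0, 0.5, 0.25]
eps = 0.1
checks = []
for i in range(3):
    y = x[:]
    y[i] -= eps                      # decrease a single coordinate
    gx, gy = gradient(Q, b, x), gradient(Q, b, y)
    # grad_j g(y) - grad_j g(x) = -eps * Q[j][i] >= 0 for j != i
    checks.append(all(gy[j] >= gx[j] - 1e-12 for j in range(3) if j != i))
```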
{ \renewcommand\mathbb{T}{\newlink{def:final_iteration_T_of_ASPR}{T}}
\let\oldxtast\xtast \let\oldxt\xt \let\oldSt\St \let\oldCt\Ct \let\oldSinit\Sinit
\let\xtast\undefined \let\xt\undefined \let\St\undefined \let\Ct\undefined \let\Sinit\undefined
\NewDocumentCommand{\xtast}{oo}{
\newlink{def:optimizer_in_subspace_for_ASPR}{
\IfNoValueTF{#1}
{{\mathbf{x}}^{(\ast, t)}}
{
\IfNoValueTF{#2}
{{\mathbf{x}}^{(\ast, #1)}}
{x^{(\ast, #1)}_{#2}}
}
} }
\NewDocumentCommand{\xt}{oo}{
\newlink{def:iterate_xt_of_ASPR}{
\IfNoValueTF{#1}{
{{\mathbf{x}}^{(t)}}
}
{
\IfNoValueTF{#2}
{{\mathbf{x}}^{(#1)}}
{x^{(#1)}_{#2}}
}
} }
\newcommand\Sinit{\newlink{def:initial_set_of_known_good_coordinates_ASPR}{S^{( -1 )}}} \newcommandx*\St[1][1=t, usedefault]{\newlink{def:set_of_known_good_coordinates_ASPR}{S^{( #1 )}}} \newcommandx*\Ct[1][1=t, usedefault]{\newlink{def:span_of_known_good_coordinates_in_Rp_ASPR}{C^{( #1 )}}}
\begin{proof}\linkofproof{prop:apgd_approach}
Since $\Sinit=\emptyset$, $\xtast[-1] = \xtast[0] = \xt[0] = \ensuremath{\mathbb{0}}$, and $\xast \in \mathbb{R}_{\geq 0}^{\n}$ by definition, the first two properties hold trivially for $t=0$. \cref{property:apgd_approach_S_monotone} also holds for $t=0$: indeed, if the set of known good indices does not expand, then $\nabla \g(\xt[0]) \geq 0$, hence $\xt[0] = \ensuremath{\mathbb{0}} = \xast$, and thus $\xt[0]$ is an $\epsilon$-minimizer of $\g$ for any $\epsilon > 0$.
We now prove the three properties inductively. Fix $t \in \{0, 1, \dots, \mathbb{T}-1\}$ and assume that \cref{property:apgd_approach_positivity_grad,property:apgd_approach_x_monotone,property:apgd_approach_S_monotone} hold for all $k \in \{0, \dots, t\}$. We will prove that they hold for $t+1$.
The value of the accuracy $\hatepsilon_t$ in \cref{alg:sparse_acceleration} was chosen to compute $\xtbar[t+1]$ close enough to $\xtast[t+1]$. In particular, we have \begin{align}\label{eq:dist_to_opt_less_than_delta}
\begin{aligned}
\norm{\xtbar[t+1]-\xtast[t+1]}^2 \circled{1}[\leq] \frac{2}{\alpha} (\g(\xt[t+1]) - \g(\xtast[t+1])) \circled{2}[\leq] \frac{2\hatepsilon_t}{\alpha} \circled{3}[=] \delta_t^2, \end{aligned} \end{align}
where we used $\alpha$-strong convexity of $\g$ for $\circled{1}$, the convergence guarantee of \apgd{} on $\xtbar[t+1]$ for $\circled{2}$, and the definition of $\hatepsilon_t$ for $\circled{3}$. The above allows us to show that $\xt[t+1] \defi \max\{\ensuremath{\mathbb{0}}, \xtbar[t+1]-\delta_t \ensuremath{\mathbb{1}}\}$, where the $\max$ is taken coordinatewise, satisfies \begin{equation}\label{eq:xt_leq_xtast}
\xt[t+1] \leq \xtast[t+1]. \end{equation} Suppose, for contradiction, that for some $i$ we have $\xt[t+1][i] > \xtast[t+1][i] \geq 0$. Then $\xt[t+1][i] =\xtbar[t+1][i] - \delta_t$ and
\[
\norm{\xtbar[t+1]-\xtast[t+1]} \geq \abs{\xtbar[t+1][i]-\xtast[t+1][i]} \geq \xtbar[t+1][i]-\xtast[t+1][i] = \xt[t+1][i] + \delta_t -\xtast[t+1][i] > \delta_t,
\]
which is a contradiction. Note that we have
\begin{equation}\label{eq:neg_grad_at_xtast}
\nabla_j \g(\xtast[t+1]) \circled{1}[\leq] \nabla_j \g(\xt[t+1]) \circled{2}[<] 0 \qquad \text{for all} \ j \in \St[t+1]\setminus \St[t],
\end{equation}
since $\circled{2}$ holds by definition of $\St[t+1]$ and $\circled{1}$ is due to \cref{lemma:A_positive} and the fact that we can write $\xt[t+1] = \xtast[t+1] - \sum_{i\in \St[t]}\omega_i \canonical[i]$ for some $\omega_i \in\mathbb{R}_{\geq 0}$, since we just proved $\xt[t+1] \leq \xtast[t+1]$ in \eqref{eq:xt_leq_xtast}, and by construction $\supp(\xt[t+1])\subseteq \St[t]$ and $\supp(\xtast[t+1]) \subseteq \St[t]$.
We now show that
\begin{equation}\label{eq:xtast_leq_xtPlus1ast}
\xtast[t] \leq \xtast[t+1].
\end{equation}
This fact holds by Item \ref{property:pgd_helper_monotone} of \cref{proposition:pgd_helper} with starting point $\xtast[t]$ and $S \gets \St[t]$, so that ${\mathbf{x}}^{(\ast, C)} \gets \xtast[t+1]$. The assumptions of \cref{proposition:pgd_helper} hold: by construction $\xtast[t][i] = 0$ for $i\in[\n]\setminus\St[t] \subseteq [\n]\setminus\St[t-1]$; we have $\nabla_i \g(\xtast[t]) = 0$ for $i\in\St[t-1]$ by the induction hypothesis of \cref{property:apgd_approach_positivity_grad}; and $\nabla_i \g(\xtast[t]) < 0$ for $i\in\St[t] \setminus \St[t-1]$ by the same argument we used to show \eqref{eq:neg_grad_at_xtast}. In the same context, we also use Item \ref{property:pgd_helper_positivity} of \cref{proposition:pgd_helper}, together with the fact that $\nabla_i \g(\xtast[t]) < 0$ for $i\in\St[t] \setminus \St[t-1]$ and that, by \cref{property:apgd_approach_positivity_grad} for $t$, we have $\xtast[t][i] > 0$ for all $i \in \St[t-1]$. Therefore, we conclude that $\xtast[t+1][i] > 0$ for all $i\in \St[t]$, which proves \cref{property:apgd_approach_positivity_grad} for $t+1$.
Moreover, now using Item \ref{property:pgd_helper_subset} of \cref{proposition:pgd_helper} in this context, we conclude
\begin{equation}\label{eq:xtast_leq_xast}
\xtast[t+1] \leq \xast.
\end{equation}
Thus, \cref{property:apgd_approach_x_monotone} holds for $t+1$ since we proved \eqref{eq:xt_leq_xtast}, \eqref{eq:xtast_leq_xtPlus1ast} and \eqref{eq:xtast_leq_xast}.
Now we prove \cref{property:apgd_approach_S_monotone} for $t+1$. We note that the value of $\delta_t$ was chosen so that the retracted point $\xt[t+1]$ still enjoys a small enough gap:
\begin{align} \label{eq:retracted_point_still_has_small_gap}
\begin{aligned}
\g(\xt[t+1] ) - \g(\xtast[t+1]) &\circled{1}[\leq] \frac{\L}{2} \norm{\xt[t+1] - \xtast[t+1]}_2^2 \\
&\circled{2}[\leq] \L (\norm{\xtbar[t+1] - \xtast[t+1]}_2^2 + \card{\St[t]} \delta_t^2) \\
& \circled{3}[\leq] \L (1 + \card{\St[t]}) \delta_t^2 \\
&\circled{4}[\leq] \frac{\epsilon\alpha}{\L}.
\end{aligned}
\end{align}
Above, $\circled{1}$ uses the optimality of $\xtast[t+1]$ and $\L$-smoothness, while $\circled{2}$ holds because, by construction of $\xt[t+1]$, we have $\sum_{i\in \St[t]} \abs{\xt[t+1][i]-\xtast[t+1][i]}^2 \leq \sum_{i\in \St[t]} (\abs{\xtbar[t+1][i]-\xtast[t+1][i]} +\delta_t)^2 \leq 2 \sum_{i\in \St[t]} (\xtbar[t+1][i]-\xtast[t+1][i])^2+2\card{ \St[t]} \delta_t^2$. We have $\circled{3}$ by \eqref{eq:dist_to_opt_less_than_delta}, and $\circled{4}$ holds by the definition of $\delta_t$, which was chosen to satisfy this inequality.
We now show that if $\xt[t+1]$ is not an $\epsilon$-minimizer of $\g$ in $\mathbb{R}_{\geq 0}^{\n}$, then one step of \pgd{} makes more progress than can be made inside $\Ct[t+1]$, by \eqref{eq:retracted_point_still_has_small_gap}, and so \pgd{} explores a new coordinate; that is, there is a coordinate $i$ with $\nabla_i \g(\xt[t+1]) < 0$, and we can extend the set of good coordinates $\St[t]$. Indeed, suppose that $\g(\xt[t+1]) - \g(\xast) > \epsilon$.
Let ${\mathbf{y}}^{(t+1)}= \proj{\mathbb{R}^n_{\geq 0}}\mathopen{}\mathclose\bgroup\originalleft(\xt[t+1] - \nabla \g(\xt[t+1])\aftergroup\egroup\originalright)$.
We use the following property from \citep[Equation below (23)]{fountoulakis2019variational}, derived from the guarantees of {\textnormal{\texttt{ISTA}}}\xspace{} on the problem, or equivalently of $\pgd{}\mathopen{}\mathclose\bgroup\originalleft(\Ct[t+1], \xt[t], \g, 1\aftergroup\egroup\originalright)$; see \cref{sec:pgd}. We have
\begin{align}\label{eq:projected_gd_guarantee_in_lemma}
\g({\mathbf{y}}^{(t+1)}) - \g(\xast) \leq \mathopen{}\mathclose\bgroup\originalleft(1 - \frac{\alpha}{\L}\aftergroup\egroup\originalright) (\g(\xt[t+1]) - \g(\xast)).
\end{align}
Consequently, we obtain
\begin{align*}
\g(\xt[t+1]) - \g(\xtast[t+1]) &\circled{1}[\leq] \frac{\epsilon\alpha}{\L} \circled{2}[<] \frac{\alpha}{\L} (\g(\xt[t+1]) - \g(\xast)) \circled{3}[\leq] \g(\xt[t+1]) - \g({\mathbf{y}}^{(t+1)}),
\end{align*}
where $\circled{1}$ holds by \eqref{eq:retracted_point_still_has_small_gap}, $\circled{2}$ holds by our earlier assumption $\g(\xt[t+1])-\g(\xast) > \epsilon$, and $\circled{3}$ is obtained by \eqref{eq:projected_gd_guarantee_in_lemma} after adding $\g(\xt[t+1]) - \g({\mathbf{y}}^{(t+1)})$ to both sides, and reorganizing. Hence,
$
\g(\xtast[t+1]) > \g({\mathbf{y}}^{(t+1)}).
$
Since $\xtast[t+1]$ is the minimizer of $\g$ in $\Ct[t]$, it holds that ${\mathbf{y}}^{(t+1)} \not \in \Ct[t]$, and so $\nabla_i \g(\xt[t+1]) < 0 $ for at least one $i \not\in \St[t]$, and $\St[t] \subsetneq \St[t+1]$.
It remains to prove that $\St[t+1] \subseteq \suppast$. For $t+1 = \mathbb{T}$ we have $\St[\mathbb{T}-1] = \St[\mathbb{T}]$ and the property holds by the induction hypothesis. For the case $t+1 \neq \mathbb{T}$, suppose the property does not hold, so that there exists $j \not\in\St[t]$ such that $j\not\in\suppast$ and $\nabla_j \g(\xt[t+1]) < 0$. In that case, we have by \eqref{eq:neg_grad_at_xtast} that $\nabla_j \g(\xtast[t+1]) < 0$. On the other hand, $\nabla_i \g(\xtast[t+1]) = 0$ and $\xtast[t+1][i] > 0$ for $i \in \St[t]$ by \cref{property:apgd_approach_positivity_grad}, and so we can apply \cref{proposition:pgd_helper} with $S\gets \St[t] \cup\{j\}$ and initial point $\xtast[t+1]$ to obtain a contradiction: by Item \ref{property:pgd_helper_positivity} we have $x^{(\ast, C)}_j > 0$, but by Item \ref{property:pgd_helper_subset} we have $x^{(\ast, C)}_j \leq \xast[j] =0$.
Finally, by \cref{property:apgd_approach_S_monotone} for $t=\mathbb{T}$, since $\St[\mathbb{T}-1]$ does not expand, that is, $\St[\mathbb{T}-1] = \St[\mathbb{T}]$, we must have $\g(\xt[\mathbb{T}]) - \g(\xast) \leq \epsilon$.
\end{proof}
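The retraction step at the heart of the proof above can be exercised in isolation (hypothetical toy vectors of our choosing): whenever $\|\bar{{\mathbf{x}}} - {\mathbf{x}}^*\| \leq \delta$, the point $\max\{\ensuremath{\mathbb{0}}, \bar{{\mathbf{x}}} - \delta\ensuremath{\mathbb{1}}\}$ is coordinatewise below ${\mathbf{x}}^*$, so in particular its support is contained in $\supp({\mathbf{x}}^*)$.

```python
from math import sqrt

def retract(xbar, delta):
    """Coordinatewise max(0, xbar - delta)."""
    return [max(0.0, v - delta) for v in xbar]

x_star = [0.5, 0.0, 0.2, 0.0]
perturb = [0.05, 0.04, -0.03, 0.02]
delta = sqrt(sum(p * p for p in perturb))   # so ||x_bar - x_star|| = delta
x_bar = [a + p for a, p in zip(x_star, perturb)]
x = retract(x_bar, delta)
```

Since $|\bar{x}_i - x^*_i| \leq \delta$ for every coordinate, we get $\bar{x}_i - \delta \leq x^*_i$, hence $\max(0, \bar{x}_i - \delta) \leq x^*_i$, exactly the inequality \eqref{eq:xt_leq_xtast} used in the proof.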
\begin{proof}\linkofproof{thm:apgd_approach_complexity}
For each iteration, the time complexity of \cref{alg:sparse_acceleration} is the cost of the \apgd{} subroutine plus the full gradient computation in Line \ref{line:expanding_S_t}. By \cref{prop:apgd_approach}, \apgd{} is called at most $\mathbb{T}\leq\sparsity$ times, and it runs for $\cO\mathopen{}\mathclose\bgroup\originalleft(\sqrt{\frac{\L}{\alpha}} \log\mathopen{}\mathclose\bgroup\originalleft(\frac{(\L-\alpha)\|\nabla_{\St[t]}\g(\xt[t])\|_2^2}{\hatepsilon_t\alpha^2}\aftergroup\egroup\originalright) \aftergroup\egroup\originalright)$ iterations at each stage $t$. One iteration of \apgd{} involves computing the gradient restricted to the current subspace of good coordinates and updating the iterates, at a cost of $\bigo{\intvol(\suppast)}$. The computation of the full gradient takes $\bigo{\vol(\suppast)}$ operations.
Thus, the total running time of \cref{alg:sparse_acceleration} is
\begin{align*}
&\bigol{\sparsity \intvol(\suppast) \sqrt{\frac{\L}{\alpha}}\log\mathopen{}\mathclose\bgroup\originalleft(\frac{(\L-\alpha)\max_{t\in \{0, 1, \ldots, \mathbb{T}-1\}}\|\nabla_{\St[t]}\g(\xt[t])\|_2^2}{\hatepsilon_t\alpha^2} \aftergroup\egroup\originalright) + \sparsity \vol(\suppast)} \\
& \circled{1}[=] \bigol{\sparsity \intvol(\suppast) \sqrt{\frac{\L}{\alpha}}\log\mathopen{}\mathclose\bgroup\originalleft(\frac{\L^2(\L-\alpha)\norm{\ensuremath{\mathbb{0}}-\xast}^2}{\hatepsilon_t\alpha^2}\aftergroup\egroup\originalright) + \sparsity \vol(\suppast)}\\
& = \bigotildel{\sparsity \intvol(\suppast) \sqrt{\frac{\L}{\alpha}} + \sparsity \vol(\suppast)}, \end{align*}
where $\circled{1}$ holds since by $\L$-smoothness of $\g$ restricted to $\spann{\{\canonical[i]\mid i\in\St[t]\}}$ and by $\ensuremath{\mathbb{0}} \leq \xt[t]\leq \xtast[t]\leq \xast$ for all $t\in [\mathbb{T}]$, we have $\norm{\nabla_{\St[t]}\g(\xt[t])}_2^2 \leq \L\norm{\xt[t] -\xtast[t]}_2^2 \leq \L\norm{\ensuremath{\mathbb{0}} -\xast}_2^2$.
To further interpret the bound in the $\ell_1$-regularized PageRank problem, we can further bound \begin{align*}
\norm{\ensuremath{\mathbb{0}} -\xast}_2^2 & \leq \frac{1}{\alpha^2} \norm{\nabla_{\suppast} \g (\ensuremath{\mathbb{0}} ) - \nabla_{\suppast} \g (\xast )}_2^2& \text{$\triangleright$ by $\alpha$-str. convexity of $\g$ in $\spann{\{\canonical[i]\mid i\in\suppast\}}$}\\
& \leq \frac{1}{\alpha^2} \norm{\nabla_{\suppast} \g (\ensuremath{\mathbb{0}} )}_2^2 & \text{$\triangleright$ by optimality of $\xast$}\\
& \leq \frac{1}{\alpha^2}\norm{(-\alpha \mathcal{D}^{-1/2}{\mathbf{s}} + \alpha\rho \mathcal{D}^{1/2}\ensuremath{\mathbb{1}})_{\suppast}}_2^2 & \text{$\triangleright$ by the gradient definition}\\
&\leq \frac{1}{\alpha^2}\norm{(-\mathcal{D}^{-1/2}\ensuremath{\mathbb{1}}+\mathcal{D}^{1/2}\ensuremath{\mathbb{1}})_{\suppast}}_2^2 & \text{$\triangleright$ $\alpha, {\mathbf{s}}[i], \rho \leq 1$}\\
&\leq \frac{1}{\alpha^2}(1 + \sqrt{\vol(\suppast)})^2 \sparsity& \text{$\triangleright$ the maximum $d_i$ for $i\in\suppast$ is at most $\vol(\suppast)$}\\
&= \bigol{\frac{1}{\alpha^2}\abs{\suppast}\vol(\suppast)}. \end{align*}
Then, by the definition of $\hatepsilon_t$, the time complexity of \cref{alg:sparse_acceleration} of the $\ell_1$-regularized PageRank problem is
\begin{align*}
&\bigol{| \suppast |\intvol(\suppast)\sqrt{\frac{\L}{\alpha}}\log\mathopen{}\mathclose\bgroup\originalleft(\frac{2\L^4 (1+\card{\St[t]})(\L-\alpha) | \suppast | \vol(\suppast)}{\alpha^6\epsilon}\aftergroup\egroup\originalright) + \sparsity\vol(\suppast)}. \end{align*}
The space complexity of \cref{alg:sparse_acceleration} is dominated by the cost of storing the gradient $\nabla_{\St[t]} \g(\xt[t])$, which is $\bigo{\sparsity}$, since $\St[t] \subseteq \suppast$. Note that we need to compute the full gradient when updating $\St[t+1]$, but we only store the new indices. \end{proof}
\let\xtast\oldxtast \let\xt\oldxt \let\St\oldSt \let\Ct\oldCt \let\Sinit\oldSinit
}
\begin{proof}\linkofproof{lemma:modification}
Fix $j \not\in S$ such that $\nabla_j \g({\mathbf{x}}) < 0$. By assumption, $x_i = 0$ if $i\not\in S$ and $\nabla_i \g({\mathbf{x}}) \leq 0$ if $i\in S$, and therefore we can apply \cref{proposition:pgd_helper} with $S$ and ${\mathbf{x}}$ to conclude that $\ensuremath{\mathbb{0}} \leq {\mathbf{x}} \leq \xtast[C]$ and $\nabla_i \g(\xtast[C]) = 0$ for all $i\in S$. We can thus write ${\mathbf{x}} = \xtast[C] - \sum_{i\in S}\omega_i \canonical[i]$ with $\omega_i \in\mathbb{R}_{\geq 0}$ for all $i\in S$. By \cref{lemma:A_positive}, it holds that $\nabla_j \g(\xtast[C]) \leq \nabla_j \g({\mathbf{x}}) < 0$. We now use \cref{property:pgd_helper_positivity,property:pgd_helper_subset} of \cref{proposition:pgd_helper} with the set of indices $\bar{S}\defi S \cup \{j\}$ and starting point $\xtast[C]$. By \cref{property:pgd_helper_positivity}, we have for $\xtast[\bar{C}]=\argmin_{{\mathbf{x}}\in\bar{C}}\g({\mathbf{x}})$ that $\xtast[\bar{C}][j] > 0$, where $\bar{C} \defi \spann{\{\canonical[i] \mid i \in \bar{S}\}}$. By \cref{property:pgd_helper_monotone}, we have $\xtast[\bar{C}][i] \geq \xtast[C][i] > 0$ for $i \in S$. Thus, $\xtast[\bar{C}][i]> 0$ for all $i\in\bar{S}$, and by \cref{property:pgd_helper_subset} it holds that $\bar{S} \subseteq \suppast$; in particular $j\in\suppast$.
\end{proof}
\section{Algorithmic Comparisons}\label{sec:algorithmic_comparisons} {\textnormal{\texttt{CDPR}}}\xspace{} has worse space complexity, $\cO(\sparsity^2)$, than \aspr{} and {\textnormal{\texttt{ISTA}}}\xspace{}, which both use $\cO(\sparsity)$. However, since {\textnormal{\texttt{CDPR}}}\xspace{} finds the exact solution, it outperforms the other methods in running time for small enough $\epsilon$. Note that the time complexities of {\textnormal{\texttt{ISTA}}}\xspace{} and \aspr{} depend on $\frac{1}{\epsilon}$ only logarithmically. We perform the remaining comparisons treating $\log(1/\epsilon)$ as a constant, since it is one in practice. If \begin{align*}
\frac{\L}{\alpha} > \max\mathopen{}\mathclose\bgroup\originalleft\{\frac{\sparsity^3}{\volast}, \sparsity \aftergroup\egroup\originalright\}, \end{align*} then {\textnormal{\texttt{CDPR}}}\xspace{} performs better than {\textnormal{\texttt{ISTA}}}\xspace{}, up to constants. Since $\volast\geq \sparsity$, this is, for example, satisfied when $\frac{\L}{\alpha} > \sparsity^2$. If \begin{align*}
\frac{\L}{\alpha} > \max\mathopen{}\mathclose\bgroup\originalleft\{\mathopen{}\mathclose\bgroup\originalleft(\frac{\sparsity\intvol(\suppast)}{\volast}\aftergroup\egroup\originalright)^2, \sparsity \aftergroup\egroup\originalright\}, \end{align*} then \aspr{} performs better than {\textnormal{\texttt{ISTA}}}\xspace{}, up to constants and log factors. This is, for example, satisfied when $\frac{\L}{\alpha} > \sparsity^2$ or when $\frac{\L}{\alpha} > \sparsity$ and $\volast > \sparsity^{5/2}$ since $\intvol(\suppast) \leq \sparsity^2$. If the convergence rates of {\textnormal{\texttt{CDPR}}}\xspace{} and \aspr{} are dominated by $\cO(\sparsity \volast)$, then the algorithms perform similarly. However, if the time complexities of {\textnormal{\texttt{CDPR}}}\xspace{} and \aspr{} are of orders $\cO(\sparsity^3)$ and $\cO(\sparsity \volast \sqrt{\frac{\L}{\alpha}})$, respectively, then {\textnormal{\texttt{CDPR}}}\xspace{} performs better than \aspr{} for \begin{align*}
\frac{\L}{\alpha} > \mathopen{}\mathclose\bgroup\originalleft(\frac{\sparsity^2}{\intvol(\suppast)}\aftergroup\egroup\originalright)^2, \end{align*} up to constants and log factors. We note that although \citet{fountoulakis2019variational} describe their method as using $\bigo{\vol(\suppast)}$ memory, their {\textnormal{\texttt{ISTA}}}\xspace{} solver actually only requires $\bigo{\sparsity}$ space, as it is enough to store the entries of the iterates and gradients corresponding to the good coordinates, whereas the gradient entries for bad coordinates can be discarded immediately after computation.
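The comparisons above can be packaged into a rough decision helper (a hypothetical sketch of our own: constants and logarithmic factors are dropped, the function names are ours, and the ISTA cost is taken to be $\frac{\L}{\alpha}\vol(\suppast)$ up to such factors, which is the reading under which the displayed thresholds come out as stated).

```python
def dominant_costs(m, vol, intvol, kappa):
    """Dominant running-time terms; m = |supp(x*)|, kappa = L / alpha."""
    return {
        "CDPR": m ** 3 + m * vol,
        "ASPR": m * intvol * kappa ** 0.5 + m * vol,
        "ISTA": kappa * vol,
    }

def best_method(m, vol, intvol, kappa):
    """Return the method with the smallest dominant cost."""
    costs = dominant_costs(m, vol, intvol, kappa)
    return min(costs, key=costs.get)
```

For instance, with $\sparsity = 10$, $\volast = 100$, $\intvol(\suppast) = 50$, and $\frac{\L}{\alpha} = 10^4 > \sparsity^2$, both CDPR and ASPR beat ISTA, consistent with the thresholds above.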
\end{document}
\begin{document}
\title[Dirichlet Series Associated to Cubic Fields with Given Quadratic Resolvent] {Dirichlet Series Associated to Cubic Fields with Given Quadratic Resolvent}
\author{Henri Cohen} \address{Universit\'e Bordeaux I, Institut de Math\'ematiques, U.M.R. 5251 du C.N.R.S, 351 Cours de la Lib\'eration, 33405 TALENCE Cedex, FRANCE} \email{[email protected]} \author{Frank Thorne} \address{Department of Mathematics, University of South Carolina, 1523 Greene Street, Columbia, SC 29208, USA} \email{[email protected]}
\begin{abstract}
Let $k$ be a quadratic field. We give an explicit formula for the Dirichlet series $\sum_K|\textnormal{Disc}(K)|^{-s}$, where the sum is over isomorphism classes of all cubic fields whose quadratic resolvent field is isomorphic to $k$.
Our work is a sequel to \cite{CM} (see also \cite{M}), where such formulas are proved in a more general setting, in terms of sums over characters of certain groups related to ray class groups. In the present paper we carry the analysis further and prove explicit formulas for these Dirichlet series over $\mathbb{Q}$, and in a companion paper we do the same for quartic fields having a given cubic resolvent.
As an application, we compute tables of the number of $S_3$-sextic fields $E$ with $|\textnormal{Disc}(E)| < X$, for $X$ ranging up to $10^{23}$. An accompanying PARI/GP implementation is available from the second author's website.
\end{abstract}
\maketitle \section{Introduction} A classical problem in algebraic number theory is that of {\itshape enumerating number fields} by discriminant. Let $N^{\pm}_d(X)$ denote the number of isomorphism classes of number fields $K$ with $\deg(K) = d$ and $0 < \pm \textnormal{Disc}(K) < X$. The quantity $N^{\pm}_d(X)$ has seen a great deal of study; see (for example) \cite{CDO, B_icm, T_four} for surveys of classical and more recent work.
It is widely believed that $N^{\pm}_d(X) = C^{\pm}_d X + o(X)$ for all $d\ge2$. For $d = 2$ this is classical, and the case $d = 3$ was proved in 1971 work of Davenport and Heilbronn \cite{DH}. The cases $d = 4$ and $d = 5$ were proved much more recently by Bhargava \cite{B4, B5}. In addition, Bhargava \cite{B_conj} conjectured values for the constants $C^{\pm}_{d,S_d}$ for $d > 5$, where the additional index $S_d$ means that one counts only degree $d$ number fields whose Galois closure has Galois group isomorphic to $S_d$.
Related questions have also seen recent attention. For example, Belabas \cite{Bel} developed and implemented a fast algorithm to compute large tables of cubic fields, which has proved essential for subsequent numerical computations (including one carried out in this paper). Based on Belabas's data, Roberts \cite{R} conjectured the existence of a secondary term of order $X^{5/6}$ in $N^{\pm}_3(X)$, and this was proved (independently, and using different methods) by Bhargava, Shankar, and Tsimerman \cite{BST}, and by Taniguchi and the second author \cite{TT}. Further details and references can be found in the survey papers above.
In the present paper we study cubic fields from a different angle. In 1954 Cohn \cite{Cohn} studied {\itshape cyclic} cubic fields and proved that \begin{equation}\label{eqn_cohn} \sum_{K \ \textnormal{cyclic}} \frac{1}{\textnormal{Disc}(K)^s} = - \frac{1}{2} + \frac{1}{2} \bigg( 1 + \frac{2}{3^{4s}} \bigg) \prod_{p \equiv 1 \pmod 6} \bigg(1 + \frac{2}{p^{2s}}\bigg)\;. \end{equation} To formulate a related question for noncyclic fields, fix a fundamental discriminant $D$. Given a noncyclic cubic field $K$, its Galois closure $\widetilde{K}$ has Galois group $S_3$ and hence contains a unique quadratic subfield $k$, called the {\itshape quadratic resolvent}. For a fixed $D$, let $\mathcal{F}(\mathbb{Q}(\sqrt{D}))$ be the set of all cubic fields $K$ whose quadratic resolvent field is $\mathbb{Q}(\sqrt{D})$. For any $K \in \mathcal{F}(\mathbb{Q}(\sqrt{D}))$ we have $\textnormal{Disc}(K) = D f(K)^2$ for some positive integer $f(K)$, and we define \begin{equation} \Phi_D(s) := \frac{1}{2} + \sum_{K \in \mathcal{F}(\mathbb{Q}(\sqrt{D}))} \frac{1}{f(K)^s}\;, \end{equation} where the constant $1/2$ is added to simplify the final formulas.
Motivated by Cohn's formula \eqref{eqn_cohn}, we may ask if $\Phi_D(s)$ can be given an explicit form.
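Cohn's identity \eqref{eqn_cohn} is easy to test numerically. The sketch below (plain Python, standard library only; the helper names are ours) expands the right-hand side Euler product as a truncated Dirichlet series and compares each coefficient with the classical count of cyclic cubic fields of conductor $f = 9^e p_1\cdots p_k$ (distinct primes $p_i \equiv 1 \pmod 6$, $e \in \{0,1\}$), namely $2^{e+k-1}$; note that matching the single field of conductor $9$ (discriminant $81$, the real subfield of $\mathbb{Q}(\zeta_9)$) forces the factor $1 + 2\cdot 3^{-4s}$ at the prime $3$.

```python
from math import isqrt

X = 10 ** 5        # compare coefficients of Disc^{-s} for all Disc <= X

def cyclic_cubic_count(f):
    """Number of cyclic cubic fields of conductor f (discriminant f^2),
    via the classical parametrization f = 9^e * p_1...p_k."""
    e = 0
    if f % 9 == 0:
        f //= 9
        e = 1
    if f % 3 == 0:
        return 0
    k = 0
    p = 2
    while p * p <= f:
        if f % p == 0:
            f //= p
            if f % p == 0 or p % 6 != 1:
                return 0            # f not squarefree, or a bad prime
            k += 1
        p += 1
    if f > 1:
        if f % 6 != 1:
            return 0
        k += 1
    return 2 ** (e + k - 1) if e + k >= 1 else 0

# Coefficients of P = (1 + 2*81^{-s}) * prod_{p = 1 mod 6} (1 + 2*p^{-2s}).
P = [0] * (X + 1)
P[1] = 1

def multiply_factor(m, c):
    """Multiply the truncated Dirichlet series P by (1 + c * m^{-s})."""
    for n in range(X // m, 0, -1):
        if P[n]:
            P[n * m] += c * P[n]

multiply_factor(81, 2)
B = isqrt(X)
sieve = [True] * (B + 1)
for q in range(2, B + 1):
    if sieve[q]:
        for r in range(q * q, B + 1, q):
            sieve[r] = False
        if q % 6 == 1:
            multiply_factor(q * q, 2)

# By Cohn's identity, the number of cyclic cubic fields of discriminant n
# should equal P[n] / 2 for every n > 1.
mismatches = [n for n in range(2, X + 1)
              if P[n] != 2 * (cyclic_cubic_count(isqrt(n))
                              if isqrt(n) ** 2 == n else 0)]
```

The smallest nonzero coefficients occur at discriminants $49$ and $81$ (one field each) and at $63^2 = 3969$ (two fields of conductor $63$).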
The answer is yes, as was essentially shown by A.~Morra and the first author in \cite{CM}, using Kummer theory. They proved a very general formula enumerating relative cubic extensions of any base field. However, this formula is rather complicated, and it is not in a form which is immediately conducive to applications. In the present paper, we will show that this formula can be put in such a form when the base field is $\mathbb{Q}$; our formula (Theorem \ref{thm_main_cubic}) is similar to \eqref{eqn_cohn} but involves one additional Euler product for each cubic field of discriminant $-D/3$, $-3D$, or $-27D$.
\subsection{An application} One application of our result is to enumerating $S_3$-sextic field extensions, i.e., sextic field extensions $\widetilde{K}$ which are Galois over $\mathbb{Q}$ with Galois group $S_3$. Suppose that $\widetilde{K}$ is such a field, where $K$ and $k$ are the cubic and quadratic subfields respectively, the former being defined only up to isomorphism. Then $k$ is the quadratic resolvent of $K$, and in addition to the formula $\textnormal{Disc}(K) = \textnormal{Disc}(k) f(K)^2$ we have \begin{equation} \textnormal{Disc}(\widetilde{K}) = \textnormal{Disc}(K)^2 \textnormal{Disc}(k) = \textnormal{Disc}(k)^3 f(K)^4, \end{equation} so that our formulas may be used to count all such $\widetilde{K}$ of bounded discriminant, starting from Belabas's tables \cite{Bel} of cubic fields. Ours is not the only way to enumerate such $\widetilde{K}$, but it is straightforward to implement and it seems to be (roughly) the most efficient.
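In code, the discriminant relation reads as follows (a minimal hypothetical sketch; in an actual computation the pairs $(\textnormal{Disc}(k), f(K))$ would be read from Belabas's cubic field tables \cite{Bel}, and the sample rows below use the standard cubic fields defined by $x^3 - x - 1$, of discriminant $-23$ and hence $f = 1$, and $x^3 + x - 1$, of discriminant $-31$).

```python
def sextic_disc(disc_k, f):
    """Disc of the Galois closure: Disc(k)^3 * f^4."""
    return disc_k ** 3 * f ** 4

def count_sextics(cubic_table, X):
    """cubic_table: list of (Disc(k), f) pairs, one per cubic field K.
    Counts S3-sextic fields Ktilde with |Disc(Ktilde)| < X."""
    return sum(1 for dk, f in cubic_table if abs(sextic_disc(dk, f)) < X)

sample = [(-23, 1), (-31, 1)]
```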
We implemented this algorithm using PARI/GP \cite{pari} to
compute counts of $S_3$-sextic fields $\widetilde{K}$ with $|\textnormal{Disc}(\widetilde{K})| < 10^{23}$. In Section \ref{sec_computations} we present our data, and the accompanying code is available from the second author's website. \\ \\ {\bf Outline of the paper.} In Section \ref{sec_cubic_intro} we introduce our notation and give the main results. In Section \ref{sec_cubic_prelim} we summarize the work of Morra and the first author \cite{CM}, and prove several propositions which will be needed for the proof of the main result. Our work relies heavily on work of Ohno \cite{Ohno} and Nakagawa \cite{N}, establishing an identity for binary cubic forms. In the same section, we give a result (Proposition \ref{case22}) which controls the splitting type of the prime $3$ in certain cubic extensions, and illustrates an application of Theorem \ref{thm_main_cubic}. Finally, in Section \ref{sec_cm} we prove Theorem \ref{thm_main_cubic}, using the main theorem of \cite{CM}, recalled as Theorem \ref{theorem61}, as a starting point. In Section \ref{sec_examples} we give some numerical examples which were helpful in double-checking our results, and in Section \ref{sec_computations} we describe our computation of $S_3$-sextic fields.
\section*{Acknowledgments} The authors would like to thank Karim Belabas, Franz Lemmermeyer, Guillermo Mantilla-Soler, and Simon Rubinstein-Salzedo, among many others, for helpful discussions related to the topic of this paper. We would especially like to thank the anonymous referee of \cite{TT}, who (indirectly) suggested the application to counting $S_3$-sextic fields.
\section{Statement of Results}\label{sec_cubic_intro}
We begin by introducing some notation. In what follows, by abuse of language we use the term ``cubic field'' to mean ``isomorphism class of cubic number fields''.
\begin{definition}\label{def_omegal} Let $E$ be a cubic field. For a prime number $p$ we set $$\omega_E(p)=\begin{cases} -1&\text{\quad if $p$ is inert in $E$\;,}\\ 2&\text{\quad if $p$ is totally split in $E$\;,}\\ 0&\text{\quad otherwise.} \end{cases}$$ \end{definition}
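For computations, $\omega_E(p)$ can be read off from a defining polynomial of $E$. The following sketch (in Python for concreteness; our actual computations use PARI/GP) assumes $E=\mathbb{Q}[x]/(f)$ for a monic integral cubic $f$ whose discriminant equals $\textnormal{Disc}(E)$, so that Dedekind's theorem applies at every unramified prime and the splitting type of $p$ is determined by the factorization of $f$ modulo $p$:

```python
def omega(f, p):
    """omega_E(p) for E = Q[x]/(x^3 + a2*x^2 + a1*x + a0), with f = (a0, a1, a2).
    Assumes p is unramified in E and Z_E = Z[x]/(f), so that by Dedekind's
    theorem the number of roots of f in F_p determines the splitting type:
    3 roots -> totally split, 0 roots -> inert (f irreducible mod p),
    1 root -> partially split."""
    a0, a1, a2 = f
    roots = sum(1 for x in range(p) if (x**3 + a2 * x**2 + a1 * x + a0) % p == 0)
    return {3: 2, 0: -1}.get(roots, 0)

# E = Q[x]/(x^3 - x - 1), the cubic field of discriminant -23:
f = (-1, -1, 0)
# omega(f, 2) == -1 (2 is inert), omega(f, 5) == 0 (5 is partially split),
# omega(f, 59) == 2 (59 is totally split).
```

This brute-force root count is only meant to illustrate the definition; for serious computations one would factor $f$ modulo $p$ with a computer algebra system.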
\begin{remarks}
\begin{enumerate} \item We have $\omega_E(p) = \chi(\sigma_p)$, where $\chi$ is the character of the standard representation of $S_3$, and $\sigma_p$ is the Frobenius element of $E$ at $p$. \item Note that we have $\omega_E(p)=0$ if and only if $\leg{\textnormal{Disc}(E)}{p}\ne1$, and since in all cases that we will use we have $\textnormal{Disc}(E)=-D/3$, $-3D$, or $-27D$ for some fundamental discriminant $D$, for $p\ne3$ this is true if and only if $\leg{-3D}{p}\ne1$. Thus, in Euler products involving the quantities $1+\omega_E(p)/p^s$ we can either include all $p\ne3$, or restrict to $\leg{-3D}{p}=1$. \end{enumerate} \end{remarks} \begin{definition}
\begin{enumerate}\item Let $D$ be a fundamental discriminant (including $1$). We let $D^*$ be the discriminant of the {\itshape mirror field} $\mathbb{Q}(\sqrt{-3D})$, so that $D^* = -3D$ if $3 \nmid D$
and $D^* = -D/3$ if $3 | D$. \item For any fundamental discriminant $D$ we denote by ${\text {\rm rk}}_3(D)$ the $3$-rank of the class group of the field $\mathbb{Q}(\sqrt{D})$. \item For any integer $N$ we let ${\mathcal L}_N$ be the set of cubic fields of discriminant $N$. We will use the notation ${\mathcal L}_N$ only for $N=D^*$ or $N=-27D$, with $D$ a fundamental discriminant. \item If $K_2=\mathbb{Q}(\sqrt{D})$ with $D$ fundamental we denote by $\mathcal{F}(K_2)$ the set of cubic fields with resolvent field equal to $K_2$, or equivalently, with discriminant of the form $Df^2$.
\item With a slight abuse of notation, we let $${\mathcal L}_3(K_2)={\mathcal L}_3(D)={\mathcal L}_{D^*}\cup{\mathcal L}_{-27D}.$$ \end{enumerate}\end{definition}
\begin{remark} Scholz's theorem tells us that for $D<0$ we have $0\le {\text {\rm rk}}_3(D)-{\text {\rm rk}}_3(D^*)\le1$ (or equivalently that for $D>0$ we have $0\le {\text {\rm rk}}_3(D^*)-{\text {\rm rk}}_3(D)\le1$), and also gives a necessary and sufficient condition for ${\text {\rm rk}}_3(D)={\text {\rm rk}}_3(D^*)$ in terms of the fundamental unit of the real field. \end{remark}
\begin{definition} As in the introduction, for any fundamental discriminant $D$ we define the Dirichlet series \begin{equation} \Phi_D(s) := \frac{1}{2} + \sum_{K \in \mathcal{F}(\mathbb{Q}(\sqrt{D}))} \frac{1}{f(K)^s}. \end{equation} \end{definition}
Our main theorem is as follows:
\begin{theorem}\label{thm_main_cubic} For any fundamental discriminant $D$ we have \begin{equation}\label{eqn_main_cubic} c_D\Phi_D(s)=\dfrac{1}{2}M_1(s)\prod_{\leg{-3D}{p}=1}\left(1+\dfrac{2}{p^s}\right) +\sum_{E\in{\mathcal L}_3(D)}M_{2,E}(s)\prod_{\leg{-3D}{p}=1}\left(1+\dfrac{\omega_E(p)}{p^s}\right)\;,\end{equation} where $c_D=1$ if $D = 1$ or $D < -3$, $c_D=3$ if $D = -3$ or $D > 1$, and the $3$-Euler factors $M_1(s)$ and $M_{2,E}(s)$ are given in the following table. \end{theorem}
\centerline{
\begin{tabular}{|c||c|c|c|} \hline Condition on $D$ & $M_1(s)$ & $M_{2,E}(s),\ E\in{\mathcal L}_{D^*}$ & $M_{2,E}(s),\ E\in{\mathcal L}_{-27D}$\\ \hline\hline $3\nmid D$ & $1+2/3^{2s}$ & $1+2/3^{2s}$ & $1-1/3^{2s}$\\ \hline $D\equiv3\pmod9$ & $1+2/3^s$ & $1+2/3^s$ & $1-1/3^s$\\ \hline $D\equiv6\pmod9$ & $1+2/3^s+6/3^{2s}$ & $1+2/3^s+3\omega_E(3)/3^{2s}$ & $1-1/3^s$\\ \hline \end{tabular}}
\begin{remarks}
\begin{enumerate}\item When $D\equiv3\pmod9$ we have $D^*\equiv2\pmod3$, so $3$ is partially split in any cubic field of discriminant $D^*$, and hence $\omega_E(3)=0$ for such fields. It follows that when $E\in{\mathcal L}_{D^*}$ we have $M_{2,E}(s)=1+2/3^s+3\omega_E(3)/3^{2s}$ for all $D$ such that $3\mid D$. \item When $3\nmid D$ there are no terms in $1/3^s$, in accordance with Proposition \ref{prop_disc_vals} below. \item In the terms involving $E\in{\mathcal L}_{D^*}$ the condition $\leg{-3D}{p}=1$ can be replaced by $p\ne3$ and even omitted altogether if $3\nmid D$, and in the terms involving $E\in{\mathcal L}_{-27D}$ it can be omitted. \item The case $D = 1$ is the formula \eqref{eqn_cohn} of Cohn mentioned previously, and the case $D = -3$ was proved by Morra and the first author \cite{CM}. The paper \cite{CM} also proves \eqref{eqn_main_cubic} when $D < 0$ and $3 \nmid h(D)$, in which case $\mathcal{L}_3(D) = \emptyset$. In her thesis \cite{M}, Morra also proves some special cases of an analogue of \eqref{eqn_main_cubic} for cubic extensions of imaginary quadratic fields. Finally, one additional case of \eqref{eqn_main_cubic} was proved in \cite{T_no_ep}, with an application to Shintani zeta functions. \end{enumerate} \end{remarks}
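The rows of the table for $E\in{\mathcal L}_{D^*}$ can be checked mechanically against the unified expression of Remark 1. In the following sketch (an illustrative encoding, not code from our computations), a $3$-Euler factor $c_0+c_1/3^s+c_2/3^{2s}$ is recorded as the coefficient tuple $(c_0,c_1,c_2)$:

```python
# M_{2,E}(s) for E in L_{D^*}, transcribed row by row from the table,
# as coefficient tuples (c0, c1, c2) in x = 3^{-s}:
def M2_table(row, omega3=None):
    if row == "3 ndiv D":
        return (1, 0, 2)          # 1 + 2/3^{2s}
    if row == "D = 3 mod 9":
        return (1, 2, 0)          # 1 + 2/3^s
    if row == "D = 6 mod 9":
        return (1, 2, 3 * omega3)  # 1 + 2/3^s + 3*omega_E(3)/3^{2s}

# Unified expression from Remark 1, valid whenever 3 | D:
def M2_unified(omega3):
    return (1, 2, 3 * omega3)
```

For $D\equiv3\pmod9$ one has $\omega_E(3)=0$, and indeed `M2_table("D = 3 mod 9")` agrees with `M2_unified(0)`, while the row for $D\equiv6\pmod9$ is literally the unified expression.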
\section{Preliminaries}\label{sec_cubic_prelim} We briefly summarize the work of \cite{CM}, and introduce some further notation which will be needed in the proof. We assume from now on that $D \neq 1, -3$; these cases are similar but simpler, and are already handled in \cite{CM, M} (and the case $D = 1$ in \cite{Cohn}).
Suppose that $K/\mathbb{Q}$ is a cubic field of discriminant $D n^2$, where $D \not \in \{ 1, -3 \}$ is a fundamental discriminant, and let $N$ be the Galois closure of $K$. Then $N(\sqrt{-3})$ is a cyclic cubic extension of $L := \mathbb{Q}(\sqrt{D}, \sqrt{-3})$, and Kummer theory implies that $N(\sqrt{-3}) = L(\alpha^{1/3})$ for some $\alpha \in L$. We write (following \cite[Remark 2.2]{CM}) \begin{equation}\label{eq:glq} {\text {\rm Gal}}(L/\mathbb{Q}) = \{1, \ \tau, \ \tau_2, \ \tau \tau_2 \}, \end{equation} where $\tau, \ \tau_2, \ \tau \tau_2$ fix $\sqrt{D}, \ \sqrt{-3}, \ \sqrt{-3D}$ respectively.
The starting point of \cite{CM} is a correspondence between such fields $K$ and such elements $\alpha$. In particular, isomorphism classes of such $K$ are in bijection with equivalence classes of elements $1 \neq \overline{\alpha} \in L^{\times}/(L^{\times})^3$, with $\alpha$ identified with its inverse, such that $\alpha \tau'(\alpha) \in (L^{\times})^3$ for $\tau' \in \{ \tau, \tau_2 \}$. We express this by writing (as in \cite[Definition 2.3]{CM}) $\overline{\alpha} \in (L^{\times}/(L^{\times})^3)[T]$, where $T \subseteq \mathbb{F}_3[{\text {\rm Gal}}(L/\mathbb{Q})]$ is defined by $T = \{ \tau + 1, \tau_2 + 1 \}$, and the notation $[T]$ means that $\overline{\alpha}$ is annihilated by $T$.
To go further, we introduce the following definition:
\begin{definition}\label{defsel} Let $k$ be a number field and $\ell$ be a prime. \begin{enumerate}\item We say that an element $\alpha\in k^*$ is an $\ell$-virtual unit if $\alpha\mathbb{Z}_k=\mathfrak q^\ell$ for some ideal $\mathfrak q$ of $k$, or equivalently, if $v_{\mathfrak p}(\alpha)\equiv0\pmod{\ell}$ for all prime ideals $\mathfrak p$, and we denote by $V_{\ell}(k)$ the group of $\ell$-virtual units. \item We define the $\ell$-Selmer group $S_{\ell}(k)$ of $k$ as $S_{\ell}(k)=V_{\ell}(k)/{k^*}^{\ell}$. \end{enumerate}\end{definition}
Using this definition, it is straightforward to check that the bijection described above induces a bijection between fields $K$ as above, and triples $(\mathfrak{a}_0, \mathfrak{a}_1, \overline{u})$ (up to equivalence with the triple $(\mathfrak{a}_1, \mathfrak{a}_0, 1/\overline{u})$), satisfying the following:
\begin{enumerate} \item $\mathfrak{a}_0$ and $\mathfrak{a}_1$ are coprime integral squarefree ideals of $L$ such that $\mathfrak{a}_0 \mathfrak{a}_1^2 \in (I/I^3)[T]$ (where $I$ is the group of fractional ideals of $L$), and $\overline{\mathfrak{a}_0 \mathfrak{a}_1^2} \in {\text {\rm Cl}}(L)^3$. \item $\overline{u}\in S_3(L)[T]$, and $\overline{u}\ne1$ if $\mathfrak{a}_0=\mathfrak{a}_1=\mathbb{Z}_L$. \end{enumerate}
Indeed, given $\alpha$ such that $N(\sqrt{-3}) = L(\alpha^{1/3})$, we can write uniquely $\alpha=\mathfrak{a}_0\mathfrak{a}_1^2\mathfrak{q}^3$ with $\mathfrak{a}_0$ and $\mathfrak{a}_1$ coprime integral squarefree ideals, and since $\overline{\alpha} \in (L^{\times}/(L^{\times})^3)[T]$ and the ideal class of $\mathfrak{a}_0\mathfrak{a}_1^2$ is equal to that of $\mathfrak{q}^{-3}$, the conditions on the ideals are satisfied. Conversely, given a triple as above, we write $\mathfrak{a}_0 \mathfrak{a}_1^2 \mathfrak{q}_0^3 = \alpha_0 \mathbb{Z}_L$ for some $\alpha_0 \in (L^* / (L^*)^3)[T]$ and some ideal $\mathfrak{q}_0$. Then $K$ is the cubic subextension of $L(\sqrt[3]{\alpha_0 u})$, for any lift $u$ of $\overline{u}$.
It is easy to see that $\mathfrak{a}_0 \mathfrak{a}_1 = \mathfrak{a}_{\alpha} \mathbb{Z}_L$ for some ideal $\mathfrak{a}_{\alpha}$ of $\mathbb{Q}(\sqrt{D})$, and the conductor $\mathfrak{f}(K(\sqrt{D})/\mathbb{Q}(\sqrt{D}))$ is equal to $\mathfrak{a}_{\alpha}$ apart from a complicated $3$-adic factor. Furthermore, $\mathfrak{f}(K(\sqrt{D})/\mathbb{Q}(\sqrt{D})) = f(K) \mathbb{Z}_{\mathbb{Q}(\sqrt{D})}$, and the Dirichlet series for $\Phi_D(s)$ consists of a sum involving the norms of ideals $\mathfrak{a}_0$ and $\mathfrak{a}_1$ satisfying the conditions above. The condition $\overline{\mathfrak{a}_0 \mathfrak{a}_1^2} \in {\text {\rm Cl}}(L)^3$ may be detected by summing over characters of ${\text {\rm Cl}}(L)/{\text {\rm Cl}}(L)^3$, suggesting that cubic fields $K$ can be counted in terms of unramified abelian cubic extensions of $L$.
Due to the $3$-adic complications, the formula (Theorem 6.1 of \cite{CM}) in fact involves a sum over characters of the group \begin{equation}\label{def:gb} G_{\mathfrak{b}} := \frac{{\text {\rm Cl}}_{\mathfrak{b}}(L)}{({\text {\rm Cl}}_{\mathfrak{b}}(L))^3}[T] \end{equation} for $\mathfrak{b} \in \mathcal{B}:=\{ (1), (\sqrt{-3}), (3), (3 \sqrt{-3}) \}$. More precisely, in the case considered here where the base field is $\mathbb{Q}$, Theorem 6.1 of \cite{CM} specializes to the following (see also \cite{M})\footnote{Note that we slightly changed the definition of $F(\mathfrak{b},\chi,s)$ given in \cite{CM} when $\mathfrak{b}=(1)$ and $3\mid D$.}:
\begin{theorem}\label{theorem61} If $D\notin\{ 1, -3\}$ we have $$\Phi_D(s)=\dfrac{3}{2c_D}\sum_{\mathfrak{b}\in\mathcal{B}}A_{\mathfrak{b}}(s)\sum_{\chi\in\widehat{G_{\mathfrak{b}}}}\omega_{\chi}(3)F(\mathfrak{b},\chi,s)\;,$$ where $c_D=1$ if $D<0$, $c_D=3$ if $D>0$, the $A_{{\mathfrak b}}(s)$ are given by the following table,
\centerline{
\begin{tabular}{|c||c|c|c|c|} \hline Condition on $D$ & $A_{(1)}(s)$ & $A_{(\sqrt{-3})}(s)$ & $A_{(3)}(s)$ & $A_{(3\sqrt{-3})}(s)$\\ \hline\hline $3\nmid D$ & $3^{-2s}$ & $0$ & $-3^{-2s-1}$ & $1/3$\\ \hline $D\equiv3\pmod9$ & $0$ & $3^{-3s/2}$ & $3^{-s}-3^{-3s/2}$ & $(1-3^{-s})/3$\\ \hline $D\equiv6\pmod9$ & $3^{-2s}$ & $3^{-3s/2}$ & $3^{-s}-3^{-3s/2}$ & $(1-3^{-s})/3$\\ \hline \end{tabular}}
$$F(\mathfrak{b},\chi,s)=\prod_{\leg{-3D}{p}=1}\left(1+\dfrac{\omega_{\chi}(p)}{p^s}\right)\;,$$ where if we write $p\mathbb{Z}_L=\mathfrak{c}\tau(\mathfrak{c})$ (with $\mathfrak{c}$ not necessarily prime), we set\footnote{Note that this fixes a small mistake in the statement of Theorem 6.1 of \cite{CM}, where the condition is described as $\chi(\mathfrak{c}) = \chi(\tau'(\mathfrak{c}))$. The conditions are equivalent whenever $p$ is a cube modulo $\mathfrak{b}$; if $\mathfrak{b} = (3 \sqrt{-3})$ and $p \not \equiv \pm 1 \ ({\text {\rm mod}} \ 9)$, then $p$ and $p \tau'(p)$ are not cubes in ${\text {\rm Cl}}_{\mathfrak{b}}(L)$ for $\tau' \in \{\tau, \tau_2\}$, and so the class of $p$ is not in $G_{\mathfrak{b}}$.} for $p\ne3$: $$\omega_{\chi}(p)=\begin{cases} 2&\text{\quad if $\chi(\tau(\mathfrak c)/\mathfrak c) = 1$}\;,\\ -1&\text{\quad if $\chi(\tau(\mathfrak c)/\mathfrak c) \neq 1$}\;,\end{cases}$$ and for $p=3$: $$\omega_{\chi}(3)=\begin{cases} 1&\text{\quad if $\mathfrak{b}\ne(1)$ or $\mathfrak{b}=(1)$ and $3\nmid D$}\;,\\ 2&\text{\quad if $\mathfrak{b}=(1)$, $3\mid D$, and $\chi(\tau(\mathfrak c)/\mathfrak c) = 1$}\;,\\ -1&\text{\quad if $\mathfrak{b}=(1)$, $3\mid D$, and $\chi(\tau(\mathfrak c)/\mathfrak c) \neq 1$}\;.\end{cases}$$ \end{theorem}
\begin{proof} We briefly explain how this follows from Theorem 6.1 of \cite{CM}. Warning: in the present proof we use the notation of \cite{CM} which conflicts somewhat with that of the present paper. All the definition, proposition, and theorem numbers are those of \cite{CM}.
\begin{itemize} \item We have $k=\mathbb{Q}$ so $[k:\mathbb{Q}]=1$, so $3^{(3/2)[k:\mathbb{Q}]s}=3^{3s/2}$. \item By Definition 3.6 we have ${\mathcal{P}}_3=\{3\}$ if $3\nmid D$ and $\emptyset$ if $3\mid D$, so $\prod_{p\in{\mathcal{P}}_3}p^{s/2}=3^{s/2}$ if $3\nmid D$ and $1$ if $3\mid D$. \item We have $k_z=\mathbb{Q}(\sqrt{-3})$, $K_2=\mathbb{Q}(\sqrt{D})$, and $L=\mathbb{Q}(\sqrt{-3},\sqrt{D})$, so by Lemma 5.4 we have
$|(U/U^3)[T]|=3^{r(U)}$ with $r(U)=2+0-1-\delta_{D>0}$, where $\delta_{D>0}$ equals $1$ if $D>0$ and $0$ otherwise, hence $3^{r(U)}=3/c_D$ with the notation of our theorem. \item By Definition 4.4, if $3\nmid D$ we have $\lceil {\mathcal N}(\mathfrak{b})\rceil=(1,*,3,3^2)$ while if $3\mid D$ we have $\lceil {\mathcal N}(\mathfrak{b})\rceil=(1,3^{1/2},3,3^{3/2})$ for $\mathfrak{b}=((1),(\sqrt{-3}),(3),(3\sqrt{-3}))$ respectively. Note that we use the convention of Definition 4.1, so that $\lceil {\mathcal N}(\mathfrak{b})\rceil$ can be the square root of an integer. \item By Definition 4.4 we have ${\mathcal N}({\mathfrak r}^e(\mathfrak{b}))=1$ unless $\mathfrak{b}=(1)$ and $3\mid D$, in which case ${\mathcal N}({\mathfrak r}^e(\mathfrak{b}))=3^{1/2}$ (where we again use the convention of Definition 4.1). \item By Proposition 2.10 we have $\mathcal{D}_3=\emptyset$ (hence ${\mathfrak d}_3=1$) unless $D\equiv6\pmod9$, in which case $\mathcal{D}_3=\{3\}$ (hence ${\mathfrak d}_3=3$), and for $p\ne3$ we have $p\in\mathcal{D}$ if and only if $\leg{-3D}{p}=1$. In particular ${\mathfrak r}^e(\mathfrak{b})\nmid{\mathfrak d}_3$ if and only if $\mathfrak{b}=(1)$ and $D\equiv3\pmod9$. \item By Definition 4.5, if $3\nmid D$ we have $P_{\mathfrak{b}}(s)=(1,*,-3^{-s},1)$ while if $3\mid D$ we have $P_{\mathfrak{b}}(s)=(1,3^{-s/2},3^{-s/2}-3^{-s},1-3^{-s})$ for $\mathfrak{b}=((1),(\sqrt{-3}),(3),(3\sqrt{-3}))$ respectively\footnote{Note that there is a misprint in Definition 4.5 of \cite{CM}: when $e(\mathfrak p_z/\mathfrak p)=1$ and $b=0$, we must set $Q((p\mathbb{Z}_{K_2})^b,s)=1$ and not $0$. The vanishing of certain terms in the final sum comes from the condition ${\mathfrak r}^e(\mathfrak{b})\mid{\mathfrak d}_3$ of the theorem.}. \item By Lemma 5.6, if $3\nmid D$ we have
$|(Z_{\mathfrak{b}}/Z_{\mathfrak{b}}^3)[T]|=(1,*,3,3)$ while if $3\mid D$ we have
$|(Z_{\mathfrak{b}}/Z_{\mathfrak{b}}^3)[T]|=(1,1,1,3)$ for $\mathfrak{b}=((1),(\sqrt{-3}),(3),(3\sqrt{-3}))$ respectively. \end{itemize} (In the above we put $*$ whenever $3\nmid D$ and $\mathfrak{b}=(\sqrt{-3})$ since this case is impossible.)
The theorem now follows immediately from Theorem 6.1 of \cite{CM}. \end{proof}
For future reference, note the following lemma whose trivial proof is left to the reader:
\begin{lemma}\label{lemomchi} Let $\chi$ be a cubic character, and as above write $p\mathbb{Z}_L=\mathfrak{c}\tau(\mathfrak{c})$. The following conditions are equivalent: \begin{enumerate} \item $\chi(\tau(\mathfrak c)/\mathfrak c) = 1$. \item $\omega_{\chi}(p)=2$. \item $\chi(p\mathfrak{c})=1$. \end{enumerate} If these conditions are not satisfied we have $\omega_{\chi}(p)=-1$. \end{lemma}
To proceed further, we need to compute the size of the groups $G_{\mathfrak{b}}$ and to reinterpret the conditions involving $\chi$ as conditions involving the cubic field associated to each pair $(\chi,\overline{\chi})$.
In what follows we write \begin{equation}\label{def:ha} H_{\mathfrak{a}} := \frac{{\text {\rm Cl}}_{\mathfrak{a}}(D^*)}{({\text {\rm Cl}}_{\mathfrak{a}}(D^*))^3}[1 + \tau]\;, \ \ \ H'_{\mathfrak{a}} := \frac{{\text {\rm Cl}}_{\mathfrak{a}}(D)}{({\text {\rm Cl}}_{\mathfrak{a}}(D))^3}[1 + \tau']\;, \end{equation} where $\tau$ and $\tau'$ are the nontrivial elements of ${\text {\rm Gal}}(\mathbb{Q}(\sqrt{D^*})/\mathbb{Q})$ and ${\text {\rm Gal}}(\mathbb{Q}(\sqrt{D})/\mathbb{Q})$ respectively, and ${\text {\rm Cl}}_{\mathfrak{a}}(N)$ is shorthand for ${\text {\rm Cl}}_{\mathfrak{a}}(\mathbb{Q}(\sqrt{N}))$. This $\tau$ is the restriction of $\tau \in {\text {\rm Gal}}(L/\mathbb{Q})$ (see \eqref{eq:glq}) to $\mathbb{Q}(\sqrt{D^*})$, and we regard $\tau$ as an automorphism of both $L$ and of $\mathbb{Q}(\sqrt{D^*})$ (and $\tau'$ is the restriction of $\tau_2$, but we prefer calling it $\tau'$).
\begin{proposition}\label{prop_g_size} We have \begin{equation}\label{eqn_g1} G_{\mathfrak{b}} \simeq H_{(a)}, \end{equation} where \begin{itemize}
\item $a = 1$ if $\mathfrak{b} = (1)$ or $(\sqrt{-3})$, or if $\mathfrak{b} = (3)$ and $3 | D$; \item $a = 3$ if $\mathfrak{b} = (3)$ or $(3 \sqrt{-3})$, and $3 \nmid D$;
\item $a = 9$ if $\mathfrak{b} = (3 \sqrt{-3})$ and $3 | D$. \end{itemize} \end{proposition}
\begin{remarks}
\begin{enumerate} \item Later we will associate a cubic field of discriminant $-D/3$, $-3D$, or $-27D$ to each pair of conjugate nontrivial characters of $G_{\mathfrak{b}}$. Propositions \ref{prop_g_size} and \ref{prop_count_cf} will show that we obtain all such fields in this manner. \item Propositions \ref{prop_g_size}, \ref{prop_count_cf}, and
\ref{prop_disc_vals} imply equalities among $|H_{(3^n)}|$ for different values of $n$. In particular, $|H_{(3^n)}| = |H_{(9)}|$ for $n > 2$, and if $3 | D$
then $|H_{(3)}| = |H_{(1)}|$ as well. \end{enumerate} \end{remarks}
\begin{proof} For $\mathfrak{b} = (1)$, $G_{\mathfrak{b}}$ is just $\big( {\text {\rm Cl}}(L) / {\text {\rm Cl}}(L)^3 \big)[T]$. This case may be handled with the others, but for clarity we describe it first.
By arguments familiar in the proof of the Scholz reflection principle (see, e.g. \cite{Wa}, p. 191) we have \begin{equation}\label{eqn_scholz} {\text {\rm Cl}}(L) / {\text {\rm Cl}}(L)^3 \simeq {\text {\rm Cl}}(D) / {\text {\rm Cl}}(D)^3 \oplus {\text {\rm Cl}}(D^*) / {\text {\rm Cl}}(D^*)^3 \end{equation} as ${\text {\rm Gal}}(L/\mathbb{Q})$-modules. Since $\tau$ acts trivially on $\mathbb{Q}(\sqrt{D})$, we have $\big({\text {\rm Cl}}(D)/{\text {\rm Cl}}(D)^3\big)[1+\tau]=1$ and hence $\big({\text {\rm Cl}}(D)/{\text {\rm Cl}}(D)^3\big)[T]=1$. Since $\tau_2$ acts nontrivially on ${\text {\rm Cl}}(D^*)/{\text {\rm Cl}}(D^*)^3$, the element $1+\tau_2$ acts as the norm, which annihilates the class group, so $\big({\text {\rm Cl}}(D^*)/{\text {\rm Cl}}(D^*)^3\big)[T]=\big({\text {\rm Cl}}(D^*)/{\text {\rm Cl}}(D^*)^3\big)[1+\tau]$. Finally \begin{equation} \big({\text {\rm Cl}}(L)/{\text {\rm Cl}}(L)^3\big)[T] = \big({\text {\rm Cl}}(D^*) / {\text {\rm Cl}}(D^*)^3\big)[1+\tau] = H_{(1)}\;, \end{equation} as desired (note that since $\tau$ also acts nontrivially, we in fact have $\big({\text {\rm Cl}}(D^*) / {\text {\rm Cl}}(D^*)^3\big)[1+\tau]={\text {\rm Cl}}(D^*) / {\text {\rm Cl}}(D^*)^3$).
Now suppose that $\mathfrak{b}$ is equal to $(\sqrt{-3})$, $(3)$, or $(3 \sqrt{-3})$. As $\tau$ and $\tau_2$ both act nontrivially on $G_{\mathfrak{b}}$, $\tau \tau_2$ acts trivially. Moreover $(1 + \tau \tau_2)/2 \in \mathbb{F}_3[{\text {\rm Gal}}(L/\mathbb{Q})]$ is an idempotent, so the elements of $G_{\mathfrak{b}}$ are those that may be represented by an ideal of the form $\mathfrak{a} \tau \tau_2(\mathfrak{a})$, which is necessarily of the form $\mathfrak{a}' \mathbb{Z}_L$ for an ideal $\mathfrak{a}'$ of $\mathbb{Q}(\sqrt{D^*})$. When $\mathfrak{b} = (1)$ this yields an isomorphism $G_{(1)} \xrightarrow{\sim} H_{(1)}$, following \eqref{eqn_scholz}. When $\mathfrak{b} \ne (1)$, this yields an isomorphism $G_{\mathfrak{b}} \xrightarrow{\sim} H_{\mathfrak{a}'}$, where $\mathfrak{a}' := \mathfrak{b} \cap \mathbb{Z}_{\mathbb{Q}(\sqrt{D^*})}$. In this case the $(1 + \tau)$-invariance is no longer automatic.
For convenience we write $F := \mathbb{Q}(\sqrt{D^*})$ in the remainder of the proof. The ideal $\mathfrak{b} \cap \mathbb{Z}_F$ is simple to determine. If $3 | D$, then $3$ is unramified in $F$, and so $\mathfrak{b} \cap \mathbb{Z}_F$ is equal to $(1)$, $(3)$, $(3)$, $(9)$ for $\mathfrak{b} = (1)$, $(\sqrt{-3})$, $(3)$, $(3 \sqrt{-3})$ respectively, as desired. Moreover, in this case Propositions \ref{prop_count_cf} and \ref{prop_disc_vals} below imply that $H_{(3)} \simeq H_{(1)}$ (there are no cubic fields whose discriminants have $3$-adic valuation $2$).
If $3 \nmid D$, then write $(3) = \mathfrak{p}^2$ in $\mathbb{Z}_F$, and for $\mathfrak{b} = (1)$, $(\sqrt{-3})$, $(3)$, $(3 \sqrt{-3})$, $\mathfrak{b} \cap \mathbb{Z}_F$ is equal to $(1)$, $\mathfrak{p}$, $(3)$, $3 \mathfrak{p}$ respectively. We write down the {\itshape ray class group exact sequence} \begin{equation}\label{eqn_rcges} 1 \rightarrow \mathbb{Z}_F^{\times} / \mathbb{Z}_F^{\mathfrak{a}'} \rightarrow (\mathbb{Z}_F / \mathfrak{a}')^{\times} \rightarrow {\text {\rm Cl}}_{\mathfrak{a}'}(F) \rightarrow {\text {\rm Cl}}(F) \rightarrow 1, \end{equation} where $\mathbb{Z}_F^{\mathfrak{a}'}$ is the subgroup of units congruent to $1$ modulo $\mathfrak{a}'$. We take $3$-Sylow subgroups and take negative eigenspaces for the action of $\tau$ (i.e., write each $3$-Sylow subgroup $A$ as $A^+ \oplus A^-$, where $A^{\pm} := \{x \in A: \tau(x) = x^{\pm 1} \}$, and take $A^-$); these operations preserve exactness. If $A$ is the $3$-Sylow subgroup of ${\text {\rm Cl}}_{\mathfrak{a}'}(F)$, then $H_{\mathfrak{a}'}$ is isomorphic to
$(A/A^3)^- \simeq A^- / (A^-)^3$.
For $\mathfrak{b} = (1)$ or $(3)$, this finishes the proof. For $\mathfrak{b} = (\sqrt{-3})$, the $3$-part of $(\mathbb{Z}_F / \mathfrak{p})^{\times}$ is trivial; hence, the $3$-Sylow subgroups of ${\text {\rm Cl}}(F)$ and ${\text {\rm Cl}}_{\mathfrak{p}}(F)$ are isomorphic, so $H_{\mathfrak{p}} \simeq H_{(1)}$.
For $\mathfrak{b} = (3 \sqrt{-3})$, the $3$-Sylow subgroup of $(\mathbb{Z}_F/\mathfrak{p}^3)^{\times}$ is larger than that of $(\mathbb{Z}_F/3)^{\times}$ by a factor of $3$; however, the same is true of the positive eigenspace and hence not of the negative eigenspace. Therefore $H_{\mathfrak{p}^3} \simeq H_{(3)}$.
\end{proof}
We now state a well-known formula for counting cubic fields in terms of ray class groups.
\begin{proposition}\label{prop_count_cf} If $D$ is a fundamental discriminant other than $1$, $c$ is any nonzero integer, and $H'_{\mathfrak{a}}$ is defined as in \eqref{def:ha}, then we have \begin{equation}
\sum_{d | c} |\mathcal{L}_{D d^2}| = \frac{1}{2} \Big( \big| H'_{(c)} \big| - 1\Big). \end{equation} \end{proposition} \begin{proof} This is a combination of (1.3) and Lemma 1.10 of Nakagawa \cite{N}, and it is also a fairly standard fact from class field theory. The idea is that cyclic cubic extensions of $\mathbb{Q}(\sqrt{D})$ whose conductor divides $(c)$ correspond to subgroups of ${\text {\rm Cl}}_{(c)}(D)$ of index $3$, and that such a cubic extension descends to a cubic extension of $\mathbb{Q}$ if and only if it is in the kernel of $1 + \tau'$. \end{proof}
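For $c$ a power of $3$ (the only case we use), the divisor sum in the proposition telescopes, so the individual counts are recovered by successive differences: $|\mathcal{L}_D| = (|H'_{(1)}|-1)/2$ and $|\mathcal{L}_{D\cdot 9^n}| = (|H'_{(3^n)}|-|H'_{(3^{n-1})}|)/2$ for $n\ge1$. A small sketch (the sample orders $9, 27, 27$ are hypothetical, matching the values arising later in the proof of Proposition \ref{case22}):

```python
def cubic_field_counts(H):
    """Invert the divisor sum of the proposition along the chain
    (1) | (3) | (9) | ... : given H = [|H'_{(1)}|, |H'_{(3)}|, |H'_{(9)}|, ...],
    return [|L_D|, |L_{9D}|, |L_{81D}|, ...] by successive differences."""
    counts = [(H[0] - 1) // 2]
    for n in range(1, len(H)):
        counts.append((H[n] - H[n - 1]) // 2)
    return counts

# Hypothetical sample orders 9, 27, 27: the formula gives four cubic fields
# of discriminant D, nine of discriminant 9D, and none of discriminant 81D.
```

Here `cubic_field_counts([9, 27, 27])` returns `[4, 9, 0]`.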
We also have the following counting result, which relies on the deeper part of the work of Nakagawa \cite{N} and Ohno \cite{Ohno}. \begin{proposition}\label{prop_no} Let $D$ be a fundamental discriminant, and set $r={\text {\rm rk}}_3(D^*)$. \begin{enumerate}[(1.)]
\item Assume that $D < -3$. Then we have
\begin{displaymath}
(|\mathcal{L}_{D^*}|, |\mathcal{L}_{-27D}|) =
\left\{
\begin{array}{ll}
((3^r - 1)/2, 3^r) & {\rm if } \ {\text {\rm rk}}_3(D) = r + 1, \\
((3^r - 1)/2, 0) & {\rm if } \ {\text {\rm rk}}_3(D) = r. \\
\end{array}
\right. \end{displaymath}
In either case, $|\mathcal{L}_3(D)| = (3^{{\text {\rm rk}}_3(D)} - 1)/2$.
\item Assume that $D > 1$. Then we have
\begin{displaymath}
(|\mathcal{L}_{D^*}|, |\mathcal{L}_{-27D}|) =
\left\{
\begin{array}{ll}
((3^r - 1)/2, 0) & {\rm if } \ {\text {\rm rk}}_3(D) = r - 1, \\
((3^r - 1)/2, 3^r) & {\rm if } \ {\text {\rm rk}}_3(D) = r. \\
\end{array}
\right. \end{displaymath}
In either case, $|\mathcal{L}_3(D)| = (3^{{\text {\rm rk}}_3(D) + 1} - 1)/2$.
\end{enumerate}
\end{proposition}
\begin{proof}
The formulas for $|\mathcal{L}_{D^*}|$ follow from class field theory, as these count unramified cyclic cubic extensions of $\mathbb{Q}(\sqrt{-3D})$, which are in bijection
with subgroups of ${\text {\rm Cl}}(-3D)$ of index $3$. It therefore suffices to prove the stated formulas for $|\mathcal{L}_3(D)|$, and these follow from work of Nakagawa \cite{N}.
Recalling the notation in \eqref{def:ha}, Nakagawa proved \cite[Theorem 0.4]{N} that if $D < 0$, then
\begin{equation}
|H'_{(1)}| = |H_{(a)}|, \end{equation}
where $a = 3$ if $3 \nmid D$ and $a = 9$ if $3 | D$; and if $D > 0$ then \begin{equation}
3 |H'_{(1)}| = |H_{(a)}| \end{equation} with the same $a$.
By Proposition \ref{prop_count_cf}, these formulas are
equivalent to the stated formulas for $|\mathcal{L}_3(D)|$. \end{proof}
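The case analysis of Proposition \ref{prop_no} can be cross-checked mechanically: in all four cases the total $|\mathcal{L}_3(D)| = |\mathcal{L}_{D^*}| + |\mathcal{L}_{-27D}|$ reduces to the stated closed form. A sketch (an illustrative transcription of the proposition, not code used in the paper):

```python
def counts(r, rk3D, D_negative):
    """(|L_{D^*}|, |L_{-27D}|) as in Proposition prop_no, where r = rk_3(D^*)
    and rk3D = rk_3(D).  By Scholz's theorem, rk3D is r or r+1 when D < -3,
    and r-1 or r when D > 1."""
    full = (rk3D == r + 1) if D_negative else (rk3D == r)
    return ((3**r - 1) // 2, 3**r if full else 0)
```

In every case the two entries sum to $(3^{{\text {\rm rk}}_3(D)}-1)/2$ for $D<-3$ and to $(3^{{\text {\rm rk}}_3(D)+1}-1)/2$ for $D>1$, as asserted.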
\begin{proposition}\label{prop_disc_vals} If $K$ is a cubic field then $v_3(\textnormal{Disc}(K))$ can only be equal to $0$, $1$, $3$, $4$, or $5$, and these values occur in relative proportions $81/117, \ 27/117, \ 6/117, \ 2/117,$ and $1/117$ when the fields are ordered by increasing absolute value of their discriminant. \end{proposition}
\begin{proof} The proof that $v_3(\textnormal{Disc}(K))$ can take only the given values is classical; see Hasse \cite{Has}. The proportions follow from the proof of the Davenport-Heilbronn theorem. A convenient reference is Section 6.2 of \cite{TT}, where a table of these proportions is given in the context of ``local specifications''; these proportions also appear (in slightly less explicit form) in the earlier related literature. \end{proof}
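As a sanity check, the five proportions do sum to $1$, and no cubic field has $v_3(\textnormal{Disc}(K))=2$. A trivial sketch:

```python
from fractions import Fraction

# v_3(Disc(K)) -> relative proportion, transcribed from the proposition;
# the value 2 never occurs.
proportions = {0: Fraction(81, 117), 1: Fraction(27, 117),
               3: Fraction(6, 117), 4: Fraction(2, 117), 5: Fraction(1, 117)}
```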
Before proceeding to the proof of Theorem \ref{thm_main_cubic} we give the following application:
\begin{proposition}\label{case22}
\begin{enumerate}\item If $D < 0$ and $({\text {\rm rk}}_3(D),{\text {\rm rk}}_3(D^*))=(2,1)$, or $D > 0$ and $({\text {\rm rk}}_3(D),{\text {\rm rk}}_3(D^*))=(1,1)$, there exists a unique cubic field of discriminant $D^*$ and there exist three cubic fields of discriminant $-27D$. \item If $D < 0$ and $({\text {\rm rk}}_3(D),{\text {\rm rk}}_3(D^*))=(2,2)$, or $D > 0$ and $({\text {\rm rk}}_3(D),{\text {\rm rk}}_3(D^*))=(1,2)$, there exist four cubic fields of discriminant $D^*$ and no cubic field of discriminant $-27D$.
In addition, if $3\nmid D$ then $3$ is partially ramified in the four cubic fields, if $D\equiv3\pmod9$ then $3$ is partially split in the four cubic fields, and if $D\equiv6\pmod9$ then $3$ is totally split in one of the four cubic fields and inert in the three others. \end{enumerate}\end{proposition}
\begin{proof} The first statements are special cases of Proposition \ref{prop_no}, the behavior of $3$ when $3\nmid D$ is classical (see \cite{Has}), and when $D\equiv3\pmod9$ the last statement is trivial since $D^*\equiv2\pmod3$.
For the case $D\equiv6\pmod9$ we use Theorem \ref{thm_main_cubic}. Writing out the 3-part of the theorem for the discriminant $D$, we see that \begin{equation}
| \mathcal{L}_{81D} | = 3 \biggl(1 + \sum_E \omega_E(3) \biggr)\;, \end{equation} where the sum ranges over the cubic fields $E$ of discriminant $-D/3$, and $\omega_E(3)$ is equal to $2$ if $3$ is totally split in $E$, and $-1$ if $3$ is inert in $E$. Therefore, if $3$ is totally split in $0$, $1$, $2$, $3$, or
$4$ of these fields then $|\mathcal{L}_{81D}|$ is equal to $-9$, $0$, $9$, $18$, or $27$ respectively. Obviously we can rule out the first possibility.
We first observe that $| \mathcal{L}_{9D}| = 9$, again by Theorem \ref{thm_main_cubic}.
By Proposition \ref{prop_count_cf},
\begin{equation}
| \mathcal{L}_{9D} | = 9 = \frac{1}{2} \Big( \big|H'_{(3)}\big| - \big|H'_{(1)}\big| \Big), \ \ \
| \mathcal{L}_{81D} | = \frac{1}{2} \Big( \big|H'_{(9)}\big| - \big|H'_{(3)}\big| \Big). \end{equation}
By assumption, $|H'_{(1)}| = 9$ and so $|H'_{(3)}| = 27$. Therefore, either $|H'_{(9)}| = 81$ and $| \mathcal{L}_{81D} | = 27$, or
$|H'_{(9)}| = 27$ and $| \mathcal{L}_{81D} | = 0$.
To rule out the former possibility, we again consider the exact sequence \eqref{eqn_rcges}, with $F = \mathbb{Q}(\sqrt{D^*})$ replaced by $\mathbb{Q}(\sqrt{D})$ and $\tau$ replaced by $\tau'$, and take $3$-Sylow subgroups and $(1 + \tau')$-invariants (preserving exactness). The $3$-rank of $(\mathbb{Z}_{\mathbb{Q}(\sqrt{D})}/ (9))^{\times}[1 + \tau']$ is equal to $1$, and so the $3$-rank of ${\text {\rm Cl}}_{(9)}(D)$ is at most $1$ more than that of ${\text {\rm Cl}}(D)$. In other words,
$|H'_{(9)}| \leq 3 |H'_{(1)}|$, but we saw previously that $|H'_{(3)}| = 3 |H'_{(1)}|$, so
$|H'_{(9)}| = 3 |H'_{(1)}|$ and $|\mathcal{L}_{81D}| = 0$ as desired. \end{proof}
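The arithmetic in the first part of the proof above can be checked directly: if $3$ is totally split in $k$ of the four cubic fields of discriminant $-D/3$ (so $\omega_E(3)=2$ there) and inert in the remaining $4-k$ (so $\omega_E(3)=-1$), the displayed formula gives $|\mathcal{L}_{81D}| = 3(1+2k-(4-k)) = 9k-9$. A one-line sketch:

```python
# |L_{81D}| = 3 * (1 + sum of omega_E(3)), when 3 is totally split in k of
# the four cubic fields of discriminant -D/3 and inert in the other 4 - k:
values = [3 * (1 + 2 * k + (-1) * (4 - k)) for k in range(5)]
# values == [-9, 0, 9, 18, 27]; only k = 1 is compatible with |L_{81D}| = 0,
# matching the conclusion that 3 is totally split in exactly one field.
```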
\section{Proof of Theorem \ref{thm_main_cubic}}\label{sec_cm} Theorem \ref{thm_main_cubic} follows from a more general result of Morra and the first author (Theorem 6.1 of \cite{CM} and Theorem 1.6.1 of \cite{M}), given above as Theorem \ref{theorem61} in our case where the base field is $\mathbb{Q}$. To each character of the groups $G_{\mathfrak{b}}$ we use class field theory and Galois theory to uniquely associate a cubic field $E$. Some arithmetic involving discriminants, as well as a comparison to our earlier counting formulas, proves that these fields $E$ range over all fields in $\mathcal{L}_3(D)$. Finally, we apply Theorem \ref{theorem61} to obtain the correct Euler product for each $E$.
This section borrows from the first author's work in \cite{T_no_ep}, which established a particular case of Theorem \ref{thm_main_cubic} for an application to Shintani zeta functions.
\subsection{Construction of the Fields $E$.} We refer to the beginning of Section \ref{sec_cubic_prelim} for our notation and setup. Since the contribution of the trivial characters occurring in Theorem \ref{theorem61} is easy to compute (see below; it has also been computed in \cite{CM}), it remains to handle the {\itshape nontrivial} characters.
We relate these characters to cubic fields by means of the following. \begin{proposition}\label{prop_cubic_bij} For each $\mathfrak{b} \in \mathcal{B}$ there is a bijection between the set of pairs of nontrivial characters $(\chi, \overline{\chi})$ of $G_{\mathfrak{b}}$ and the following sets of cubic fields $E$: \begin{itemize}
\item If $\mathfrak{b} = (1)$ or $(\sqrt{-3})$, or if $\mathfrak{b} = (3)$ and $3 | D$, then all $E \in \mathcal{L}_{D^*}$. \item If $\mathfrak{b} = (3 \sqrt{-3})$, or if $\mathfrak{b} = (3)$ and $3 \nmid D$, then all $E \in \mathcal{L}_3(D) = \mathcal{L}_{D^*} \cup \mathcal{L}_{-27D} $. \end{itemize} Moreover, for each prime $p$ with $\big( \frac{-3D}{p} \big) = 1$, write $p \mathbb{Z}_L = \mathfrak{c} \tau(\mathfrak{c})$ as in \cite{CM}, where $\mathfrak{c}$ is not necessarily prime, and recall from Theorem \ref{theorem61} and Lemma \ref{lemomchi} the definition of $\omega_{\chi}(p)$. Under our bijection, the following conditions are equivalent: \begin{itemize} \item $\chi(p \mathfrak{c}) = 1$. \item The prime $p$ splits completely in $E$. \item $\omega_{\chi}(p) = 2$. \end{itemize} If these conditions are not satisfied then $p$ is inert in $E$ and $\omega_{\chi}(p) = -1$. \end{proposition}
\begin{proof} Define\footnote{We have followed the notation of \cite{CM} where practical, but the notations $G'_{\mathfrak{b}}, \ G''_{\mathfrak{b}}, \ E, \ E_1$ are used for the first time here and do not appear in \cite{CM}.} $G'_{\mathfrak{b}} := {\text {\rm Cl}}_{\mathfrak{b}}(L)/{\text {\rm Cl}}_{\mathfrak{b}}(L)^3$, so that $G'_{\mathfrak{b}}$ is a $3$-torsion group containing $G_{\mathfrak{b}}$. We have a canonical decomposition of $G'_{\mathfrak{b}}$ into four eigenspaces for the actions of $\tau$ and $\tau_2$, and we write \begin{equation}\label{eqn_nci} G'_{\mathfrak{b}} \simeq G_{\mathfrak{b}} \times G''_{\mathfrak{b}}, \end{equation} where $G''_{\mathfrak{b}}$ is the direct sum of the three eigenspaces other than $G_{\mathfrak{b}}$. Note that $G''_{\mathfrak{b}}$ will contain the classes of all principal ideals generated by rational integers coprime to $3$; any such class in the kernel of $T$ will necessarily be in ${\text {\rm Cl}}_{\mathfrak{b}}(L)^3$.
For any nontrivial character $\chi$ of $G_{\mathfrak{b}}$, let $B$ be its kernel, which has index $3$. We extend $\chi$ to a character $\chi'$ of $G'_{\mathfrak{b}}$ by setting $\chi'(G''_{\mathfrak{b}}) = 1$, and write $B' := \textnormal{Ker}(\chi') = B \times G''_{\mathfrak{b}} \subseteq G'_{\mathfrak{b}}$, so that $B'$ has index $3$ in $G'_{\mathfrak{b}}$ and is uniquely determined by $\mathfrak{b}$ and $\chi$. By class field theory, there is a unique abelian extension $E_1/L$ for which the Artin map induces an isomorphism $G'_{\mathfrak{b}}/B' \simeq {\text {\rm Gal}}(E_1/L)$, and it must be cyclic cubic, since $G'_{\mathfrak{b}}/B'$ is. The uniqueness forces $E_1$ to be Galois over $\mathbb{Q}$, since the groups $G'_{\mathfrak{b}}$ and $B'$ are preserved by $\tau$ and $\tau_2$ and hence by all of ${\text {\rm Gal}}(L/\mathbb{Q})$. For each fixed $\mathfrak{b}$, we obtain a unique such field $E_1$ for each distinct pair of characters $\chi, \overline{\chi}$, but we may obtain the same field $E_1$ for different values of $\mathfrak{b}$.
We have ${\text {\rm Gal}}(E_1/\mathbb{Q}) \simeq S_3 \times C_2$: $\tau$ and $\tau_2 \in {\text {\rm Gal}}(L/\mathbb{Q})$ both act nontrivially (elementwise) on $G_{\mathfrak{b}}$ and preserve $B$, and hence act nontrivially on $G_{\mathfrak{b}}/B$ and $G'_{\mathfrak{b}}/B'$. Under the Artin map this implies that $\tau$ and $\tau_2$ both act nontrivially on ${\text {\rm Gal}}(E_1/L)$ by conjugation, so that $\tau \tau_2$ commutes with ${\text {\rm Gal}}(E_1/L)$. As $\tau \tau_2$ fixes $\mathbb{Q}(\sqrt{-3D})$, this implies that $E_1$ contains a cubic extension $E/\mathbb{Q}$ with quadratic resolvent $\mathbb{Q}(\sqrt{-3D})$, which is unique up to isomorphism. Any prime $p$ which splits in $\mathbb{Q}(\sqrt{-3D})$ must either be inert or totally split in $E$. \\ \\ We now prove that the fields $E$ which occur in this construction are those described by the proposition. Since the quadratic resolvent of any $E$ is $\mathbb{Q}(\sqrt{-3D})=\mathbb{Q}(\sqrt{D^*})$ with $D^*$ fundamental, we have $\textnormal{Disc}(E) = r^2D^* $ for some integer $r$ divisible only by $3$ and prime divisors of $D^*$.
No prime $\ell > 3$ can divide $r$, because $\ell^3$ cannot divide the discriminant of any cubic field. Similarly $2$ cannot divide $r$,
since if $2 | D$ then $4 | D$, but $16$ cannot divide the discriminant of a cubic field. Therefore $r$ must be a power of $3$. Since the $3$-adic valuation of a cubic field discriminant is never larger than $5$, $r$ must be $1$, $3$, or $9$, and by Proposition \ref{prop_disc_vals} we cannot have $r=3$ if $3\nmid D^*$, in other words if $3\mid D$. It follows that $E$ must have discriminant $D^*$, $-27D$, or $-243D$.
We further claim that $\textnormal{Disc}(E) \neq -243D$. To see this, we apply the formula \begin{equation}\label{eqn_disce} \textnormal{Disc}(E_1) = \pm \textnormal{Disc}(L)^3 {\mathcal N}_{L / \mathbb{Q}} ( \mathfrak{d}(E_1/L)) \end{equation} and bound $v_3(\textnormal{Disc}(E_1))$. We see that $v_3(\textnormal{Disc}(L)) = 2$ (by direct computation, or by a formula similar to \eqref{eqn_disce}). Moreover, the conductor of $E_1/L$ divides $(3 \sqrt{-3})$, and therefore $\mathfrak{d}(E_1/L)$ divides $(27)$. The norm of this ideal is $3^{12}$, and putting all of this together we see that $v_3(\textnormal{Disc}(E_1)) \leq 18$.
We also have \begin{equation}\label{eqn_disce2} \textnormal{Disc}(E_1) = \pm \textnormal{Disc}(E)^4 {\mathcal N}_{E / \mathbb{Q}} ( \mathfrak{d}(E_1/E)), \end{equation} so that $v_3(\textnormal{Disc}(E)) < \frac{18}{4}$, and in particular $v_3(\textnormal{Disc}(E)) \neq 5$ as desired.
Therefore, in all cases $E$ must have discriminant $D^*$ or $-27D$. Moreover, similar comparisons of \eqref{eqn_disce} and \eqref{eqn_disce2} show that if
$\mathfrak{b} \in \{(1), (\sqrt{-3}) \}$, or if $\mathfrak{b} = (3)$ and $3 | D$, then $E$ must have discriminant $D^*$.
We have therefore associated a unique $E$ to each pair $(\chi, \overline{\chi})$ as in the proposition, and it follows from Propositions \ref{prop_count_cf} and \ref{prop_g_size} that we obtain all such $E$ in this manner. \\ \\ Finally, we prove the second part of the proposition. The statements concerning $\omega_{\chi}(p)$ and $\chi(p\mathfrak{c})$ follow from Lemma \ref{lemomchi}. We show the equivalence of the first and second statement.
We extend $\chi$ to a character $\chi'$ of $G'_{\mathfrak{b}}$ as defined previously, so that $\chi'(p)$ is defined and equal to 1. Therefore, $\chi(p \mathfrak{c}) = 1$ if and only if $\chi'(\mathfrak{c}) = 1$, and we must show that this is true if and only if $p$ splits completely in $E$.
Suppose first that $\mathfrak{c}$ is prime in $\mathbb{Z}_L$. By class field theory, $\chi'(\mathfrak{c}) = 1$ if and only if $\mathfrak{c}$ splits completely in $E_1/L$, which happens if and only if $(p)$ splits into six ideals in $E_1$, which happens if and only if $p$ splits completely in $E$.
Suppose now that $\mathfrak{c} = \mathfrak{p} \tau \tau_2(\mathfrak{p})$ in $L$. Since $\mathfrak{p}$ and $\tau \tau_2(\mathfrak{p})$ represent the same element of $G'_{\mathfrak{b}}/B'$ they have the same Frobenius element in $E_1/L$, hence since $\chi'$ is a cubic character it follows that $\chi'(\mathfrak{c})=1$ if and only if $\chi'(\mathfrak{p})=1$. By class field theory this is true if and only if $\mathfrak{p}$ splits completely in $E_1/L$, in which case $(p)$ splits into twelve ideals in $E_1$; for this it is necessary and sufficient that $p$ split completely in $E$.\end{proof}
\subsection{Putting it all Together} Applying Proposition \ref{prop_cubic_bij} we may regard the formula of Theorem \ref{theorem61} as a sum over cubic fields. We now divide into the cases $3 \nmid D$ and $D \equiv 3, 6 \ ({\text {\rm mod}} \ 9)$. The main terms, corresponding to the trivial characters of $G_{\mathfrak{b}}$, contribute the term \begin{equation} \dfrac{3}{2c_D}\sum_{\mathfrak{b}\in\mathcal{B}}\omega_1(3)A_{\mathfrak{b}}(s)=\dfrac{1}{2c_D}M_1(s)\prod_{\leg{-3D}{p}=1}\left(1+\dfrac{2}{p^s}\right) \end{equation} of Equation \ref{eqn_main_cubic}. These terms have also been given in \cite{CM} and \cite{M}.
It remains to handle the contribution of the nontrivial characters.
Assume first that $3 \nmid D$. In this case by Theorem \ref{theorem61} \begin{multline} c_D\Phi_D(s)=\frac{3}{2} \cdot \Bigg[ 3^{-2s}\sum_{\chi \in \widehat{G_{(1)}}} F((1), \chi, s) - 3^{-2s-1} \sum_{\chi \in \widehat{G_{(3)}}} F((3), \chi, s)\\ + \frac{1}{3}\sum_{\chi \in \widehat{G_{(3 \sqrt{-3})}}} F((3\sqrt{-3}), \chi, s) \Bigg]\;, \end{multline} and the calculations above show that \begin{equation} F(\mathfrak{b}, \chi, s) = \prod_{ \big( \frac{-3D}{p} \big) = 1} \bigg(1 + \frac{\omega_{E}(p)}{p^s} \bigg)\;, \end{equation} where $E$ is the cubic field associated to $\chi$. Each field $E$ of discriminant $D^*$ contributes twice (each character yields the same field as its inverse) to each of the three sums above, and each field of discriminant $-27D$ contributes twice to each of the last two. We obtain a contribution of $1 + 2 \cdot 9^{-s}$ for each field of discriminant $D^* = -3D$, and of $1 - 9^{-s}$ for each field of discriminant $-27D$. This is the assertion of the theorem.
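For the reader's convenience, the coefficient computation is as follows. Writing $t = 3^{-s}$, a field of discriminant $D^*$ is counted twice in each of the three sums, so its total coefficient is \begin{equation*} \frac{3}{2} \cdot 2 \left( t^2 - \frac{t^2}{3} + \frac{1}{3} \right) = 2t^2 + 1 = 1 + 2 \cdot 9^{-s}, \end{equation*} while a field of discriminant $-27D$ is counted twice in the last two sums only, giving \begin{equation*} \frac{3}{2} \cdot 2 \left( -\frac{t^2}{3} + \frac{1}{3} \right) = 1 - t^2 = 1 - 9^{-s}. \end{equation*}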
Assume now that $D \equiv 3 \ ({\text {\rm mod}} \ 9)$. Then \begin{multline} c_D\Phi_D(s)=\frac{3}{2} \cdot \Bigg[ 3^{-3s/2}\sum_{\chi \in \widehat{G_{(\sqrt{-3})}}} F((\sqrt{-3}), \chi, s) + \big( 3^{-s} - 3^{-3s/2} \big) \sum_{\chi \in \widehat{G_{(3)}}} F((3), \chi, s)\\
+ \frac{1}{3} \big(1 - 3^{-s} \big) \sum_{\chi \in \widehat{G_{(3 \sqrt{-3})}}} F((3\sqrt{-3}), \chi, s) \Bigg]\;. \end{multline} The first two sums are over fields of discriminant $D^* = -D/3$, and the last sum also includes fields of discriminant $-27D$. We obtain a contribution of $1 + 2 \cdot 3^{-s}$ for each field of discriminant $D^*$, and of $1 - 3^{-s}$ for each field of discriminant $-27D$, in accordance with the theorem.
Finally, assume that $D \equiv 6 \ ({\text {\rm mod}} \ 9)$. Then \begin{multline} c_D\Phi_D(s)=\frac{3}{2} \cdot \Bigg[ 3^{-2s} \sum_{\chi \in \widehat{G_{(1)}}} \omega_{\chi}(3) F((1), \chi, s) + 3^{-3s/2}\sum_{\chi \in \widehat{G_{(\sqrt{-3})}}} F((\sqrt{-3}), \chi, s) + \\ \big( 3^{-s} - 3^{-3s/2} \big) \sum_{\chi \in \widehat{G_{(3)}}} F((3), \chi, s) + \frac{1}{3} \big(1 - 3^{-s} \big) \sum_{\chi \in \widehat{G_{(3 \sqrt{-3})}}} F((3\sqrt{-3}), \chi, s) \Bigg]. \end{multline}
The first three sums are over fields of discriminant $D^* = -D/3$, and the last sum also includes fields of discriminant $-27D$. For the same reasons as discussed for $p \neq 3$ we have $\omega_{\chi}(3)=\omega_{E}(3)$, where $E$ is the cubic field associated to $\chi$.
We thus obtain a contribution of $1 + 2 \cdot 3^{-s} + 3 \omega_{E}(3) \cdot 3^{-2s}$ for each field of discriminant $D^*$, and of $1 - 3^{-s}$ for each field of discriminant $-27D$, in accordance with the theorem.
\section{Numerical Examples}\label{sec_examples}
We present some numerical examples of our main results.
Suppose first that $D < 0$.
If $({\text {\rm rk}}_3(D),{\text {\rm rk}}_3(D^*))=(0,0)$, then there are no cubic fields of discriminant $D^*$ or $-27D$. \begin{equation} \Phi_{-4}(s)=\dfrac{1}{2}\left(1+\dfrac{2}{3^{2s}}\right)\prod_{\leg{12}{p}=1}\left(1+\dfrac{2}{p^s}\right). \end{equation}
If $({\text {\rm rk}}_3(D),{\text {\rm rk}}_3(D^*))=(1,0)$, there are no cubic fields of discriminant $D^*$ and a unique cubic field of discriminant $-27D$.
\begin{equation} \Phi_{-255}(s) =\dfrac{1}{2}\left(1+\dfrac{2}{3^s}+\dfrac{6}{3^{2s}}\right)\prod_{\leg{6885}{p}=1}\left(1+\dfrac{2}{p^s}\right)\\ +\left(1-\dfrac{1}{3^s}\right)\prod_p\left(1+\dfrac{\omega_{L6885}(p)}{p^s}\right)\;,\end{equation} where $L6885$ is the cubic field determined by $x^3 - 12x - 1 = 0$.
If $({\text {\rm rk}}_3(D),{\text {\rm rk}}_3(D^*))=(1,1)$, there is a unique cubic field of discriminant $D^*$ and no cubic fields of discriminant $-27D$.
\begin{equation} \Phi_{-107}(s)=\dfrac{1}{2}\left(1+\dfrac{2}{3^{2s}}\right)\prod_{\leg{321}{p}=1}\left(1+\dfrac{2}{p^s}\right)\\ +\left(1+\dfrac{2}{3^{2s}}\right)\prod_p\left(1+\dfrac{\omega_{L321}(p)}{p^s}\right), \end{equation} where $L321$ is the field determined by $x^3-x^2-4x+1$.
If $({\text {\rm rk}}_3(D),{\text {\rm rk}}_3(D^*))=(2,1)$, there is a unique cubic field of discriminant $D^*$ and three cubic fields of discriminant $-27D$.
\begin{align*} \Phi_{-8751}(s)=&\dfrac{1}{2}\left(1+\dfrac{2}{3^s}+\dfrac{6}{3^{2s}}\right)\prod_{\leg{26253}{p}=1}\left(1+\dfrac{2}{p^s}\right)\\ & +\left(1+\dfrac{2}{3^s}-\dfrac{3}{3^{2s}}\right)\prod_{p\ne3}\left(1+\dfrac{\omega_{L2917}(p)}{p^s}\right) +\left(1-\dfrac{1}{3^s}\right)\sum_{1\le i\le 3}\prod_p\left(1+\dfrac{\omega_{L236277_i}(p)}{p^s}\right), \end{align*} where the four fields above are defined as follows:
\centerline{
\begin{tabular}{|c||c|c|} \hline Cubic field & Discriminant & Defining polynomial\\ \hline\hline $L2917$&$8751/3$&$x^3-x^2-13x+20$\\ \hline $L236277_1$&$3^3\cdot8751$&$x^3-138x+413$\\ \hline $L236277_2$&$3^3\cdot8751$&$x^3-129x-532$\\ \hline $L236277_3$&$3^3\cdot8751$&$x^3-90x-171$\\ \hline \end{tabular}}
If $({\text {\rm rk}}_3(D),{\text {\rm rk}}_3(D^*))=(2,2)$, there are four cubic fields of discriminant $D^*$ and none of discriminant $-27D$. Recall from Proposition \ref{case22} above that if $D\equiv6\pmod9$, $3$ is totally split in one of them and inert in the other three, so one of the cubic fields of discriminant $D^*$, which we include first, is distinguished by the fact that $3$ is totally split.
\begin{equation} \Phi_{-34603}(s) = \dfrac{1}{2}\left(1+\dfrac{2}{3^{2s}}\right)\prod_{\leg{103809}{p}=1}\left(1+\dfrac{2}{p^s}\right) +\left(1+\dfrac{2}{3^{2s}}\right)\sum_{1\le i\le 4}\prod_p\left(1+\dfrac{\omega_{L103809_i}(p)}{p^s}\right); \end{equation} \centerline{
\begin{tabular}{|c||c|c|} \hline Cubic field & Discriminant & Defining polynomial\\ \hline\hline $L103809_1$&$3\cdot34603$&$x^3-x^2-84x+261$\\ \hline $L103809_2$&$3\cdot34603$&$x^3-x^2-64x+91$\\ \hline $L103809_3$&$3\cdot34603$&$x^3-x^2-92x-204$\\ \hline $L103809_4$&$3\cdot34603$&$x^3-x^2-62x-15$\\ \hline \end{tabular}}
The case $D > 1$ is very similar, so we will only give one example. If (for example) $({\text {\rm rk}}_3(D),{\text {\rm rk}}_3(D^*))=(1,1)$ there is a unique cubic field of discriminant $D^*$ and three cubic fields of discriminant $-27D$.
\begin{align*} 3\Phi_{321}(s)&=\dfrac{1}{2}\left(1+\dfrac{2}{3^s}+\dfrac{6}{3^{2s}}\right)\prod_{\leg{-963}{p}=1}\left(1+\dfrac{2}{p^s}\right)\\ &\phantom{=}+\left(1+\dfrac{2}{3^s}-\dfrac{3}{3^{2s}}\right)\prod_{p\ne3}\left(1+\dfrac{\omega_{LM107}(p)}{p^s}\right) +\left(1-\dfrac{1}{3^s}\right)\sum_{1\le i\le 3}\prod_p\left(1+\dfrac{\omega_{LM8667_i}(p)}{p^s}\right), \end{align*} where the indicated cubic fields are given as follows:
\centerline{
\begin{tabular}{|c||c|c|} \hline Cubic field & Discriminant & Defining polynomial\\ \hline\hline $LM107$&$-321/3$&$x^3-x^2+3x-2$\\ \hline $LM8667_1$&$-3^3\cdot321$&$x^3+18x-45$\\ \hline $LM8667_2$&$-3^3\cdot321$&$x^3+6x-17$\\ \hline $LM8667_3$&$-3^3\cdot321$&$x^3+15x-28$\\ \hline \end{tabular}}
\section{Counting $S_3$-sextic fields of bounded discriminant\protect\footnote{We thank the anonymous referee of \cite{TT6} for (somewhat indirectly) suggesting this application.}}\label{sec_computations}
Theorem \ref{thm_main_cubic} naturally lends itself to counting $S_3$-sextic fields, that is, fields which are Galois over $\mathbb{Q}$ with Galois group $S_3$. For any such $\widetilde{K}$ with cubic and quadratic subfields $K$ and $k$ respectively, we have the formula \begin{equation} \textnormal{Disc}(\widetilde{K}) = \textnormal{Disc}(K)^2 \textnormal{Disc}(k) = \textnormal{Disc}(k)^3 f(K)^4. \end{equation} Let $N^{\pm}(X; S_3)$ denote the number of $S_3$-sextic fields $\widetilde{K}$ with $0 < \pm \textnormal{Disc}(\widetilde{K}) < X$. Then Theorem \ref{thm_main_cubic} may be used to compute $N^{\pm}(X; S_3)$: iterate over fundamental discriminants $D$ with $0 < \pm D < X^{1/3}$; compute the Dirichlet series $\Phi_D(s)$ to $f(K) < (X/D^3)^{1/4}$ and evaluate its partial sums; finally, sum the results.
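To make the counting procedure concrete, here is a small illustrative sketch (in Python, not the authors' actual PARI/GP implementation). It assumes a hypothetical precomputed table mapping each fundamental discriminant $d = \textnormal{Disc}(k)$ to the list of conductors $f(K)$ of the cubic fields $K$ (up to isomorphism) with quadratic resolvent $k$; by the displayed formula, each such pair corresponds to an $S_3$-sextic field of discriminant $d^3 f^4$, whose sign is that of $d$:

```python
# Illustrative sketch only: count S_3-sextic fields Kt with |Disc(Kt)| <= X,
# given a hypothetical table
#   {fundamental discriminant d = Disc(k): [conductors f(K) of the cubic
#    fields K, up to isomorphism, with quadratic resolvent k]}.
# By Disc(Kt) = Disc(k)^3 * f(K)^4, a field is counted when
# |d|^3 * f^4 <= X, and Disc(Kt) has the same sign as d.

def count_s3_sextic(X, conductors_by_disc):
    pos = neg = 0
    for d, conductors in conductors_by_disc.items():
        if abs(d) ** 3 > X:          # then |d|^3 * f^4 > X for every f >= 1
            continue
        fmax = (X / abs(d) ** 3) ** 0.25
        n = sum(1 for f in conductors if f <= fmax)
        if d > 0:
            pos += n
        else:
            neg += n
    return pos, neg                  # (count with Disc > 0, Disc < 0)

# Made-up toy table, purely to exercise the bookkeeping:
print(count_s3_sextic(10**4, {-3: [1, 9], 5: [4]}))  # -> (0, 1)
```

The real computation instead streams Belabas's tables and accumulates the partial sums of $\Phi_D(s)$, but the discriminant bookkeeping is the same.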
We implemented this algorithm in PARI/GP \cite{pari}, which can easily handle the various quantities occurring in \eqref{eqn_main_cubic}. For a list of cubic fields we relied on Belabas's \url{cubic} program \cite{Bel}.
We used the GP calculator, which has the advantage of simplicity. The disadvantage of this approach is that we were obliged to read the output of \url{cubic} from disk, limiting us by available disk space. One could probably compute $N^{\pm}(X; S_3)$ to at least $X = 10^{27}$ by directly implementing \eqref{eqn_main_cubic} within Belabas's code; alternatively, Belabas has informed us that an implementation of \url{cubic} within PARI/GP may be forthcoming. In any case we leave further computations for later.
This approach dictated a slight variant of the algorithm described above: \begin{itemize} \item We parsed Belabas's output into files readable by PARI/GP, using a Java program written for this purpose. \item A table of all cubic fields $K$ with $0 < \pm \textnormal{Disc}(K) < Y$ for some $Y$ contains all fields in ${\mathcal L}_3(D)$ with $0 < \mp \textnormal{Disc}(K) < Y/27$, allowing us to choose any $X \leq 3^{-9} Y^3$. \item Processing each cubic field in turn, and ignoring those not in $\mathcal{L}_3(D)$ for some fundamental discriminant $D$
with $|D| \leq X^{1/3}$, we computed the associated Dirichlet series to a length of $\lfloor (X/D^3)^{1/4} \rfloor$, and its partial sum (less the $\frac{1}{2}$ term for $f(K) = 1$), and maintained a running total of the results. \item Finally, we added the main term of \eqref{eqn_main_cubic} for each $D$ with $0 < \pm D < X^{1/3}$. \end{itemize} Our algorithm would also allow for efficient computation of the $\Phi_D(s)$, given a {\itshape sorted} version of Belabas's output.
The implementation posed no particular difficulties, and our PARI/GP source code is available on the second author's website.\footnote{To replicate our data for large $X$ one must also install and run Belabas's \url{cubic} program, as well as our parser. With our source code we have also made available a modestly sized table of cubic fields, with which our PARI/GP program alone suffices to replicate our data for smaller values of $X$.} On a 2.1 GHz MacBook our computation took approximately 3 hours for negative discriminants $< 3 \cdot 10^{23}$ and 10 hours for positive discriminants $< 10^{23}$; it is to be expected from the shape of \eqref{eqn_main_cubic} that counts of negative discriminants may be computed more efficiently than positive, even though there are more of them.
This brings us to our data: \\ \\ \begin{center}
\begin{tabular}{c | c} $X$ & $N_6^+(X; S_3)$ \\ \hline $10^{12}$ & 690\\ $10^{13}$ & 1650 \\ $10^{14}$& 3848 \\ $10^{15}$& 8867 \\ $10^{16}$& 20062 \\ $10^{17}$& 45054 \\ $10^{18}$& 100335 \\ $10^{19}$& 222939 \\ $10^{20}$& 492335 \\ $10^{21}$& 1083761 \\ $10^{22}$& 2378358 \\ $10^{23}$& 5207310 \\ - & - \\
\end{tabular} \ \ \ \ \ \ \ \
\begin{tabular}{c | c} $X$ &$N_6^-(X; S_3)$\\ \hline $10^{12}$ & 2809\\ $10^{13}$ & 6315\\ $10^{14}$& 14121\\ $10^{15}$& 31276\\ $10^{16}$& 68972\\ $10^{17}$& 151877\\ $10^{18}$& 333398\\ $10^{19}$& 729572\\ $10^{20}$& 1592941\\ $10^{21}$& 3470007\\ $10^{22}$& 7550171\\ $10^{23}$& 16399890\\ $3 \cdot 10^{23}$& 23738460\\
\end{tabular} \end{center} \vskip 0.2in This data may be compared to known theoretical results on $N^{\pm}(X; S_3)$. It was proved by Belabas-Fouvry \cite{BF} and Bhargava-Wood \cite{BW} (independently) that $N^{\pm}(X; S_3) \sim B^{\pm} X^{1/3}$ for explicit constants $B^{\pm}$ (with $B^- = 3 B^+$), and in \cite{TT6} Taniguchi and the second author obtained a power saving error term.
The authors of \cite{TT6} also computed tables of $N^{\pm}(X; S_3)$ up to $X = 10^{18}$ using a different method, allowing us to double-check our work here. Based on this data and on \cite{R, BST, TT}, they guessed the existence of a secondary term of order $X^{5/18}$, and found that the data further suggested the existence of additional, unexplained lower order terms. For more on this we defer to \cite{TT6}.
\end{document}
Review | Open | Published: 22 March 2016
Hyperoxia toxicity after cardiac arrest: What is the evidence?
Jean-François Llitjos1,2,
Jean-Paul Mira1,2,
Jacques Duranteau3,4 &
Alain Cariou1,2
Annals of Intensive Care, volume 6, Article number: 23 (2016)
This review gives an overview of current knowledge on the pathophysiology of hyperoxia and examines experimental and human evidence of hyperoxia effects after cardiac arrest. Oxygen plays a pivotal role in critical care management as a lifesaving therapy, compensating for the imbalance between oxygen requirements and supply. However, growing evidence supports the hypothesis of toxicity mediated by reactive oxygen species overproduction during hyperoxia, which exacerbates organ failure through various oxidative cellular injuries. In the cardiac arrest context, evidence on the effects of hyperoxia on outcome is fairly conflicting. Although prospective data are lacking, retrospective studies and meta-analyses suggest that hyperoxia could be associated with increased mortality. However, these data originate from retrospective, heterogeneous and inconsistent studies presenting various biases that are detailed in this review. Therefore, after an original and detailed analysis of all experimental and clinical studies, we herein provide new ideas and concepts that could contribute to improving knowledge of oxygen toxicity and help in designing further prospective randomized controlled trials on this topic. Until such data are available, the strategy recommended by international guidelines on cardiac arrest (i.e., targeting an oxyhemoglobin saturation of 94–98 %) should be applied in order to avoid both deleterious hypoxia and potentially harmful hyperoxia.
Oxygen has a pivotal role in medicine as a lifesaving therapy in many emergency situations. In order to avoid hypoxia-related mortality and morbidity, oxygen is delivered liberally in acute care situations, even if hypoxia is not confirmed. However, as with every medication, experimental and clinical studies have highlighted potentially harmful physiological side effects of high oxygen tension that could worsen outcome [1]. Cardiac arrest is the archetypal situation, given the urgent need for rapid oxygen delivery to the organs. Nevertheless, this global ischemia–reperfusion syndrome produces high amounts of reactive oxygen species, a production that could be significantly increased by high oxygen tension. Thus, hyperoxia in the post-resuscitation context of cardiac arrest is an important topic. Despite several experimental and clinical studies, this subject remains the center of conflicting results, with an insufficient body of evidence. We screened the PubMed, Embase and Cochrane databases using the following keywords in various combinations: "cardiac arrest," "oxygen," "oxidative stress" and "hyperoxia." Pediatric data concerning oxygen management during and after cardiac arrest were neither listed nor analyzed, given the differences in etiologies, management and outcome between adult and pediatric patients resuscitated from cardiac arrest. We herein provide an overview of the present knowledge of the pathophysiological effects of oxygen, review experimental and clinical studies and develop some concepts that could be beneficial for further studies.
Pathophysiology of oxygen and hyperoxia
Hyperoxia occurs when the partial pressure of intraalveolar oxygen exceeds that of normal breathing conditions, leading to hyperoxemia, which is defined as an increased arterial O2 partial pressure. The oxygen concentration in blood depends on three main parameters and is the sum of the dissolved oxygen and the hemoglobin-bound oxygen, as defined by the following equation:
$$\begin{aligned} \text{OBC} &= [\text{Hemoglobin-bound oxygen}] + [\text{Dissolved oxygen}] \\ \text{OBC} &= (1.34 \times [\text{Hb}] \times [\text{SaO}_2]) + (\text{Kh} \times [\text{PaO}_2]) \end{aligned}$$
where OBC: oxygen blood concentration (mL of oxygen per deciliter of blood); 1.34: oxygen-carrying capacity of hemoglobin (mL of oxygen per gram of hemoglobin); [Hb]: hemoglobin concentration (g/dL); [SaO2]: hemoglobin oxygen saturation; Kh: solubility coefficient of oxygen (mL of oxygen per deciliter per mmHg); [PaO2]: arterial oxygen partial pressure (mmHg).
At normal pH and normal temperature, breathing increased O2 raises the amount of dissolved O2 without modifying the close to 100 % hemoglobin saturation, given the sigmoid shape of the oxygen–hemoglobin dissociation curve (Fig. 1). While Henry's law states a linear relation between the arterial oxygen partial pressure and the amount of dissolved oxygen, temperature is a main determinant of the solubility coefficient Kh. For instance, using the work by Battino et al. [2] and the Van't Hoff equation, Kh (37 °C) = 0.0031 whereas Kh (33 °C) = 0.0084. Therefore, hypothermia increases the amount of oxygen dissolved in the blood (Fig. 1). Moreover, several factors affect the oxyhemoglobin dissociation curve, such as temperature, pH (Bohr effect), PaCO2 (Haldane effect) and 2,3-bisphosphoglyceric acid [3]. A leftward shift of the curve, meaning a higher oxygen affinity of hemoglobin, is induced by hypothermia, hypocarbia and alkalosis. This arterial accumulation of dissolved oxygen is supposed to exert deleterious effects through various mechanisms that are intricately linked: reactive oxygen species (ROS) overproduction, pulmonary toxicity, and cardiac and neurological effects.
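To illustrate the relative size of the dissolved term, the following sketch evaluates the OBC equation with the two solubility coefficients quoted above. The patient values (Hb = 15 g/dL, SaO2 = 1.0, PaO2 = 100 mmHg) and the per-deciliter unit convention (Hb in g/dL, Kh in mL O2/dL/mmHg) are illustrative assumptions, not data from the review:

```python
def oxygen_blood_content(hb_g_dl, sao2, pao2_mmhg, kh):
    """OBC in mL O2 per dL of blood: hemoglobin-bound plus dissolved
    oxygen, following the equation given in the text."""
    bound = 1.34 * hb_g_dl * sao2    # oxygen carried by hemoglobin
    dissolved = kh * pao2_mmhg       # oxygen dissolved in plasma
    return bound + dissolved

KH_37 = 0.0031   # solubility coefficient at 37 degrees C (from the text)
KH_33 = 0.0084   # solubility coefficient at 33 degrees C (from the text)

# Hypothetical patient: Hb = 15 g/dL, SaO2 = 1.0, PaO2 = 100 mmHg
obc_37 = oxygen_blood_content(15, 1.0, 100, KH_37)   # roughly 20.4 mL/dL
obc_33 = oxygen_blood_content(15, 1.0, 100, KH_33)   # roughly 20.9 mL/dL

print(round(KH_33 / KH_37, 1))   # -> 2.7 (the 2.7-fold increase of Fig. 1)
```

Note that the hemoglobin-bound term dominates at normal PaO2; the dissolved term, and hence the effect of hyperoxia and hypothermia on it, becomes relatively more important as PaO2 rises.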
Hypothermia increases the quantity of dissolved oxygen in blood. (a, b) The gray area under the curve represents the amount of hemoglobin-bound oxygen, and the black area under the curve represents the quantity of dissolved oxygen. While a 33 °C temperature is associated with a leftward shift of the oxyhemoglobin curve compared to a 37 °C central temperature, hypothermia (b) increases the quantity of oxygen dissolved in blood. For instance, there is a 2.7-fold increase in the dissolved quantity of oxygen between 37 and 33 °C
ROS are unstable and highly reactive molecules, participating in a broad spectrum of cellular events such as the production of inflammation mediators, intracellular messengers or anti-infectious effectors. In mammals, ROS can be of endogenous or exogenous origin (radiation, pollution, drugs, medication, smoking). Under physiological conditions, ROS are produced by the respiratory chain in mitochondria or by enzymatic reactions.
The toxicity of ROS consists essentially in lipid peroxidation, protein oxidation and DNA damage. Lipid peroxidation, when affecting intra- or extracellular membranes, leads to enzyme inactivation, thiol oxidation and mitochondrial respiratory chain inhibition [4]. Protein oxidation confers resistance to proteolysis, mostly by aggregation [5]. The toxicity of ROS with regard to DNA is dominated by cell cycle modification, apoptosis and carcinogenesis [6].
The defenses developed to minimize, prevent and repair injuries caused by oxidative stress include enzymatic ROS removal (superoxide dismutase, catalase and glutathione peroxidase) and non-enzymatic quenching of ROS by antioxidants (glutathione, albumin, vitamins A and E, and thiols) [7]. The mechanisms involved in cell death induced by high oxygen tension include apoptosis, necrosis or a mixed phenotype, depending on the cell type investigated. As oxygen is one of the main modulating factors of ROS production, hyperoxia appears to be a major driver of ROS overproduction, particularly in the inflammatory context of cardiac arrest, thus leading to an imbalance in the oxidative status.
Hyperoxia is known to exert toxic pulmonary effects through impairment of pulmonary gas exchange or direct pulmonary toxicity. The alteration in gas exchange is driven by the inhibition of hypoxic pulmonary vasoconstriction and by "absorption atelectasis," which increases the intrapulmonary right-to-left shunt. Direct pulmonary toxicity, the so-called Lorrain Smith effect, consists in direct ROS-related toxicity to the alveolar–capillary barrier and leads to congestion of the lung passageways and hemorrhagic pulmonary edema [8].
Hyperoxia decreases cardiac output through both a drop in heart rate [9] and a rise in vascular resistance [10]. Supra-physiological oxygen tensions also alter microperfusion, through decreased capillary perfusion [11], as well as systemic perfusion. This hyperoxia-induced vasoconstriction may result from a fall in NO bioavailability [12], with a potential contribution of ROS [13].
Hyperoxia possibly has toxic effects on the central nervous system, the so-called Paul Bert effect, which can culminate in tonic–clonic seizures [14]. This deleterious effect is mostly reported to occur at supra-atmospheric pressures, such as in hyperbaric chambers or during diving, and is possibly related to ROS formation [15].
Experimental evidences in the cardiac arrest context
Various models with significant disparities
Various lines of evidence from animal studies support the rationale for brain lesions after hyperoxic resuscitation. These studies compare the administration of hyperoxic gas mixtures with lower oxygen concentrations following resuscitation, assessing neurological, histological and neurochemical outcomes (Table 1). However, these studies show significant disparities. First, the experimental models use three different animal species (i.e., dogs, rats and pigs) with various resuscitation protocols. Second, even if most cardiac arrests are induced by electrically induced ventricular fibrillation, diverse cardiac arrest models are used. For instance, the use of asphyxia by Lipinski et al. may strongly influence the response to oxygen during and after cardiopulmonary resuscitation. Thus, given the differences in animal species and cardiac arrest models, Pilcher et al. recently performed a meta-analysis of several studies in order to evaluate the effects of hyperoxia on neurological outcome after cardiac arrest [16]. The meta-analysis of six studies with 95 animals revealed that 100 % FiO2 is associated with worse neurological outcome, with a standardized mean difference of −0.64 (95 % CI −1.06 to −0.22). However, this result should be considered with caution given the heterogeneity of the models and the small size of the overall population.
Table 1 Experimental studies evaluating effects of high oxygen tensions in the cardiac arrest context
A body of evidence highlighting the role of oxidative stress
Within a broad spectrum of cellular events, impaired cerebral enzymes appear to play a pivotal role in ischemia–reperfusion brain injury through oxidative molecular mechanisms. Instead of producing useful aerobic energy metabolites, the anaerobic glycolysis observed under ischemia–reperfusion conditions produces excessive lactate, thus decreasing ATP production. Vereczki et al. investigated the effects of normoxic resuscitation on the loss of pyruvate dehydrogenase (the enzyme that catalyzes the decarboxylation of pyruvate into NADH and acetyl coenzyme A) and on neuronal death, using a dog model of electrically induced cardiac arrest. Post-cardiac arrest hyperoxic ventilation led to a higher loss of pyruvate dehydrogenase and to an increase in hippocampal neuronal death [17]. To go further, Richards et al. examined the hypothesis that the initial hippocampal neuronal loss could be related to a preferential decrease in aerobic metabolism when compared to the cortex. They investigated the decrease in pyruvate dehydrogenase activity under hyperoxic conditions in a dog model of cardiac arrest by electrically induced ventricular fibrillation. Using carbon isotope incorporation and spectroscopy, they found that dogs resuscitated with high levels of oxygen (100 vs. 21–30 % of inspired oxygen) had decreased PDH activity [18, 19]. These studies suggest a relation between increased oxidative stress and hyperoxic resuscitation in cardiac arrest.
Relevance of experimental models
Above all, experimental studies are not a good reflection of the current standard of care in cardiopulmonary resuscitation. First, some animals are anesthetized and mechanically ventilated before cardiac arrest. Therefore, hyperoxia is sometimes induced prior to cardiac arrest, and this administration of a high inspired oxygen fraction has an impact on further analysis. Second, whereas temperature is nowadays widely acknowledged to influence oxygenation parameters, no study was performed under therapeutic hypothermia and details on central body temperature management are scarce. Third, neurological examination is performed in a time frame ranging from 2 h to 5 days; data on long-term neurological outcome are therefore lacking.
The preclinical data available in animals are characterized by a large disparity in species, arrest mechanisms and resuscitation protocols. Furthermore, the discrepancy between the recorded effects of hyperoxia on neurological outcome does not provide a clear answer. Mechanisms need to be elucidated, particularly the relationship between neurological damage following resuscitation and ROS overproduction within cerebral tissue under hyperoxic conditions. Finally, the clinical significance of these different experimental studies is unclear.
Hyperoxia and cardiac arrest in humans
Based on observational data from Norway reporting a better outcome in centers that did not use hyperoxygenation during CPR [20], the hypothesis of neurological detrimental effects of hyperoxia after cardiopulmonary resuscitation was first investigated prospectively in a small study including 28 patients, randomized to be ventilated with either 30 or 100 % inspired oxygen after out-of-hospital cardiac arrest. Using neuron-specific enolase (NSE) 24 h after resuscitation as a specific marker of neuronal injury, the authors found a statistically significant increase in this enzyme within the subgroup of patients not treated with therapeutic hypothermia, suggesting both deleterious neurological effects of hyperoxia and putative beneficial effects of hypothermia on hyperoxic damage [21]. Regrettably, this trial did not include enough patients to appraise neurological outcome or survival. Moreover, a recent randomized controlled feasibility trial failed to safely titrate oxygen in the pre-hospital period [22]. Therefore, most of the data are provided by retrospective studies (Table 2).
Table 2 Human studies evaluating effects of hyperoxia in the cardiac arrest context
Two main retrospective studies with conflicting results
Two recently published articles dominate the retrospective studies in humans. First, Kilgannon et al. reported a retrospective cohort of patients extracted from an American database consolidating 120 centers (Project IMPACT) [23]. In multivariate analysis, exposure to hyperoxia was associated with mortality, with an odds ratio of 1.8 (95 % CI 1.5–2.2). On the other hand, Bellomo et al. reported a retrospective cohort of patients extracted from an Australian and New Zealand database clustering 125 centers [24]. In multivariate analysis, exposure to hyperoxia was associated with an odds ratio of 1.2 (95 % CI 1.1–1.6), but this association was no longer significant when using a Cox proportional hazards model or when adjusting for FiO2. Major differences in the definition and in the analysis of hyperoxia may explain these conflicting results.
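The odds ratios and confidence intervals reported in such cohorts rest on standard epidemiological calculations. As an illustration only (the counts below are hypothetical, chosen to reproduce an odds ratio of 1.8, and are not taken from either study), a crude odds ratio with a Woolf 95 % confidence interval can be computed from a 2×2 exposure/outcome table:

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio and Woolf (log-based) 95 % CI for a 2x2 table:
    a = exposed (hyperoxic) with outcome (death), b = exposed without,
    c = unexposed with outcome, d = unexposed without."""
    or_ = (a * d) / (b * c)
    se_log_or = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of log(OR)
    low = math.exp(math.log(or_) - z * se_log_or)
    high = math.exp(math.log(or_) + z * se_log_or)
    return or_, low, high

# Hypothetical counts for illustration only:
or_, low, high = odds_ratio_ci(300, 200, 400, 480)
print(f"OR = {or_:.2f}, 95 % CI {low:.2f}-{high:.2f}")
```

Note that the multivariate odds ratios in the cited papers adjust for confounders; this unadjusted table is only the crude building block of such an analysis.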
Important variability in the definition of hyperoxia
In order to allow comparison with the work by Kilgannon et al., most studies define hyperoxia as a PaO2 higher than or equal to 300 mmHg. This definition is based a priori, and somewhat arbitrarily, on an experimental study evaluating the effect of hypoxemic reperfusion on brain histopathological changes in the pig [25]. There is no physiological evidence in humans to support this cutoff value, and the use of a single value may under- or overestimate the incidence of hyperoxia. Moreover, defining groups using PaO2 presupposes a threshold effect of oxygen and rules out a dose-dependent effect. This point has been investigated in three studies. Analyzing PaO2 as a continuous variable, Kilgannon et al. and Janz et al. found in multivariate analysis that high levels of PaO2 were associated with increased mortality (OR 1.69; 95 % CI 1.56–2.07 and OR 1.4; 95 % CI 1.02–2.01, respectively) [26, 27]. Interestingly, these two studies had similar overall mortality (54 and 55 %, respectively). The third study found no association between PaO2 levels of exposure and the neurological status at 12 months as the main outcome measure [28]. These results support the hypothesis of a dose-dependent association between supra-normal oxygen tension and outcome. However, several details are lacking in the work by Kilgannon et al., for instance cardiac arrest characteristics (according to the Utstein style) such as the cause of death, neurological outcome or ventilation parameters, and this may contribute to an overestimation of the role of oxygen in mortality.
In addition, the PaO2 value of interest is not clearly defined in the literature. In the first study published by Kilgannon et al., hyperoxia was defined according to the first blood gas analysis available within the first 24 h after intensive care unit admission; some studies analyze the highest PaO2 in the first 24 h [26, 27, 29], whereas in the studies reported by Bellomo et al. and Ihle et al. hyperoxia was defined using the "worst" blood gas analysis within the first 24 h [24, 30]. These variations affect the prevalence of hyperoxia, which ranges from 2.7 to 41 % across studies, and may thus introduce bias into further analyses of mortality.
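The impact of the blood-gas selection rule on measured prevalence is easy to demonstrate. The sketch below (the patient's PaO2 series is invented for illustration) classifies the same set of measurements under the three rules used in the literature: first available value, highest value, or "worst" value within 24 h:

```python
def is_hyperoxic(pao2_series, rule, cutoff=300):
    """Classify one patient from serial PaO2 values (mmHg) in the
    first 24 h, under the selection rule applied by a given study."""
    selected = {
        "first": pao2_series[0],       # first available blood gas
        "highest": max(pao2_series),   # highest PaO2 in 24 h
        "worst": min(pao2_series),     # "worst" (lowest) PaO2 in 24 h
    }[rule]
    return selected >= cutoff

# One hypothetical patient, three different classifications:
gases = [120, 310, 250]
for rule in ("first", "highest", "worst"):
    print(rule, is_hyperoxic(gases, rule))
```

Only the "highest" rule flags this patient as hyperoxic, which is one mechanism behind the 2.7–41 % spread in reported prevalence.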
Influence of hypothermia on oxygen status during post-cardiac arrest resuscitation
It is now well established that temperature strongly influences ventilation and oxygenation parameters in patients. Hypothermia induces a left shift of the oxygen dissociation curve, decreases the amount of oxygen released and increases carbon dioxide solubility [31]. Details on temperature are lacking in some studies [30, 32], whereas various proportions of patients were treated with therapeutic hypothermia in others (even if the induced or spontaneous nature of this hypothermia is not clearly indicated), ranging from 6 % [23] to 80 % [33]. Moreover, methodological details regarding temperature corrections of arterial blood gases are missing in most of the studies. To address this concern, two studies investigated the effects of oxygenation status on mortality under hypothermia. Once again, the analysis of PaO2 differs between these two studies and thus limits comparison: Janz et al. used the highest PaO2 [27], whereas Lee et al. used the mean of 8 arterial blood gases from ROSC to rewarming [34]. Therefore, using the previously reported cutoff value of 300 mmHg, 1.1 % of patients presented hyperoxia in the study by Lee et al., while about 30 % of patients were in the hyperoxia group in the study by Janz et al. This issue may have a major impact on overall mortality in most of the studies given the varying proportions of patients under hypothermia. For instance, the difference in mortality reported by Kilgannon et al. may be overestimated given the 6 % of patients with a central temperature under 34 °C within the first 24 h, compared to 66 % of patients under the same condition in the work by Elmer et al. [35].
Oxygen exposure after cardiac arrest: the crucial period
Another problem of interest is the time point of hyperoxia exposure, which can lead to misclassification of patients. Assuming a deleterious effect of exposure to supra-normal oxygen tension after cardiac arrest, the precise time point at which oxygen may be detrimental remains unclear. Experimental evidence supports the assumption that early hyperoxia is more pernicious than late hyperoxia [36] and that oxidant injury occurs rapidly after cardiac arrest [37]. However, results differ in humans. One study evaluated the impact of PaO2 levels during cardiopulmonary resuscitation on cerebral performance status after hospital admission. In this study, 28 % of patients survived with a cerebral performance category (CPC) of 1 or 2 in the hyperoxia group (PaO2 during CPR >300 mmHg), whereas 23 and 14 % of patients survived with a CPC of 1 or 2 in the normoxia and hypoxia groups, respectively [32]. Nevertheless, these interesting results should be interpreted cautiously given the higher rate of hospital admission in the hyperoxia group (83.3 vs. 50.6 % in the normoxia group and 18.8 % in the hypoxia group), which could indicate a better initial prognosis of hyperoxic patients.
Recently, three meta-analyses pooled observational studies on the relationship between hyperoxia and outcome after cardiac arrest [38–40]. Hyperoxia appears to be correlated with increased in-hospital mortality. However, these results must be interpreted cautiously given the heterogeneity and the limited sample size of the analyzed studies. Moreover, as highlighted by the authors, they reconstructed odds ratios when not provided, and some patients overlap between the populations studied by Bellomo et al. [24] and Ihle et al. [30], both using the ANZICS database.
How to think out of the box?
Given the short life of ROS, cellular damage presupposes an increased oxygen tension within the damaged tissue. However, arterial PaO2 may not be an accurate estimate of tissue oxygen delivery, particularly when cerebral blood flow is decreased. Using a swine model of cardiac arrest, Rossi et al. found a negligible improvement of cerebral oxygen consumption under hyperoxia during cerebral blood flow reduction mimicking CPR [41]. Therefore, monitoring regional cerebral tissue oxygenation before and after ROSC could improve the understanding of hyperoxia mechanisms.
Most clinical studies focus on in-hospital mortality or cerebral performance status as the main outcome measure for hyperoxia exposure after cardiac arrest and hypothesize a process mediated by increased ROS production. However, outcome may be related to other mechanisms. For instance, although a causal role has never been demonstrated, hyperoxia is known to induce pulmonary dysfunction and may affect patient outcome after cardiac arrest. Only one recent study investigated whether high levels of oxygen exposure after cardiac arrest contribute to pulmonary dysfunction; it found no relation between pulmonary compliance and higher exposure to oxygen in the first hours [42]. Nevertheless, whether hyperoxia may contribute to the occurrence of ventilator-acquired pneumonia or could increase cardiac dysfunction remains unknown and should be investigated in further studies.
Even if oxidative stress is an experimentally well-documented mechanism of toxicity in ischemia–reperfusion, evidence in the cardiac arrest context is scarce. Only one study reports decreased amounts of derivatives of reactive oxygen metabolites, but also a decreased global antioxidant capacity of blood plasma, after cardiac arrest under therapeutic hypothermia [43]. Unfortunately, baseline oxidative status was not compared to a control population and therefore cannot be evaluated. Cardiac arrest is strongly suspected to induce ROS overproduction, but its effects on antioxidant defenses are poorly documented, as are the levels of ROS production compared to other populations of patients such as those with cerebral trauma or cerebral ischemia. These aspects should be part of the analysis of hyperoxia toxicity after cardiac arrest and could lead us to reconsider and revisit the current concept of oxygen toxicity after cardiac arrest.
Although the PaO2 provided by arterial blood gas analysis is the only routine measurement of oxygen in blood, only dissolved oxygen is responsible for ROS production. Quantification of dissolved oxygen should therefore be the main parameter assessed in studies. Moreover, the estimation of dissolved oxygen in blood should take temperature-related variation into account. For instance, decreasing central temperature from 37 to 33 °C is associated with a 2.7-fold increase in dissolved oxygen in the blood, which might increase oxidative damage.
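As a rough sketch of why dissolved oxygen scales with PaO2 rather than with hemoglobin saturation, the commonly quoted solubility coefficient of about 0.003 mL O2 per dL of blood per mmHg at 37 °C can be used (the coefficient and the need for temperature correction are assumptions of this illustration, not values from the studies cited above):

```python
def dissolved_o2_ml_per_dl(pao2_mmHg, solubility=0.003):
    """Dissolved (non-hemoglobin-bound) O2 content in mL O2/dL blood.
    The solubility coefficient rises as temperature falls, so a
    temperature-corrected coefficient should be used under hypothermia."""
    return pao2_mmHg * solubility

for pao2 in (100, 300, 500):
    print(f"PaO2 {pao2} mmHg -> {dissolved_o2_ml_per_dl(pao2):.2f} mL O2/dL")
```

At the 300 mmHg hyperoxia cutoff, dissolved oxygen is roughly three times its normoxic value, which is the quantity the authors argue should be assessed directly.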
Cardiac arrest is a cataclysmic form of global ischemia–reperfusion affecting all organs, which leads us to reconsider the direct toxicity of hyperoxia on neurological outcome and mortality in light of the ROS-mediated injuries observed in animal models. Many questions remain unsolved: assuming an imbalance in the oxidative stress equilibrium, is there a decrease in antioxidative defenses, an increase in ROS amounts, or both? What is the role of nitrogen species? Does dissolved oxygen play a significant role?
The variability in primary outcomes in human studies highlights the difficulties encountered by authors in assessing outcome (Table 3). It is important to note that none of the animal studies establish any repercussion of hyperoxia on survival. Given the suspected pathophysiology of hyperoxia toxicity and the evidence from animal experimental studies, this problem emphasizes the importance of also focusing on the neurological damage induced by high oxygen tension. We suggest that a well-designed prospective study should assess whether receiving increasing values of arterial oxygen partial pressure (using temperature-corrected oxygen dissolved in blood as a continuous variable), as soon as possible after ROSC, is associated or not with a difference in oxidative stress imbalance and outcome.
Table 3 Ongoing studies related to hyperoxia in the cardiac arrest context
1. Sjöberg F, Singer M. The medical use of oxygen: a time for critical reappraisal. J Intern Med. 2013;274(6):505–28.
2. Battino R, Rettich T, Tominaga T. The solubility of oxygen and ozone in liquids. J Phys Chem. 1982;12(2):163–78.
3. Severinghaus JW. Simple, accurate equations for human blood O2 dissociation computations. J Appl Physiol. 1979;46(3):599–602.
4. Niki E. Lipid peroxidation: physiological levels and dual biological effects. Free Radic Biol Med. 2009;47(5):469–84.
5. Berlett BS, Stadtman ER. Protein oxidation in aging, disease, and oxidative stress. J Biol Chem. 1997;272(33):20313–6.
6. Nathan C, Cunningham-Bussel A. Beyond oxidative stress: an immunologist's guide to reactive oxygen species. Nat Rev Immunol. 2013;13(5):349–61.
7. Finkel T. Signal transduction by reactive oxygen species. J Cell Biol. 2011;194(1):7–15.
8. Tuder RM, Hunt JM, Schmidt EP. Hyperoxia and apoptosis. Too much of a good thing? Am J Respir Crit Care Med. 2011;183(8):964–5.
9. Whalen RE, Saltzman HA, Holloway DH, Mcintosh HD, Sieker HO, Brown IW. Cardiovascular and blood gas responses to hyperbaric oxygenation. Am J Cardiol. 1965;15:638–46.
10. Reinhart K, Bloos F, König F, Bredle D, Hannemann L. Reversible decrease of oxygen consumption by hyperoxia. Chest. 1991;99(3):690–4.
11. Orbegozo Cortés D, Puflea F, Donadello K, Taccone FS, Gottin L, Creteur J, et al. Normobaric hyperoxia alters the microcirculation in healthy volunteers. Microvasc Res. 2015;98:23–8.
12. Stamler JS, Jia L, Eu JP, McMahon TJ, Demchenko IT, Bonaventura J, et al. Blood flow regulation by S-nitrosohemoglobin in the physiological oxygen gradient. Science. 1997;276(5321):2034–7.
13. McNulty PH, Robertson BJ, Tulli MA, Hess J, Harach LA, Scott S, et al. Effect of hyperoxia and vitamin C on coronary blood flow in patients with ischemic heart disease. J Appl Physiol (1985). 2007;102(5):2040–5.
14. Bitterman H. Bench-to-bedside review: oxygen as a drug. Crit Care Lond Engl. 2009;13(1):205.
15. Hafner S, Beloncle F, Koch A, Radermacher P, Asfar P. Hyperoxia in intensive care, emergency, and peri-operative medicine: Dr. Jekyll or Mr. Hyde? A 2015 update. Ann Intensive Care. 2015;5(1):42.
16. Pilcher J, Weatherall M, Shirtcliffe P, Bellomo R, Young P, Beasley R. The effect of hyperoxia following cardiac arrest—a systematic review and meta-analysis of animal trials. Resuscitation. 2012;83(4):417–22.
17. Vereczki V, Martin E, Rosenthal RE, Hof PR, Hoffman GE, Fiskum G. Normoxic resuscitation after cardiac arrest protects against hippocampal oxidative stress, metabolic dysfunction, and neuronal death. J Cereb Blood Flow Metab. 2006;26(6):821–35.
18. Richards EM, Fiskum G, Rosenthal RE, Hopkins I, McKenna MC. Hyperoxic reperfusion after global ischemia decreases hippocampal energy metabolism. Stroke. 2007;38(5):1578–84.
19. Richards EM, Rosenthal RE, Kristian T, Fiskum G. Postischemic hyperoxia reduces hippocampal pyruvate dehydrogenase activity. Free Radic Biol Med. 2006;40(11):1960–70.
20. Langhelle A, Tyvold SS, Lexow K, Hapnes SA, Sunde K, Steen PA. In-hospital factors associated with improved outcome after out-of-hospital cardiac arrest. A comparison between four regions in Norway. Resuscitation. 2003;56(3):247–63.
21. Kuisma M, Boyd J, Voipio V, Alaspää A, Roine RO, Rosenberg P. Comparison of 30 and the 100% inspired oxygen concentrations during early post-resuscitation period: a randomised controlled pilot study. Resuscitation. 2006;69(2):199–206.
22. Young P, Bailey M, Bellomo R, Bernard S, Dicker B, Freebairn R, et al. HyperOxic Therapy OR NormOxic Therapy after out-of-hospital cardiac arrest (HOT OR NOT): a randomised controlled feasibility trial. Resuscitation. 2014;85(12):1686–91.
23. Kilgannon JH, Jones AE, Shapiro NI, Angelos MG, Milcarek B, Hunter K, et al. Association between arterial hyperoxia following resuscitation from cardiac arrest and in-hospital mortality. JAMA. 2010;303(21):2165–71.
24. Bellomo R, Bailey M, Eastwood GM, Nichol A, Pilcher D, Hart GK, et al. Arterial hyperoxia and in-hospital mortality after resuscitation from cardiac arrest. Crit Care Lond Engl. 2011;15(2):R90.
25. Douzinas EE, Patsouris E, Kypriades EM, Makris DJ, Andrianakis I, Korkolopoulou P, et al. Hypoxaemic reperfusion ameliorates the histopathological changes in the pig brain after a severe global cerebral ischaemic insult. Intensive Care Med. 2001;27(5):905–10.
26. Kilgannon JH, Jones AE, Parrillo JE, Dellinger RP, Milcarek B, Hunter K, et al. Relationship between supranormal oxygen tension and outcome after resuscitation from cardiac arrest. Circulation. 2011;123(23):2717–22.
27. Janz DR, Hollenbeck RD, Pollock JS, McPherson JA, Rice TW. Hyperoxia is associated with increased mortality in patients treated with mild therapeutic hypothermia after sudden cardiac arrest. Crit Care Med. 2012;40(12):3135–9.
28. Vaahersalo J, Bendel S, Reinikainen M, Kurola J, Tiainen M, Raj R, et al. Arterial blood gas tensions after resuscitation from out-of-hospital cardiac arrest: associations with long-term neurologic outcome. Crit Care Med. 2014;42(6):1463–70.
29. Nelskylä A, Parr MJ, Skrifvars MB. Prevalence and factors correlating with hyperoxia exposure following cardiac arrest—an observational single centre study. Scand J Trauma Resusc Emerg Med. 2013;21:35.
30. Ihle JF, Bernard S, Bailey MJ, Pilcher DV, Smith K, Scheinkestel CD. Hyperoxia in the intensive care unit and outcome after out-of-hospital ventricular fibrillation cardiac arrest. Crit Care Resusc. 2013;15(3):186–90.
31. Aslami H, Binnekade JM, Horn J, Huissoon S, Juffermans NP. The effect of induced hypothermia on respiratory parameters in mechanically ventilated patients. Resuscitation. 2010;81(12):1723–5.
32. Spindelboeck W, Schindler O, Moser A, Hausler F, Wallner S, Strasser C, et al. Increasing arterial oxygen partial pressure during cardiopulmonary resuscitation is associated with improved rates of hospital admission. Resuscitation. 2013;84(6):770–5.
33. Helmerhorst HJF, Roos-Blom M-J, van Westerloo DJ, Abu-Hanna A, de Keizer NF, de Jonge E. Associations of arterial carbon dioxide and arterial oxygen concentrations with hospital mortality after resuscitation from cardiac arrest. Crit Care Lond Engl. 2015;19:348.
34. Lee BK, Jeung KW, Lee HY, Lee SJ, Jung YH, Lee WK, et al. Association between mean arterial blood gas tension and outcome in cardiac arrest patients treated with therapeutic hypothermia. Am J Emerg Med. 2014;32(1):55–60.
35. Elmer J, Scutella M, Pullalarevu R, Wang B, Vaghasia N, Trzeciak S, et al. The association between hyperoxia and patient outcomes after cardiac arrest: analysis of a high-resolution database. Intensive Care Med. 2015;41(1):49–57.
36. Rosenthal RE, Silbergleit R, Hof PR, Haywood Y, Fiskum G. Hyperbaric oxygen reduces neuronal death and improves neurological outcome after canine cardiac arrest. Stroke. 2003;34(5):1311–6.
37. Idris AH, Roberts LJ, Caruso L, Showstark M, Layon AJ, Becker LB, et al. Oxidant injury occurs rapidly after cardiac arrest, cardiopulmonary resuscitation, and reperfusion. Crit Care Med. 2005;33(9):2043–8.
38. Wang C-H, Chang W-T, Huang C-H, Tsai M-S, Yu P-H, Wang A-Y, et al. The effect of hyperoxia on survival following adult cardiac arrest: a systematic review and meta-analysis of observational studies. Resuscitation. 2014;85(9):1142–8.
39. Helmerhorst HJF, Roos-Blom M-J, van Westerloo DJ, de Jonge E. Association between arterial hyperoxia and outcome in subsets of critical illness: a systematic review, meta-analysis, and meta-regression of cohort studies. Crit Care Med. 2015;43(7):1508–19.
40. Damiani E, Adrario E, Girardis M, Romano R, Pelaia P, Singer M, et al. Arterial hyperoxia and mortality in critically ill patients: a systematic review and meta-analysis. Crit Care Lond Engl. 2014;18(6):711.
41. Rossi S, Longhi L, Balestreri M, Spagnoli D, deLeo A, Stocchetti N. Brain oxygen tension during hyperoxia in a swine model of cerebral ischaemia. Acta Neurochir Suppl. 2000;76:243–5.
42. Elmer J, Wang B, Melhem S, Pullalarevu R, Pullalarevu R, Vaghasia N, et al. Exposure to high concentrations of inspired oxygen does not worsen lung injury after cardiac arrest. Crit Care Lond Engl. 2015;19:105.
43. Dohi K, Miyamoto K, Fukuda K, Nakamura S, Hayashi M, Ohtaki H, et al. Status of systemic oxidative stress during therapeutic hypothermia in patients with post-cardiac arrest syndrome. Oxid Med Cell Longev. 2013;2013:562429.
JFL has been involved in conception and design of the review, acquisition, analysis and interpretation of data and writing of the manuscript. JD and JPM were both involved in conception and design of the review, and writing of the manuscript. AC has been involved in conception and design of the review, acquisition, analysis and interpretation of data and writing of the manuscript. All authors read and approved the final manuscript.
We are indebted to Nancy Kentish-Barnes for her help in the writing and correction of the manuscript.
This work was supported in part by a Grant from the French Ministry of Health (PHRC 071217).
Medical Intensive Care Unit, Cochin Hospital, Hôpitaux Universitaires Paris Centre, Assistance Publique des Hôpitaux de Paris, 27 rue du Faubourg Saint-Jacques, 75014, Paris, France: Jean-François Llitjos, Jean-Paul Mira & Alain Cariou
Faculté de Médecine, Université Paris Descartes, Sorbonne Paris Cité, 15 rue de l'école de Médecine, 75006, Paris, France
Anesthesia and Intensive Care Department, Bicêtre Hospital, Assistance Publique des Hôpitaux de Paris, 94275, Le Kremlin-Bicêtre, France: Jacques Duranteau
Université Paris Sud XI, Orsay, France
Correspondence to Jean-François Llitjos.
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
Hyperoxia
Ischemia reperfusion
Two standard deviations from the symmetrical bell curve or the normal distribution +a ` alongside +mx! Suitable examples more moments we have, the more moments we have, the less large! / logo © 2020 Stack Exchange Inc ; user contributions licensed under cc by-sa help of moments called... Replace s in the rate of convergence if $ \operatorname E|X|^p < \infty $ then! User contributions licensed under cc by-sa professor of mathematics at Anderson University and the author of an!, copy and paste this URL into your RSS reader uses of moments in statistics Ph.D., is a method of moments.... Can be used to find the central tendency, dispersion, skewness and of.: to calculate confidence intervals for 4 ( iii ) statistics must be collected in distribution. Formula with the number of data if a 10-kg cube of iron at. Is not technically uses of moments in statistics method of moments in math, they describe "..., are drunken man uses lampposts — for support rather than indemnified publishers of. A normal distribution is determined by their moments are a challenge to beautify them from using that. More moments we have already calculated the mean is the Pauli exclusion principle considered... In mathematics, the method of moments in statistics, moments are constants! Similar way happy to defer to your better judgment about the mean, variance, and tutorials support than... Methods to various disciplines the help of moments, central tendency, dispersion skewness! Tendency, dispersion, skewness and kurtosis of a random variable are drunken man uses —... With the help of moments in mathematical statistics involve a basic calculation figures, if analysed, lead... With suitable examples measures the turning effect of a random variable are which. To your better judgment about the mean is always equal to zero, matter! For constructing estimators, analogous to maximum likelihood ( ML ) becomes what type of objects should search... 
Numbers from step # 3 together books say that these two statistics give you into! Similar way constants uses of moments in statistics in both mechanics and statistics we need to do exponents! Heuristic and does n't work in most interesting modern examples this is not the! N'T work in most interesting modern examples ) of what is the mathematical science the... Is identical to the use of cookies on this website is implemented in Excel the. Important calculation, which is actually the case quality of sample moments how to prevent the from. Did the actors in all Creatures great and Small actually have their hands in rate... Have already calculated the mean, a normal distribution is determined by the number of data with a of... Analysed, will lead to erroneous conclusions the distribution a simple explanation ( i.e which authors! To estimate the sample moments population, as the mean, median,,! A dataset, such as its mean, median, and mode used both on a level...
uses of moments in statistics
Marucci F5 Bbcor, Akg K52 Vs K72, Junior Electrical Engineer Job Description Pdf, Business Intelligence Technologies, Dyson Up13 Parts, Chicken Coop Minecraft, Habaneros Pub And Grill Beamsville, Quartzbenefits Com Lastpaperstatement,
uses of moments in statistics 2020 | CommonCrawl |
\begin{document}
\title{Singular Welschinger invariants}
\begin{abstract} We suggest an invariant way to enumerate nodal and nodal-cuspidal real deformations of real plane curve singularities. The key idea is to assign Welschinger signs to the counted deformations. Our invariants can be viewed as a local version of Welschinger invariants enumerating real plane rational curves. \end{abstract}
\section*{Introduction}
Gromov-Witten invariants of the plane can be identified with the degrees of Severi varieties, which parameterize irreducible plane curves of given degree and genus. As a local version, one can consider a versal deformation of an isolated plane curve singularity $(C,z)\subset\mathbb{C}^2$ with base $B(C,z)\simeq(\mathbb{C}^n,0)$, and the following strata in $B(C,z)$: \begin{equation}EG^i_{C,z},\quad 1\le i\le\delta(C,z)\ ,\label{le1}\end{equation} parameterizing deformations with the total $\delta$-invariant greater than or equal to $i$; \begin{equation}EC^k_{C,z},\quad0\le k\le\kappa(C,z)-2\delta(C,z)\ ,\label{le3}\end{equation} parameterizing deformations with the total $\delta$-invariant equal to $\delta(C,z)$ and the total $\kappa$-invariant equal to $2\delta(C,z)+k$ (the necessary information on $\delta$- and $\kappa$-invariants can be found in \cite{DH} or \cite[Section 3.4]{GLS}). Note also that $EC^0_{C,z}=EG^{\delta(C,z)}_{C,z}$.
The strata (\ref{le1}) are called {\it Severi loci}; among them, ${\mathcal D}_{C,z}:=EG^1_{C,z}$ is the discriminant hypersurface in $B(C,z)$, and $EG_{C,z}:=EG^{\delta(C,z)}_{C,z}$ is the so-called {\it equigeneric locus}. We call the strata (\ref{le3}) {\it generalized equiclassical loci}, and among them $EC_{C,z}:=EC^{\kappa(C,z)-2\delta(C,z)}_{C,z}$ is the so-called {\it equiclassical locus}. The incidence relations are as follows: $$EC^{k+1}_{C,z}\subsetneq EC^k_{C,z}\subsetneq EG^{i+1}_{C,z}\subsetneq EG^i_{C,z}$$ for all $1\le i<\delta(C,z)$ and $1\le k<\kappa(C,z)-2\delta(C,z)$. All these loci are pure-dimensional germs of complex spaces (cf. \cite{Sch,She}).
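For instance (using the standard values $\delta=1$, $\kappa=2$ for a node $A_1$, and $\delta=1$, $\kappa=3$ for an ordinary cusp $A_2$): if $(C,z)$ is an ordinary cusp, then the only Severi locus is $EG^1_{C,z}$, the range of $k$ in (\ref{le3}) is $0\le k\le 1$, and $$EC^1_{C,z}=EC_{C,z}\subsetneq EC^0_{C,z}=EG_{C,z}={\mathcal D}_{C,z}=EG^1_{C,z}\ ,$$ where $EC^0_{C,z}$ parameterizes one-nodal deformations and $EC^1_{C,z}$ parameterizes deformations preserving the cusp.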
A natural problem is to compute the multiplicities of $EG^i_{C,z},EC^k_{C,z}$ for all $i,k$\ \footnote{We understand the multiplicity of a point of an algebraic variety embedded into an affine space as the intersection number at this point with a generic smooth germ of the complementary dimension (cf. \cite[Chapter 5, Definition 5.9]{Mum}).}. This problem was solved for the equigeneric stratum $EG_{C,z}$ in \cite{FGS}. In the particular case of an irreducible germ with one Puiseux pair, i.e., topologically equivalent to $x^p+y^q=0$, $2\le p<q$, $\gcd(p,q)=1$, one has (see \cite[Proposition 4.3]{Be} and \cite[Section G]{FGS}) $$\mt EG_{C,z}=\frac{1}{p+q}\binom{p+q}{p}\ .$$ The multiplicities of all Severi loci $EG^i_{C,z}$ were expressed in \cite{She} in terms of the Euler characteristics of Hilbert schemes of points on curve germs representing a given singularity. The multiplicities of the equiclassical loci $EC^k_{C,z}$ are not known except for the case of the smoothness mentioned in \cite[Theorems 2 and 27]{Di}.
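As a quick illustration (an addition, not part of the original text), the closed formula above is straightforward to evaluate; the snippet below is a hypothetical helper, and its integrality check reflects the well-known fact that $\frac{1}{p+q}\binom{p+q}{p}$ is an integer whenever $\gcd(p,q)=1$.

```python
# Evaluate mult EG_{C,z} = (1/(p+q)) * binom(p+q, p) for an irreducible germ
# topologically equivalent to x^p + y^q = 0, 2 <= p < q, gcd(p,q) = 1.
from math import comb, gcd

def mult_EG(p, q):
    assert 2 <= p < q and gcd(p, q) == 1
    total = comb(p + q, p)
    assert total % (p + q) == 0   # the formula always yields an integer
    return total // (p + q)

print(mult_EG(2, 3))   # ordinary cusp A_2: 2
print(mult_EG(2, 5))   # 3
print(mult_EG(3, 4))   # 5
```

For the ordinary cusp, i.e., $(p,q)=(2,3)$ and $\delta(C,z)=1$, this gives $\mt EG_{C,z}=2$, which is consistent with $\mt{\mathcal D}_{C,z}=\mu(C,z)=2$, since $EG_{C,z}={\mathcal D}_{C,z}$ whenever $\delta(C,z)=1$.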
The multiplicity admits an enumerative interpretation: it can be regarded as the number of intersection points of a locus $V\subset B(C,z)$ with a generic affine subspace $L\subset B(C,z)$ of the complementary dimension (equal to $\codim V$) chosen to be transversal to the tangent cone $\widehat T_0V$.
The {\bf goal} of this note is to define {\it real multiplicities} of the Severi loci (\ref{le1}) and of the generalized equiclassical loci (\ref{le3}). Let the singularity $(C,z)$ be real.\footnote{By a {\it real} object we always mean a complex object invariant with respect to the complex conjugation.} Then the Severi loci and the generalized equiclassical loci are defined over the reals. Thus, given such a locus $V$, we count real intersection points of $V$ with a generic real affine subspace $L\subset B(C,z)$ of the complementary dimension. Our {\bf main result} is that, in certain cases, the count of real intersection points of $V$ and $L$ equipped with Welschinger-type signs is invariant, i.e., does not depend on the choice of $L$. We were motivated by \cite[Lemma 15]{IKS2}, which, in fact, states the existence of a Welschinger type invariant for the equigeneric stratum $EG_{C,z}$. In this note, we go further and prove the existence of similar Welschinger type invariants for $EG^{\delta(C,z)-1}_{C,z}$ (see Proposition \ref{lp2} in Section \ref{lsec1}) and for $EG^1_{C,z}={\mathcal D}_{C,z}\subset B(C,z)$ (see Proposition \ref{lp4} in Section \ref{lsec1}) as well as for all the loci $EC^k_{C,z}$ (see Proposition \ref{lp3} in Section \ref{lsec2}).
We remark that a similar enumeration of real plane rational curves with at least one cusp is not invariant, i.e., depends on the choice of point constraints (cf. \cite{Wel}).
As an example, we perform computations for singularities of type $A_n$ (see Section \ref{lsec4}).
{\bf Acknowledgements.} The work on this paper has been supported by the Israeli Science Foundation grants no. 176/15 and 501/18, and by the Bauer-Neuman chair in Real and Complex Geometry. I would also like to thank Stephan Snegirov, with whom I discussed the computational part of the work.
\section{Singular Welschinger numbers}\label{sec-pr}
We briefly recall definitions and basic properties of the objects of our interest. Details can be found in \cite{DH} and \cite[Chapter II]{GLS}.
Let $(C,z)$ be the germ of a plane complex analytic curve $C$ at its isolated singular point $z=(0,0)\in\mathbb{C}^2$, which is given by an analytic equation $f(x,y)=0$, $f\in\mathbb{C}\{x,y\}$. For short, we call it a {\it singularity}. The Milnor ball $D(C,z)\subset\mathbb{C}^2$ is a closed ball centered at $z$ such that $C\cap D(C,z)$ is closed and smooth outside $z$ with the boundary $\partial (C\cap D(C,z))\subset\partial D(C,z)$, and the intersection of $C$ with any $3$-sphere in $D(C,z)$ centered at $z$ is transversal. Pick an integer $N>0$ and consider a small neighborhood $B(C,z)$ of $0$ in the space (which is a $\mathbb{C}$-algebra) $R(C,z):=\mathbb{C}\{x,y\}/(\langle f\rangle+{\mathfrak m}_z^N)$, where ${\mathfrak m}_z\subset\mathbb{C}\{x,y\}$ is the maximal ideal. We can suppose that, for any $\varphi\in B(C,z)$, the curve ${\mathcal C}_\varphi:=\{f+\varphi=0\}\cap D(C,z)$ has only isolated singularities in $D(C,z)$, is smooth along $\partial D(C,z)$, and intersects the sphere $\partial D(C,z)$ transversally. It is well-known that the deformation $\pi:{\mathcal C}\to B(C,z)$ of $(C,z)$, where $\pi^{-1}(\varphi)={\mathcal C}_\varphi$, is versal for $N>0$ sufficiently large (cf. \cite[Page 165]{Ar} or \cite[Section 3]{DH}). The space $B(C,z)$ contains the equigeneric stratum $EG_{C,z}\subset B(C,z)$, formed by $\varphi\in B(C,z)$ such that ${\mathcal C}_\varphi$ has the total $\delta$-invariant equal to $\delta(C,z)$ (the maximal possible value), the equiclassical locus $EC_{C,z}\subset EG_{C,z}\subset B(C,z)$, formed by $\varphi\in EG_{C,z}$ such that ${\mathcal C}_\varphi$ has the total $\kappa$-invariant equal to $\kappa(C,z)$ (also the maximal possible value), and the discriminant $${\mathcal D}_{C,z}=\{\varphi\in B(C,z)\ :\ {\mathcal C}_\varphi\ \text{is singular}\}\ .$$
The following statement summarizes some known facts on the above strata (see \cite[Theorems 1.1, 1.3, 4.15, 4.17, 5.5, Corollary 5.13]{DH} and \cite[Theorems 2 and 27]{Di}).
\begin{lemma}\label{ll1} (1) The stratum $EG_{C,z}$ is irreducible of codimension $\delta(C,z)$; it is smooth iff all irreducible components of $(C,z)$ (which we call local branches of $(C,z)$) are smooth; in general, the normalization of $EG_{C,z}$ is smooth and projects one-to-one onto $EG_{C,z}$. The tangent cone $\widehat T_0EG_{C,z}$ is the linear space $J^{cond}_{C,z}/{\mathfrak m}_z^N$ of codimension $\delta(C,z)$, where $J^{cond}_{C,z}\subset\mathbb{C}\{x,y\}/\langle f\rangle$ is the conductor ideal. Furthermore, $EG_{C,z}$ contains an open dense subset $EG^*_{C,z}$ that parameterizes the curves ${\mathcal C}_\varphi$ having $\delta(C,z)$ nodes as their only singularities.
(2) The stratum $EC_{C,z}$ is irreducible of codimension $\kappa(C,z)-\delta(C,z)$; it is smooth iff each local branch of $(C,z)$ either is smooth, or has topological type $x^m+y^{m+1}=0$ with $m\ge 2$; in general, the normalization of $EC_{C,z}$ is smooth and projects one-to-one onto $EC_{C,z}$. The tangent cone $\widehat T_0EC_{C,z}$ is the linear space $J^{ec}_{C,z}/{\mathfrak m}_z^N$ of codimension $\kappa(C,z)-\delta(C,z)$, where $J^{ec}_{C,z}\subset\mathbb{C}\{x,y\}/\langle f\rangle$ is the equiclassical ideal. Furthermore, the stratum $EC_{C,z}$ contains an open dense subset $EC^*_{C,z}$ that parameterizes the curves ${\mathcal C}_\varphi$ having $3\delta(C,z)-\kappa(C,z)$ nodes and $\kappa(C,z)-2\delta(C,z)$ ordinary cusps as their only singularities.
(3) The discriminant ${\mathcal D}_{C,z}$ is an irreducible hypersurface with the tangent cone $\widehat T_0{\mathcal D}_{C,z}={\mathfrak m}_z/(\langle f\rangle+{\mathfrak m}_z^N)$. Furthermore, an open dense subset ${\mathcal D}^*_{C,z}\subset{\mathcal D}_{C,z}$ parameterizes the curves ${\mathcal C}_\varphi$ having one node and no other singularities. \end{lemma}
In the same way one can establish similar properties of the Severi loci (\ref{le1}) and generalized equiclassical loci (\ref{le3}).
\begin{lemma}\label{ll4} (1) Each Severi locus $EG^i_{C,z}$ is a (possibly reducible) germ of a complex space of pure codimension $i$. A generic element of each component of $EG^i_{C,z}$ is a curve with $i$ nodes as its only singularities.
(2) Each generalized equiclassical locus $EC^k_{C,z}$ is a (possibly reducible) germ of a complex space of pure codimension $\delta(C,z)+k$. A generic element of each component of $EC^k_{C,z}$ is a curve with $\delta(C,z)-k$ nodes and $k$ ordinary cusps as its only singularities. \end{lemma}
It is well-known that $\mt{\mathcal D}_{C,z}=\mu(C,z)$ (the Milnor number), while $\mt EG_{C,z}$ has been computed in \cite{FGS} as the Euler characteristic of an appropriate compactified Jacobian.
Now we switch to the real setting. We call the complex space $V$ real if it is invariant under the (natural) action of the complex conjugation and denote by $\R V$ its real point set. Suppose that $(C,z)$ is real.
\begin{definition}\label{ld1} Let $V\subset B(C,z)$ be an equivariant union of irreducible components of either a Severi locus $EG^i_{C,z}$, $1\le i\le\delta(C,z)$, or a generalized equiclassical locus $EC^k_{C,z}$, $1\le k\le\kappa(C,z)-2\delta(C,z)$, and let $\widehat T_0V$ be a linear subspace of $R(C,z)$ of dimension $\dim V$. Assume that $L_0\subset R(C,z)$ is a real linear subspace of dimension $\dim L_0=\codim_{B(C,z)}V$, which meets $\widehat T_0V$ only at the origin, and let $U(L_0)$ be a neighborhood of the origin such that $L_0\cap V\cap U(L_0)=\{0\}$. For a real affine space $L\subset R(C,z)$ of dimension $\dim L=\dim L_0$ sufficiently close to $L_0$, intersecting $V\cap U(L_0)$ along $V^*$ and with total multiplicity $\mt V$, we set $$W(C,z,V,L)=\sum_{\varphi\in L\cap\R V\cap U(L_0)}w(\varphi),\quad\text{where}\ w(\varphi)=(-1)^{s(\varphi) +ic(\varphi)}\ ,$$ with $s(\varphi)$ being the number of real elliptic\footnote{A real node is called elliptic if it is equivariantly isomorphic to $x^2+y^2=0$.} nodes of ${\mathcal C}_\varphi$ and $ic(\varphi)$ the number of pairs of complex conjugate cusps of ${\mathcal C}_\varphi$. In case of $V=EG_{C,z}$ or $EC_{C,z}$, we write $W^{eg}(C,z,L)$ or $W^{ec}(C,z,L)$, respectively. \end{definition}
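To illustrate Definition \ref{ld1} on a concrete (hypothetical) configuration: suppose that ${\mathcal C}_\varphi$ has two hyperbolic real nodes, one elliptic real node, a pair of complex conjugate nodes, one real cusp, and one pair of complex conjugate cusps. Then $s(\varphi)=1$ and $ic(\varphi)=1$, so $$w(\varphi)=(-1)^{s(\varphi)+ic(\varphi)}=(-1)^{1+1}=+1\ ;$$ note that hyperbolic real nodes, complex conjugate nodes, and real cusps do not affect the sign.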
In what follows we examine the dependence on $L$ and prove some invariance statements.
\section{Singular Welschinger invariant $W^{eg}(C,z)$} The following statement is a consequence of \cite[Lemma 15]{IKS2}. We provide a proof, since in a similar manner we treat other instances of the invariance.
\begin{proposition}\label{lp1} Given a real singularity $(C,z)$, the number $W^{eg}(C,z,L)$ does not depend on the choice of $L$. \end{proposition}
{\bf Proof.} Let $L'_0,L''_0\subset R(C,z)$ be two real linear subspaces of dimension $\delta(C,z)$ transversally intersecting $T_0EG_{C,z}$ at the origin, and let $L',L''\subset R(C,z)$ be real affine subspaces of dimension $\delta(C,z)$, which are sufficiently close to $L'_0,L''_0$, respectively, in the sense of Definition \ref{ld1}. We can connect the pairs $(L'_0,L')$ and $(L''_0,L'')$ by a generic smooth path $\{L_0(t),L(t)\}_{t\in[0,1]}$ consisting of real linear subspaces $L_0(t)$ of $R(C,z)$ of dimension $\delta(C,z)$, which are transversal to $T_0EG_{C,z}$, and real affine subspaces $L(t)$ of dimension $\delta(C,z)$ sufficiently close to $L_0(t)$ in the sense of Definition \ref{ld1}, $0\le t\le 1$. It follows from Lemma \ref{ll1}(1) that, for all $t\in[0,1]$, the space $L(t)$ intersects $EG_{C,z}$ transversally at each element of $L(t)\cap EG_{C,z}$. Furthermore, all but finitely many spaces $L(t)$ intersect $EG_{C,z}$ along $EG^*_{C,z}$, transversally at each intersection point. The remaining finite subset $F\subset(0,1)$ is such that, for any $\hat t\in F$, the intersection $L(\hat t)\cap EG_{C,z}$ consists of elements of $EG^*_{C,z}$ and one real element $\varphi$ belonging to a codimension one substratum of $EG_{C,z}$. The classification of these codimension one substrata is known (see, for instance, \cite[Theorem 1.4]{DH}): an element $\varphi$ of such a substratum is as follows: \begin{enumerate}\item[(n1)] either ${\mathcal C}_\varphi$ has an ordinary cusp $A_2$ and $\delta(C,z)-1$ nodes, \item[(n2)] or ${\mathcal C}_\varphi$ has a tacnode $A_3$ and $\delta(C,z)-2$ nodes, \item[(n3)] or ${\mathcal C}_\varphi$ has a triple point $D_4$ and $\delta(C,z)-3$ nodes. \end{enumerate}
In cases (n2) and (n3), the stratum $EG_{C,z}$ is smooth at $\varphi$ (cf. Lemma \ref{ll1}(1)), and the deformation of ${\mathcal C}_\varphi$ under the variation of $L(t)$ induces independent equivariant deformations of all (smooth) local branches of ${\mathcal C}_\varphi$ at the non-nodal singular point. Then the exponent $s(\psi)$ (see Definition \ref{ld1}) for any real nodal curve ${\mathcal C}_\psi$, $\psi\in EG_{C,z}$ close to ${\mathcal C}_\varphi$ always equals, modulo $2$, the number of elliptic nodes of ${\mathcal C}_\varphi$ plus the intersection number of complex conjugate local branches of ${\mathcal C}_\varphi$ at the non-nodal singular point. Thus, the crossing of these strata does not affect $W^{eg}(C,z,L(t))$.
In case (n1), the germ of $B(C,z)$ at $\varphi$ can be represented as $B(A_2)\times B(A_1)^{\delta(C,z)-1}\times(\mathbb{C}^{n-\delta(C,z)-1},0)$ (cf. \cite[Proposition I.1.14 and Theorem I.1.15]{GLS} and \cite[Lemma 13]{IKS2}), where $n=\dim B(C,z)$, $B(A_2)\simeq(\mathbb{C}^2,0)$ is a miniversal deformation base of an ordinary cusp, which, without loss of generality, we can identify with the base of the deformation $\{y^2-x^3-\alpha x-\beta\ :\ \alpha,\beta\in(\mathbb{C}^2,0)\}$, and $B(A_1)\simeq(\mathbb{C},0)$ stands for the versal deformation of an ordinary node. Here $$(EG_{C,z},\varphi)=EG(A_2)\times EG(A_1)^{\delta(C,z)-1}\times(\mathbb{C}^{n-\delta(C,z)-1},0)\ ,$$ where $$EG(A_2)=\left\{\frac{\alpha^3}{27}-\frac{\beta^2}{4}\right\}\subset B(A_2),\quad EG(A_1)=\{0\}\subset B(A_1)\ ,$$ $$T_\varphi EG_{C,z}=EG(A_2)\times\{0\}^{\delta(C,z)-1}\times\mathbb{C}^{n-\delta(C,z)-1}\ .$$ Then the transversality of the intersection of
$L(\hat t)$ and $T_\varphi EG_{C,z}$ yields that the family $\{L(t)\}_{|t-\hat t|<\eta}$ projects to the family of smooth curves $\{L^1(t)\}_{|t-\hat t|<\eta}$ transversal to $T_0EG(A_2)=\{\beta=0\}$. It is easy to see that either $L^1(t)$ does not intersect $EG(A_2)$ in real points, or it intersects $EG(A_2)$ in two real points
$(\alpha_1,\beta_1)$, $(\alpha_2,\beta_2)$ with $\beta_1<0<\beta_2$, where the former point corresponds to a real curve with a hyperbolic node in a neighborhood of the cusp, while the latter one corresponds to a real curve with an elliptic node. Hence, the Welschinger signs of these intersections of $L^1(t)$ with $EG(A_2)$ cancel out, which confirms the constancy of $W^{eg}(C,z,L(t))$, $|t-\hat t|<\eta$, in the considered wall-crossing.
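The cancellation in this wall-crossing can be checked numerically. The following sketch (an addition, not part of the original text) uses the normalization $y^2=x^3+ax+b$, for which the nodal members of the family lie on $4a^3+27b^2=0$; the signs of the parameters may differ from the normalization chosen above. For a fixed generic value of $a$ it verifies that the two real nodal members carry one hyperbolic and one elliptic node, so their Welschinger signs indeed cancel.

```python
import math

def nodal_b_values(a):
    """Real values of b with 4*a**3 + 27*b**2 = 0 (requires a <= 0)."""
    assert a <= 0
    b = math.sqrt(-4 * a**3 / 27)
    return b, -b

def node_type(a, b, tol=1e-9):
    """Classify the node of y^2 = x^3 + a*x + b on the nodal wall.
    The double root x0 of x^3 + a*x + b satisfies 3*x0**2 + a = 0, and
    x^3 + a*x + b = (x - x0)**2 * (x - x1) with x1 = -2*x0 (the roots sum
    to zero).  Near the node y^2 ~ (x0 - x1)*(x - x0)**2: two real branches
    (hyperbolic) if x0 > x1, an isolated real point (elliptic) if x0 < x1."""
    x0 = next(r for r in (math.sqrt(-a / 3), -math.sqrt(-a / 3))
              if abs(r**3 + a * r + b) < tol)
    x1 = -2 * x0
    return "hyperbolic" if x0 > x1 else "elliptic"

a = -3.0
b1, b2 = nodal_b_values(a)   # the two real nodal members of the pencil
# One node is hyperbolic, the other elliptic: the signs cancel.
assert {node_type(a, b1), node_type(a, b2)} == {"hyperbolic", "elliptic"}
```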
$\Box$
We mention also two more useful properties of the invariant $W^{eg}(C,z)$.
\begin{lemma}\label{ll3} (1) The number $W^{eg}(C,z)$ is an invariant of a real equisingular deformation class. That is, if $(C_t,z)_{t\in[0,1]}$ is an equisingular\footnote{``Equisingular'' means ``preserving the (complex) topological type''.} family of real singularities, then $W^{eg}(C_0,z)=W^{eg}(C_1,z)$.
(2) Let $(C,z)=\bigcup_{i}(C_i,z)$ be the decomposition of a real singularity $(C,z)$ into irreducible over $\R$ components. Then $W^{eg}(C,z)=\prod_iW^{eg}(C_i,z)$. \end{lemma}
{\bf Proof.} (1) It is sufficient to verify the local constancy of $W^{eg}(C,z)$ in real equisingular deformations. Recall that the equisingular stratum $ES_{C,z}\subset B(C,z)$ is a smooth subvariety germ. Furthermore, for $N>\mu(C,z)+1$, the germ of $B(C,z)$ at any point $\psi\in ES_{C,z}$ is a versal deformation of the singularity ${\mathcal C}_\psi$. Then the equality $W^{eg}(C,z)=W^{eg}({\mathcal C}_\psi)$ follows by the argument in the proof of Proposition \ref{lp1}.
(2) The second statement of the lemma follows from the fact that an equigeneric deformation of $(C,z)$ induces independent equigeneric deformations of the components $(C_i,z)$ and vice versa (see \cite[Theorem 1, page 73]{Tei}, \cite[Corollary 3.3.1]{ChL}, and also
\cite[Theorem II.2.56]{GLS}), and from the fact that the deformed components $(C_i,z)$ and $(C_j,z)$, $i\ne j$, can intersect only in hyperbolic real nodes and in complex conjugate nodes.
$\Box$
\section{Singular Welschinger invariants associated with $EG^{\delta(C,z)-1}_{C,z}$ and ${\mathcal D}_{C,z}$}\label{lsec1}
The key ingredient of the proof of Proposition \ref{lp1} is that the tangent cone to the equigeneric stratum $EG_{C,z}$ is a linear space of dimension equal to $\dim EG_{C,z}$. We intend to establish a similar statement for $EG^{\delta(C,z)-1}_{C,z}$.
Recall the following fact used in the sequel: By \cite[Theorem 1.1]{Sch} the closure of each irreducible component of $EG^{\delta(C,z)-1}_{C,z}$ contains $EG_{C,z}$, and a generic element of such a component can be obtained by smoothing a node of an element of $EG^*_{C,z}$.
\begin{lemma}\label{ll2} For the following real substrata $V\subset EG^{\delta(C,z)-1}_{C,z}$, the tangent cones $\widehat T_0\R V$ are linear subspaces of $\R R(C,z)$ of (real) codimension \mbox{$\delta(C,z)-1=\codim \R EG^{\delta(C,z)-1}_{C,z}$}: \begin{enumerate}\item[(i)] $(C,z)$ contains a real singular local branch $(C',z)$, and $V\subset EG^{\delta(C,z)-1}_{C,z}$ is the union of those irreducible components of $EG^{\delta(C,z)-1}_{C,z}$, which contain nodal curves obtained from the curves ${\mathcal C}_\varphi$, $\varphi\in EG^*_{C,z}$, by smoothing out a real node on the component of ${\mathcal C}_\varphi$ corresponding to the local branch $(C',z)$; \item[(ii)] $(C,z)$ contains a pair of complex conjugate local branches $(C',z)$, $(C'',z)$, and $V\subset EG^{\delta(C,z)-1}_{C,z}$ is the union of those irreducible components of $EG^{\delta(C,z)-1}_{C,z}$, which contain nodal curves obtained from the curves ${\mathcal C}_\varphi$, $\varphi\in EG^*_{C,z}$, by smoothing out a real intersection point on the components of ${\mathcal C}_\varphi$ corresponding to the local branches $(C',z)$, $(C'',z)$. \end{enumerate} \end{lemma}
{\bf Proof.} (i) Notice, first, that $V$ can be identified with $EG^{\delta(C',z)-1}_{C',z}\times EG_{C'',z}$, where $(C'',z)$ is the union of the local branches of $(C,z)$ different from $(C',z)$. Hence, we can simply assume that $(C,z)$ is irreducible.
If ${\mathcal C}_\varphi$, $\varphi\in EG^{\delta(C,z)-1}_{C,z}$, has precisely $\delta(C,z)-1$ nodes as its only singularities, then the tangent space $T_\varphi EG^{\delta(C,z)-1}_{C,z}$ can be identified with the space of elements $\psi\in R(C,z)$ vanishing at the nodes of ${\mathcal C}_{\varphi}$. It has codimension $\delta(C,z)-1$, and we have the following bound for the intersection: $$({\mathcal C}_{\psi}\cdot{\mathcal C}_\varphi)\ge2\delta(C,z)-2,\quad \psi\in T_\varphi EG^{\delta(C,z)-1}_{C,z}\ .$$ Hence, any limit of the tangent spaces $T_\varphi EG^{\delta(C,z)-1}_{C,z}$ as $\varphi\to0$ is contained in the linear space
$$\{\psi\in R(C,z)\ :\ \operatorname{ord}\psi\big|_{(C,z)}\ge2\delta(C,z)-2\}$$ of codimension at most $\delta(C,z)-1$. By \cite[Proposition 5.8.6]{CA} we have
$$\{\psi\in R(C,z)\ :\ \operatorname{ord}\psi\big|_{(C,z)}\ge2\delta(C,z)-1\}$$ $$\qquad=
\{\psi\in R(C,z)\ :\ \operatorname{ord}\psi\big|_{(C,z)}\ge2\delta(C,z)\}= J^{cond}_{C,z}/{\mathfrak m}_z^N\ .$$ Hence
$$\codim\{\psi\in R(C,z)\ :\ \operatorname{ord}\psi\big|_{(C,z)}\ge2\delta(C,z)-2\}$$ $$\ge\codim J^{cond}_{C,z}/{\mathfrak m}_z^N-1=\delta(C,z)-1\ ,$$ and we are done.
(ii) As in the preceding case, we can assume that $(C,z)=(C',z)\cup(C'',z)$. The above argument yields that the limits of the tangent spaces $T_\varphi\R V$, as $\varphi\in\R V^*$ tends to $0$, are contained in the linear subspace
$$\{\psi\in\R R(C,z)\ :\ \operatorname{ord}\psi\big|_{(C',z)}=\operatorname{ord}\psi\big|_{(C'',z)}\ge2\delta(C',z)+(C'\cdot C'')_z-1\}$$ which then must be of (real) codimension at most $\delta(C,z)-1$. So, it remains to show that the latter codimension equals exactly $\delta(C,z)-1$, and we will prove that the complex codimension of the space
$$\{\psi\in R(C,z)\ :\ \operatorname{ord}\psi\big|_{(C',z)}=\operatorname{ord}\psi\big|_{(C'',z)}\ge2\delta(C',z)+(C'\cdot C'')_z-1\}$$ is at least $\delta(C,z)-1$. Namely, we just impose an extra linear condition and show that the resulting space
$$\Lambda=\{\psi\in R(C,z)\ :\ \operatorname{ord}\psi\big|_{(C',z)}\ge2\delta(C',z)+(C'\cdot C'')_z\ ,$$
$$\qquad\qquad\qquad\qquad\operatorname{ord}\psi\big|_{(C'',z)}\ge2\delta(C'',z)+(C'\cdot C'')_z-1\}$$ has codimension $\ge\delta(C,z)$. Write $f=f'f''$, where $f'=0$ and $f''=0$ are equations of $(C',z)$, $(C'',z)$, respectively. By Noether's theorem in the form of \cite[Theorem II.2.1.26]{GLS2}, any $\psi\in\Lambda$ can be represented as $\psi=af'+bf''$, where $a,b\in R(C,z)$ and
$$\operatorname{ord} a\big|_{(C'',z)}\ge2\delta(C'',z)-1,\quad \operatorname{ord} b\big|_{(C',z)}\ge2\delta(C',z)\ .$$ Again by \cite[Proposition 5.8.6]{CA}, the former inequality yields
$$\operatorname{ord} a\big|_{(C'',z)}\ge2\delta(C'',z)\ ,$$ which finally implies that $\Lambda\subset J^{cond}_{C,z}/{\mathfrak m}_z^N$, and hence $\codim\Lambda\ge\delta(C,z)$.
$\Box$
\begin{proposition}\label{lp2} Let $V\subset EG^{\delta(C,z)-1}_{C,z}$ satisfy the hypotheses of one of the cases in Lemma \ref{ll2}. Then $W(C,z,V,L)$ does not depend on the choice of the real affine space $L$ as in Definition \ref{ld1}. \end{proposition}
{\bf Proof.} We closely follow the argument in the proof of Proposition \ref{lp1}. The classification of codimension one substrata of $V$ contains the cases (n1)-(n3) as in the proof of Proposition \ref{lp1}, and one additional case: \begin{enumerate}\item[(n4)] the substratum is $EG_{C,z}$ (i.e., its generic element $\varphi$ has $\delta(C,z)$ nodes). \end{enumerate} The analysis of the cases (n1)-(n3) literally coincides with that in the proof of Proposition \ref{lp1}. In case (n4), the germ of $\R V$ at $\varphi$ consists of $k$ pairwise transversal smooth real germs of codimension $\delta(C,z)-1$ in $\R R(C,z)$, where $k$ is the number of real nodes $p$ of the curve ${\mathcal C}_\varphi$ such that the smoothing of $p$ yields an element of $\R V$ (depending on $V$ as defined in Lemma \ref{ll2}). For any smooth germ $M$ in this union, the intersection $L(t)\cap M$, $0<|t-\hat t|<\eta$, yields a curve ${\mathcal C}_\psi$ whose Welschinger sign depends only on the real nodes of ${\mathcal C}_\varphi$ different from $p$, and hence does not depend on $t$.
$\Box$
By Lemma \ref{ll1}(3), the tangent cone $\widehat T_0{\mathcal D}_{C,z}$ is a hyperplane. As in the preceding case, this yields
\begin{proposition}\label{lp4} Given a real singularity $(C,z)$, the number $W^{discr}(C,z,L):=W(C,z,{\mathcal D}_{C,z},L)$ does not depend on the choice of a real line $L$. \end{proposition}
The proof literally follows the arguments in the proofs of Propositions \ref{lp1} and \ref{lp2}.
\section{Singular Welschinger invariants associated with $EC^k_{C,z}$}\label{lsec2}
We start with the equiclassical stratum $EC_{C,z}$, which is the most interesting.
\begin{proposition}\label{lp3} (1) Given a real singularity $(C,z)$, the number $W^{ec}(C,z,L)$ does not depend on the choice of $L$.
(2) The number $W^{ec}(C,z)$ is an invariant of a real equisingular deformation class. That is, if $(C_t,z)_{t\in[0,1]}$ is an equisingular family of real singularities, then $W^{ec}(C_0,z)=W^{ec}(C_1,z)$.
(3) Let $(C,z)=\bigcup_{i}(C_i,z)$ be the decomposition of a real singularity $(C,z)$ into components irreducible over $\R$. Then $W^{ec}(C,z)=\prod_iW^{ec}(C_i,z)$. \end{proposition}
{\bf Proof.} Again the proof follows the argument in the proof of Proposition \ref{lp1}. So, we accept the initial setting and the notations in the proof of Proposition \ref{lp1}. Then we study the wall-crossings that correspond to codimension one substrata in $EC_{C,z}$. If $\varphi\in EC_{C,z}$ is a general element of a codimension one substratum, then \begin{enumerate}\item[(n1')] either ${\mathcal C}_\varphi$ has $3\delta(C,z)-\kappa(C,z)-1$ nodes and $\kappa(C,z)-2\delta(C,z)+1$ cusps, \item[(n2')] or ${\mathcal C}_\varphi$ has $3\delta(C,z)-\kappa(C,z)-2$ nodes, $\kappa(C,z)-2\delta(C,z)$ cusps, and one tacnode $A_3$, \item[(n3')] or ${\mathcal C}_\varphi$ has $3\delta(C,z)-\kappa(C,z)-3$ nodes, $\kappa(C,z)-2\delta(C,z)$ cusps, and one triple point $D_4$, \item[(c1')] or ${\mathcal C}_\varphi$ has $3\delta(C,z)-\kappa(C,z)-1$ nodes, $\kappa(C,z)-2\delta(C,z)-1$ cusps, and one singularity $A_4$, \item[(c2')] or ${\mathcal C}_\varphi$ has $3\delta(C,z)-\kappa(C,z)-2$ nodes, $\kappa(C,z)-2\delta(C,z)-1$ cusps, and one singularity $D_5$, \item[(c3')] or ${\mathcal C}_\varphi$ has $3\delta(C,z)-\kappa(C,z)-1$ nodes, $\kappa(C,z)-2\delta(C,z)-2$ cusps, and one singularity $E_6$. \end{enumerate}
First, we notice that the wall-crossings of types (n1'), (n2'), (n3') are completely similar to the wall-crossings (n1), (n2), (n3), respectively, considered in the proof of Proposition \ref{lp1}, since they involve only the nodal part of the singularities of degenerating elements of $\R EC^*_{C,z}$. Hence, the constancy of $W^{ec}(C,z,L(t))$, $|t-t^*|<\eta$, follows in the same way.
Next we explain why (c1'), (c2'), (c3') are the only codimension one substrata of $EC_{C,z}$ that involve cusps of the degenerating elements of $\R EC^*_{C,z}$. To this end, we show that any other collection of singularities of ${\mathcal C}_\varphi$ can be deformed into $3\delta(C,z)-\kappa(C,z)$ nodes and $\kappa(C,z)-2\delta(C,z)$ cusps in two successive non-equisingular deformations. By our assumption, at least one of the non-nodal-cuspidal singularities of ${\mathcal C}_\varphi$ must contain a singular local branch. Thus, \begin{itemize}\item if ${\mathcal C}_\varphi$ has at least two non-nodal-cuspidal singularities, we, first, deform one such singularity into nodes and cusps (along its equiclassical deformation), then all other singularities; \item if the non-nodal-cuspidal singularity of ${\mathcal C}_\varphi$ has at least three local branches (one of which, denoted $P$, is singular), we, first, shift away a branch different from $P$, then equiclassically deform the obtained curve into a nodal-cuspidal one;
\item if the non-nodal-cuspidal singularity of ${\mathcal C}_\varphi$ has two singular branches $P_1,P_2$, we, first, shift $P_2$ so that $P_2$ remains centered at a smooth point of $P_1$, then equiclassically deform the obtained curve into a nodal-cuspidal one;
\item if the non-nodal-cuspidal singularity of ${\mathcal C}_\varphi$ has two branches, $P_1$ smooth and $P_2$ singular, which is different from an ordinary cusp, then we, first, equiclassically deform the local branch $P_2$ into nodes and (necessarily appearing) cusps, while keeping one cusp centered on $P_1$, then deform the obtained triple singularity into
nodes and one cusp;
\item if the non-nodal-cuspidal singularity of ${\mathcal C}_\varphi$ has two branches, $P_1$ smooth and $P_2$ singular of type $A_2$, which is tangent to $P_1$, then we, first, rotate $P_1$ so that it becomes transversal to $P_2$, then deform the obtained singularity $D_5$ into two nodes and one cusp;
\item if the non-nodal-cuspidal singularity of ${\mathcal C}_\varphi$ is unibranch either of multiplicity $m\ge3$ and not of the topological type $y^m+x^{m+1}=0$, or of multiplicity $2$ and not of type $A_4$, then we, first, equigenerically deform this singularity into some nodes and a singularity of topological type $y^m+x^{m+1}=0$, if $m\ge3$, or a singularity $A_4$, if $m=2$ (this can be done by the blow-up construction
as in the proof of \cite[Theorem 1]{AC}, see also \cite[Section 2.1]{LS}), then equiclassically deform the obtained curve into a nodal-cuspidal one;
\item if the non-nodal-cuspidal singularity of ${\mathcal C}_\varphi$ is of the topological type $y^m+x^{m+1}=0$, $m\ge4$, then the codimension of its equisingular stratum in a versal deformation base equals $\frac{m^2+3m}{2}-3$, while the codimension of the equiclassical stratum equals
$$\kappa(\{y^m+x^{m+1}=0\})-\delta(\{y^m+x^{m+1}=0\})=\frac{m^2+m}{2}-1$$
$$=\left(\frac{m^2+3m}{2}-3\right)-(m-2)\le\left(\frac{m^2+3m}{2}-3\right)-2\ .$$
\end{itemize}
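For concreteness, the smallest case $m=4$ of the last codimension comparison reads as follows (this numerical check is added for illustration and is not part of the original argument).

```latex
% Sanity check for m = 4 (the smallest case allowed above):
% equisingular codimension: (m^2+3m)/2 - 3 = (16+12)/2 - 3 = 11,
% equiclassical codimension: (m^2+m)/2 - 1 = (16+4)/2 - 1 = 9,
% and indeed 9 = 11 - (m-2) <= 11 - 2, as claimed.
\[
\frac{4^2+3\cdot 4}{2}-3=11,\qquad
\frac{4^2+4}{2}-1=9=11-(4-2)\le 11-2\, .
\]
```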
Now we analyze the wall-crossings of type (c1'), (c2'), and (c3') as described above.
In case (c1'), the miniversal unfolding of an $A_4$ singularity $y^2=x^5$ is given by the family $y^2=x^5+a_3x^3+a_2x^2+a_1x+a_0$ with the base $B=\{(a_0,...,a_3)\in(\mathbb{C}^4,0)\}$, while the equiclassical locus
$EC\subset B$ is a curve given by $y^2=(x-2t)^3(x+3t)^2$, $t\in(\mathbb{C},0)$. This curve has an ordinary cusp at the origin. The natural projection of the germ of $B(C,z)$ at ${\mathcal C}_\varphi$ onto $B$ takes the affine spaces $L(t)$, $|t-t^*|<\eta$, to real three-dimensional affine spaces transversal to the tangent line to
$EC$ at the origin. Similarly to the case (n1) in the proof of Proposition \ref{lp1}, in the considered bifurcation, two real intersections with $EC$, one corresponding to a curve with a cusp and a hyperbolic node and the other corresponding to a curve with a cusp and an elliptic node, turn in the wall-crossing into two complex conjugate intersections, and hence the constancy of $W^{ec}(C,z,L(t))$, $|t-t^*|<\eta$, follows.
In case (c2'), the equiclassical locus in a miniversal deformation base of a singularity $D_5$ given, say, by $x(y^2-x^3)=0$ is smooth and can be described by a family $(x-t)(y^2-x^3)=0$. So, in the considered wall-crossing a real curve with a cusp and two hyperbolic nodes turns into a curve with a cusp and two complex conjugate nodes, and hence the constancy of $W^{ec}(C,z,L(t))$, $|t-t^*|<\eta$, follows.
In case (c3'), again the equiclassical locus in a miniversal deformation base of a singularity $E_6$
is smooth (cf. \cite[Theorem 27]{Di}) and one-dimensional. It is not difficult to show that one half branch of $\R EC(E_6)$ parameterizes curves with two real cusps and one hyperbolic node, while the other half branch parameterizes curves with two complex conjugate cusps and one elliptic node. Thus, the constancy of $W^{ec}(C,z,L(t))$, $|t-t^*|<\eta$, follows.
$\Box$
The other loci $EC^k_{C,z}$, $1\le k<\kappa(C,z)-2\delta(C,z)$, may be reducible. Assume that $(C,z)=(C_1,z)\cup...\cup(C_s,z)$ is the splitting into irreducible (over $\mathbb{C}$) components. Given a partition $\overline k=(k_1,...,k_s)$ such that \begin{equation}k_1+...+k_s=k,\quad0\le k_i\le\kappa(C_i,z)-2\delta(C_i,z),\ i=1,...,s, \label{le4}\end{equation} we define the substratum $EC^{\overline k}_{C,z}\subset EC^k_{C,z}$, which is the union of those irreducible components of $EC^k_{C,z}$ whose generic elements $\varphi$ are such that ${\mathcal C}_\varphi={\mathcal C}_{1,\varphi}\cup...\cup{\mathcal C}_{s,\varphi}$ with ${\mathcal C}_{i,\varphi}\in EC^{k_i}_{C_i,z}$, $i=1,...,s$.
\begin{lemma}\label{ll5} In the above notation, the tangent cone $\widehat T_0EC^{\overline k}_{C,z}$ is a linear subspace of $R(C,z)$ of codimension $k+\delta(C,z)=\codim EC^{\overline k}_{C,z}$. \end{lemma}
{\bf Proof.} It is sufficient to treat the case of an irreducible singularity $(C,z)$. Let $\varphi$ be a generic element of a component of $EC^k_{C,z}$. The tangent space $T_\varphi EC^k_{C,z}$ at $\varphi$ can be identified with the space $$\{\psi\in R(C,z)\ :\ \psi(\Sing({\mathcal C}_\varphi))=0,\ \operatorname{ord}\psi\big|_P\ge3\ \text{for each cuspidal local branch}\ P\},$$ and hence the limit of each sequence of tangent spaces $T_\varphi EC^k_{C,z}$ as $\varphi\to0$ is contained in the linear space
$$\{\psi\in R(C,z)\ :\ \operatorname{ord}\psi\big|_{C,z}\ge2\delta(C,z)+k\}\ .$$ It remains to notice that
$$\codim\{\psi\in R(C,z)\ :\ \operatorname{ord}\psi\big|_{C,z}\ge2\delta(C,z)+k\}=\delta(C,z)+k\ .$$ The latter follows, for instance, from \cite[Propositions 5.8.6 and 5.8.7]{CA}.
$\Box$
As a corollary we obtain
\begin{proposition}\label{lp6} Given a real singularity $(C,z)$ splitting into irreducible (over $\mathbb{C}$) components $(C_i,z)$, $i=1,...,s$, and a sequence $\overline k=(k_1,...,k_s)$ satisfying (\ref{le4}) and an extra condition $k_i=k_j$ as long as $(C_i,z)$ and $(C_j,z)$ are complex conjugate, the locus $EC^{\overline k}_{C,z}$ is real, and the number $W(C,z,EC^{\overline k}_{C,z},L)$ does not depend on the choice of $L$. \end{proposition}
The proof literally coincides with the proof of Proposition \ref{lp3}.
\section{Example: singularities of type $A_n$}\label{lsec4}
A complex singularity of type $A_n$ is analytically isomorphic to the canonical one $\{y^2-x^{n+1}=0\}\subset(\mathbb{C}^2,0)$, and its miniversal deformation can be chosen to be $$\left\{y^2-x^{n+1}-\sum_{i=0}^{n-1}a_ix^i=0\right\}_{a_0,...,a_{n-1}\in(\mathbb{C},0)}$$ with the base $B(A_n)=\{(a_0,...,a_{n-1})\in(\mathbb{C}^n,0)\}$.
\begin{lemma}\label{ll6} (1) For any $n\ge1$, and $1\le i\le\delta(A_n)=\left[\frac{n+1}{2}\right]$, \begin{equation}\widehat T_0EG^i_{A_n}=\{a_0=...=a_{i-1}=0\}\subset B(A_n)\label{ne10}\end{equation} is the linear subspace of codimension $i=\codim EG^i_{A_n}$.
(2) If $n$ is odd, then $EC_{A_n}=EG_{A_n}$. If $n$ is even, say $n=2k$, then $$\widehat T_0EC_{A_n}=\{a_0=...=a_k=0\}\subset B(A_n)\ .$$ \end{lemma}
{\bf Proof.} Let $(C,z)$ be a canonical singularity of type $A_n$. The tangent space to $EG^i_{C,z}$ at a generic element $\varphi$ consists of $\psi\in B(C,z)$ such that ${\mathcal C}_\psi$ passes through all $i$ nodes of ${\mathcal C}_\varphi$, and hence $({\mathcal C}_\psi\cdot{\mathcal C}_\varphi)_{D(C,z)}\ge2i$. It follows that the limit of any sequence of these tangent spaces as $\varphi\to0$ is contained in the linear space $\{\psi\in B(C,z)\ :\ ({\mathcal C}_\psi\cdot C)_z\ge2i\}$, which one can easily identify with the space in the right-hand side of (\ref{ne10}). So, the first claim of the lemma follows for dimension reasons. The same argument settles the second claim.
$\Box$
\begin{proposition}\label{lp5} For any $n\ge1$ and $k\ge1$, we have $$\mt EG^i_{A_n}=\binom{n+1-i}{i},\quad\text{for all}\quad i=1,...,\delta(A_n)=\left[\frac{n+1}{2}\right]\ ,$$ $$\text{and}\quad\mt EC(A_{2k})=k\ .$$ \end{proposition}
\begin{remark}\label{nr1} The multiplicities $\mt EG^i_{A_n}$ were computed in \cite[Section 5, page 540]{She}. Here we provide another, more explicit computation, which will be used below for computing singular Welschinger invariants. \end{remark}
{\bf Proof.} (1) If $n+1=2i$, then $EG^i_{A_n}=EG(A_n)=EC(A_n)$ is smooth; hence, the multiplicity equals $1$. Thus, suppose that $n+1>2i$. By Lemma \ref{ll6}(1), the question on $\mt EG^i_{A_n}$ reduces to the following one: How many polynomials $P(x)$ of degree $\le i-1$ satisfy the condition \begin{equation}x^{n+1}+x^i+P(x)=Q(x)^2R(x)\ ,\label{ne30}\end{equation} where $Q,R$ are monic polynomials of degree $i$, $n+1-2i$, respectively?
Combining relation (\ref{ne30}) with its derivative, we obtain $$(n+1-i)x^i+((n+1)P-xP')=\left((n+1)QR-2xQ'R-xQR'\right)Q\ ,$$ which immediately yields \begin{equation}(n+1)QR-2xQ'R-xQR'=n+1-i\ .\label{ne31}\end{equation} Substituting $$Q(x)=x^i+\sum_{j=1}^i\alpha_jx^{i-j},\quad R(x)=x^{n+1-2i}+\sum_{j=1}^{n+1-2i}\beta_jx^{n+1-2i-j}$$ into (\ref{ne31}), we obtain that the terms of the top degree $n+1-i$ cancel out, while the coefficients of $x^m$, $m=0,...,n-i$, yield the system of equations \begin{equation}\begin{cases}&2\alpha_1+\beta_1=0,\\ &2j\alpha_j+j\beta_j+\sum_{0<m<j}c_{jm}\alpha_{j-m}\beta_m=0,\quad j=2,...,n-i,\\ &(n+1)\alpha_i\beta_{n+1-2i}=n+1-i,\end{cases}\label{ne32}\end{equation} where we assume $\alpha_j=0$ as $j>i$ and $\beta_j=0$ as $j>n+1-2i$.
Suppose that $\deg Q=i\ge\deg R=n+1-2i$. From the first $(n+1-2i)$ equations in (\ref{ne32}) we express $\beta_j$ as a polynomial in $\alpha_1,...,\alpha_i$ of homogeneity degree $j$, where $\alpha_m$ has weight $m$, for all $j=1,...,n+1-2i$. Substituting these expressions into the other equations, we obtain a system of $i$ equations in $\alpha_1,...,\alpha_i$ of homogeneity degrees $n+2-2i,...,n+1-i$, respectively. Thus (cf. the computation in \cite[Section G, Example 1]{FGS}), the number of solutions (counted with multiplicities) appears to be $$\frac{(n+2-2i)\cdot...\cdot(n+1-i)}{i!}=\binom{n+1-i}{i}$$ as required. In the same way we treat the case when $\deg Q=i\le\deg R=n+1-2i$.
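As an illustration of the count just obtained (an added check, not part of the proof), consider $n=5$, $i=1$, where the formula predicts $\binom{5}{1}=5$ solutions.

```latex
% For n = 5, i = 1 condition (ne30) asks for the constant a_0 such that
% x^6 + x + a_0 acquires a double root x_0. Such a root must satisfy
% 6 x_0^5 + 1 = 0, which has exactly five solutions, and each of them
% determines a_0 = -(x_0^6 + x_0); hence five solutions, as predicted.
\[
\frac{d}{dx}\left(x^{6}+x+a_0\right)=6x^{5}+1=0,\qquad
a_0=-(x_0^{6}+x_0)\ \text{for each of the five roots}\ x_0\, .
\]
```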
(2) For $n=2k$, by Lemma \ref{ll6}(2), the question on $\mt EC(A_{2k})$ reduces to the following one: How many polynomials $P(x)$ of degree $k$ satisfy the condition \begin{equation}x^{2k+1}+x^{k+1}+P(x)=Q(x)^2(x+\beta)^3\ ,\label{ne33}\end{equation} where $Q(x)$ is a monic polynomial of degree $k-1$?
The preceding argument subsequently gives an equation $$(2k+1)(x+\beta)Q-3xQ-2x(x+\beta)Q'=k$$ with $Q(x)=x^{k-1}+\sum_{j=1}^{k-1}\alpha_jx^{k-1-j}$, which develops into the system \begin{equation}\begin{cases}&2\alpha_1+3\beta=0,\\ &2j\alpha_j+(2j+1)\alpha_{j-1}\beta=0,\quad j=2,...,k-1,\\ &(2k+1)\alpha_{k-1}\beta=k,\end{cases}\label{ne34}\end{equation} admitting a simplification of the form $$\alpha_j=\nu_j\beta^j,\ j=1,...,k-1,\quad (2k+1)\nu_{k-1}\beta^k=k$$ with some $\nu_1,...,\nu_{k-1}\in\Q$. So, we finally obtain $k$ solutions as required.
$\Box$
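To illustrate part (2) of the proof (this direct verification is our addition), take $k=2$ and $Q(x)=x+\alpha$ in (\ref{ne33}).

```latex
% Expanding (x+alpha)^2 (x+beta)^3 and matching it with x^5 + x^3 + P(x):
% the x^4-coefficient gives 2 alpha + 3 beta = 0, and the x^3-coefficient
% gives alpha^2 + 6 alpha beta + 3 beta^2 = 1. Substituting
% alpha = -3 beta/2 yields -(15/4) beta^2 = 1, i.e. beta^2 = -4/15,
% which has exactly k = 2 solutions, both non-real. This agrees with
% mt EC(A_4) = 2 and with W^{ec}(A_4) = 0 (Lemma \ref{nl2}).
\[
\alpha=-\tfrac{3\beta}{2}\ \Longrightarrow\
\Bigl(\tfrac{9}{4}-9+3\Bigr)\beta^{2}=-\tfrac{15}{4}\,\beta^{2}=1
\ \Longrightarrow\ \beta^{2}=-\tfrac{4}{15}\, .
\]
```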
Now we pass to the real setting. The complex singularity of type $A_n$ has a unique real form $y^2=x^{2k+1}$ if $n=2k$, and has two real forms $y^2=x^{2k}$ and $y^2=-x^{2k}$ (denoted by $A_{2k-1}^h$ and $A_{2k-1}^e$, respectively) if $n=2k-1$.
\begin{lemma}\label{nl2} (1) For all $k\ge1$ and $i=1,...,k$, there exist singular Welschinger invariants \begin{equation}W\left(A^h_{2k-1},EG^i_{A^h_{2k-1}}\right),\ W\left(A^e_{2k-1},EG^i_{A^e_{2k-1}}\right),\ \text{and}\ W\left(A_{2k},EG^i_{A_{2k}}\right)\ .\label{ne20}\end{equation} (2) Furthermore, $$W^{eg}(A_{2k-1}^e)=(-1)^k,\quad W^{eg}(A_{2k-1}^h)=1\ ,$$ $$W^{eg}(A_{2k})=\begin{cases}0,\quad &k\equiv1\mod2,\\ 1,\quad &k\equiv0\mod2,\end{cases}$$ $$W^{ec}(A_{2k})=\begin{cases}0,\quad & k\equiv0\mod2,\\ 1,\quad & k\equiv1\mod2.\end{cases}$$ \end{lemma}
{\bf Proof.} The existence of the invariants (\ref{ne20}) follows from Lemma \ref{ll6} and the argument used in the proof of Propositions \ref{lp1} and \ref{lp2}.
Since $\mt EG(A_{2k-1})=1$, we have $W^{eg}=\pm1$ for $A^h_{2k-1}$ and $A^e_{2k-1}$. More precisely, an equigeneric nodal deformation of $A^h_{2k-1}$ has the form $y^2-Q(x)^2=0$, $\deg Q=k$, and hence it has only hyperbolic real nodes, i.e., $W^{eg}(A^h_{2k-1})=1$, while an equigeneric nodal deformation of $A^e_{2k-1}$ has the form $y^2+Q(x)^2=0$, $\deg Q=k$, and hence it has only elliptic real nodes, whose number is of the same parity as $k$, i.e., $W^{eg}(A^e_{2k-1})=(-1)^k$.
Consider singularities $A_{2k}$. For $EG(A_{2k})=EG^k_{A_{2k}}$, system (\ref{ne32}) takes the form $$\begin{cases}&2\alpha_1+\beta_1=0,\\ &2j\alpha_j+(2j-1)\alpha_{j-1}\beta_1=0,\quad j=2,...,k,\\ &(2k+1)\alpha_k\beta_1=k+1,\end{cases}$$ which yields $$\alpha_j=\lambda_j\beta_1^j,\ (-1)^j\lambda_j>0,\ j=1,...,k,\quad \lambda_k\beta_1^{k+1}=\frac{k+1}{2k+1}\ .$$ So, if $k$ is odd, we have no real solutions, and hence $W^{eg}(A_{2k})=0$. If $k$ is even, then we have a unique real solution such that $\beta_1>0$ and $(-1)^j\alpha_j>0$. That is, $Q(x)$ has only positive real roots (if any), and hence the curve $y^2-(x+\beta_1)Q(x)^2=0$ has only hyperbolic real nodes, i.e., $W^{eg}(A_{2k})=1$.
In the same manner we analyze system (\ref{ne34}) and obtain the values of $W^{ec}(A_{2k})$ as stated in the lemma.
$\Box$
\begin{remark}\label{nr2} (1) The problem of computation of the invariants $W^{eg}$ and $W^{ec}$ for arbitrary real singularities (even for quasihomogeneous singularities) remains wide open. A possible relation to enumerative invariants of (global) plane algebraic curves could be a key to this problem.
(2) The values of $W^{eg}$ and $W^{ec}$ for $A_n$-singularities are $0$ or $\pm1$. The same can be shown for other simple singularities. Is it true for an arbitrary real singularity? \end{remark}
\end{document} | arXiv |
\begin{document}
\title{Left invariant special K\"ahler structures}
\begin{abstract} We construct left invariant special K\"ahler structures on the cotangent bundle of a flat pseudo-Riemannian Lie group. We introduce the twisted cartesian product of two special K\"ahler Lie algebras according to two linear representations by infinitesimal K\"ahler transformations. We also exhibit a double extension process of a special K\"ahler Lie algebra which allows us to get all simply connected special K\"ahler Lie groups with bi-invariant symplectic connections. All Lie groups constructed by performing this double extension process can be identified with a subgroup of symplectic (or K\"ahler) affine transformations of its Lie algebra containing a nontrivial $1$-parameter subgroup formed by central translations. We show a characterization of left invariant flat special K\"ahler structures using \'etale K\"ahler affine representations, exhibit some immediate consequences of the constructions mentioned above, and give several non-trivial examples. \end{abstract}
\tableofcontents \section{Introduction} Throughout this paper we will be dealing with the following geometric object: \begin{definition}\cite{F}\label{MainDefinition}
A \emph{special K\"ahler structure} on a smooth manifold $M$ is a triple $(\omega,J,\nabla)$ where $\omega$ is a symplectic form, $J$ is an integrable almost complex structure, and $\nabla$ is a flat and torsion free connection on $M$ such that:
\begin{enumerate}
\item[$\iota.$] $(M,\omega,J)$ is a pseudo-K\"ahler manifold, that is, $k(X,Y)=\omega(X,JY)$ defines a pseudo-Riemannian metric on $M$,
\item[$\iota\iota.$] $\nabla$ is symplectic with respect to $\omega$, that is, $\nabla \omega=0$ and
\item[$\iota\iota\iota.$] the following formula holds true
\begin{equation}\label{Eq1}
(\nabla_XJ)Y=(\nabla_YJ)X,\qquad X,Y\in\mathfrak{X}(M).
\end{equation}
\end{enumerate}
The quadruple $(M,\omega,J,\nabla)$ will be called a \emph{special K\"ahler manifold}. \end{definition} A flat and torsion free connection $\nabla$ will be called a \emph{flat affine connection}. The fact that $\nabla$ is torsion free implies that identity \eqref{Eq1} is equivalent to requiring \begin{equation}\label{Eq2} J[X,Y]=\nabla_X(JY)-\nabla_Y(JX),\qquad X,Y\in\mathfrak{X}(M). \end{equation}
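The equivalence of \eqref{Eq1} and \eqref{Eq2} is a one-line computation, which we record for convenience.

```latex
% Torsion freeness, nabla_X Y - nabla_Y X = [X,Y], gives
\begin{align*}
(\nabla_XJ)Y-(\nabla_YJ)X
&=\nabla_X(JY)-J\nabla_XY-\nabla_Y(JX)+J\nabla_YX\\
&=\nabla_X(JY)-\nabla_Y(JX)-J[X,Y],
\end{align*}
% so the left-hand side vanishes for all X, Y if and only if (Eq2) holds.
```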
Special cases of this kind of structure are pseudo-K\"ahler manifolds for which the Levi--Civita connection $\nabla$ associated to $k$ is flat. The identity $\iota\iota\iota.$ trivially holds because $\nabla J=0$. The converse is also true in the sense that if $\nabla$ is a flat affine symplectic connection and $\nabla J=0$, then $\nabla$ is the Levi--Civita connection associated to $k$ and $M$ is locally isometric to $\mathbb{C}^n$. Such manifolds are called \emph{flat special K\"ahler manifolds} and they have been characterized in \cite{BC1}. It is important to notice that there are no non-flat complete special K\"ahler manifolds, that is, if $k$ is complete then the Levi--Civita connection associated to it is flat; see \cite{L}.
The notion of special K\"ahler manifold initially appeared in physics and it has its origins in certain supersymmetric field theories \cite{dWVP}. More specifically, affine special K\"ahler manifolds are exactly the allowed targets for the scalars of the vector multiplets of field theories with $N = 2$ rigid supersymmetry on 4-dimensional Minkowski space-time. Also, there exists a tight mathematical relationship between special real manifolds, which come from Hessian geometry and were introduced in \cite{AC}, and special K\"ahler manifolds. This relationship is given through the intrinsic description of an $r$-map which, on the physics side, corresponds to the dimensional reduction of rigid vector multiplets from 5 to 4 space-time dimensions; see \cite{AC}. Two important features of special K\"ahler manifolds are that their cotangent bundle carries the structure of a hyper-K\"ahler manifold and they are bases of algebraically completely integrable systems; see \cite{F}. Some properties and a review of several interesting applications in physics and mathematics where special K\"ahler manifolds appear can be found in \cite{C}.
In this paper we mainly focus on giving three methods for constructing left invariant special K\"ahler structures on simply connected Lie groups. Such geometric objects will be called \emph{special K\"ahler Lie groups}. Accordingly, the infinitesimal objects associated to these Lie groups will be called \emph{special K\"ahler Lie algebras}.
Firstly, we get left invariant special K\"ahler structures on the cotangent bundle of a simply connected flat pseudo-Riemannian Lie group verifying the condition $\nabla^H J=0$. Here $\nabla^H$ denotes the Hess connection associated to the natural left invariant bi-Lagrangian transverse foliations admitted by the cotangent bundle of any simply connected flat affine Lie group. We exhibit the conditions needed to ensure that this connection is geodesically complete. The groups obtained by this method are examples of flat special K\"ahler manifolds.
Secondly, we introduce the twisted cartesian product of two special K\"ahler Lie algebras according to two Lie algebra representations by infinitesimal K\"ahler transformations. This method allows us to obtain examples of non-trivial left invariant special K\"ahler structures in every even dimension $\geq 4$, since the only example in dimension $2$ is the trivial one, namely $(\mathbb{R}^2,\omega_0,J_0,\nabla^0)$. We prove that every special K\"ahler Lie algebra that admits a complex and non-degenerate left ideal can be obtained as the twisted cartesian product of two natural special K\"ahler Lie subalgebras.
Thirdly, we give a double extension process of a special K\"ahler Lie algebra via a real line and according to an infinitesimal linear symplectomorphism which defines a derivation of a left symmetric algebra and commutes with the complex structure. This double extension process gives us all simply connected special K\"ahler Lie groups with bi-invariant symplectic connections. Moreover, all Lie groups constructed as a double extension can be identified with a subgroup of symplectic (or K\"ahler) affine transformations of its Lie algebra containing a nontrivial $1$-parameter subgroup formed by central translations. We end the paper by exhibiting a $1$-dimensional family of left invariant special K\"ahler structures parametrized by $\mathbb{R}$ in dimension $6$ with associated metric having signature $(4,2)$ and verifying $\nabla J\neq 0$.
We also show a characterization of left invariant flat special K\"ahler structures, give several non-trivial examples, and show some immediate consequences of the constructions mentioned above. \section{Left invariant special K\"ahler structures and some examples} For a general understanding of the basic concepts about symplectic Lie groups, left symmetric algebras, special K\"ahler manifolds, and their related topics, the reader is recommended to visit for instance the references \cite{BC2,Ba,Bu,C,K,M,V}. Let us assume that $M=G$ is a connected Lie group with Lie algebra $\mathfrak{g}$ and that all the geometric objects that we are dealing with are left invariant, that is, the left multiplications in $G$ determine symplectomorphisms of $(G,\omega)$, affine transformations of $(G,\nabla)$, and holomorphic maps of $(G,J)$. \begin{definition} A connected Lie group $G$ is called a \emph{special K\"ahler Lie group} if it can be equipped with a special K\"ahler structure $(\omega,J,\nabla)$ where $\omega$, $J$, and $\nabla$ are left invariant. \end{definition}
Note that the pseudo-Riemannian metric $k$ on $G$ induced by $(\omega,J)$ is necessarily left invariant. The infinitesimal object associated to a special K\"ahler Lie group is the following: \begin{definition}\label{definitionLie} A real finite dimensional Lie algebra $\mathfrak{g}$ is called a \emph{special K\"ahler Lie algebra} if it can be equipped with a triple $(\omega,j,\cdot)$ where $\omega\in\wedge^2\mathfrak{g}^\ast$ is a non-degenerate scalar 2-cocycle, $j:\mathfrak{g}\to \mathfrak{g}$ is an integrable complex structure, and $\cdot:\mathfrak{g}\times \mathfrak{g}\to \mathfrak{g}$ is a left symmetric product such that for all $x,y,z\in \mathfrak{g}$ we have: \begin{enumerate} \item[$\iota.$] $(\mathfrak{g},\omega,j)$ is a pseudo-K\"ahler Lie algebra, \item[$\iota\iota.$] $[x,y]=x\cdot y-y\cdot x$, \item[$\iota\iota\iota.$] $\omega(x\cdot y,z)+\omega(y,x\cdot z)=0$, and \item[$\iota\nu.$] $j\in Z^1_L(\mathfrak{g},\mathfrak{g})$. \end{enumerate} Here $Z^1_L(\mathfrak{g},\mathfrak{g})$ denotes the space of Lie algebra 1-cocycles with respect to the linear representation $L:\mathfrak{g}\to \mathfrak{gl}(\mathfrak{g})$ defined by $L_x(y):=x\cdot y$. \end{definition}
For the purposes of this paper it will be important to have the following formulas and definitions in mind: \begin{itemize} \item $\omega$ is a scalar 2-cocycle if it verifies the formula $$\oint\omega([x,y],z)=\omega([x,y],z)+\omega([y,z],x)+\omega([z,x],y)=0,\qquad x,y,z\in \mathfrak{g}.$$ \item $\cdot:\mathfrak{g}\times \mathfrak{g}\to \mathfrak{g}$ is a left symmetric product on $\mathfrak{g}$ if it satisfies $$x\cdot(y\cdot z)-(x\cdot y)\cdot z = y\cdot(x\cdot z)-(y\cdot x)\cdot z,\qquad x,y,z\in\mathfrak{g}.$$ If $(x,y,z):=x\cdot(y\cdot z)-(x\cdot y)\cdot z$ is the associator of $x$, $y$, and $z$ in $\mathfrak{g}$, the last identity means that $(x,y,z)=(y,x,z)$. \item $j\in Z^1_L(\mathfrak{g},\mathfrak{g})$ if $$j([x,y])=L_x(j(y))-L_y(j(x)),\qquad x,y\in\mathfrak{g}.$$ \item As $(\omega,j)$ defines a pseudo-K\"ahler structure on $\mathfrak{g}$, the formula $$k(x,y)=\omega(x,j(y)),\qquad x,y\in\mathfrak{g}$$ defines a non-degenerate symmetric bilinear form on $\mathfrak{g}$, that is, a scalar product. \end{itemize} \begin{remark} The data $(\mathfrak{g},\omega,\cdot)$ verifying the formulas $\iota\iota.$ and $\iota\iota\iota.$ from Definition \ref{definitionLie} is called a \emph{flat affine symplectic Lie algebra}. We will use the constructions of these kinds of objects introduced in \cite{Au,V} for getting examples of left invariant special K\"ahler structures. \end{remark}
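In particular, the left symmetric identity says exactly that $L$ is a Lie algebra homomorphism into $\mathfrak{gl}(\mathfrak{g})$; explicitly:

```latex
% Rewriting (x,y,z) = (y,x,z) in terms of the left multiplications L:
\begin{align*}
L_xL_yz-L_yL_xz
&=x\cdot(y\cdot z)-y\cdot(x\cdot z)\\
&=(x\cdot y)\cdot z-(y\cdot x)\cdot z
 =[x,y]\cdot z=L_{[x,y]}z,
\end{align*}
% so L : g -> gl(g) is indeed the linear representation used to define
% the cocycle space Z^1_L(g,g).
```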
The following result is clear: \begin{lemma} There exists a bijective correspondence between simply connected special K\"ahler Lie groups and special K\"ahler Lie algebras. \end{lemma}
The model space of special K\"ahler Lie group is $((\mathbb{R}^{2n},+),\omega_0,J_0,\nabla^0)$ where $\omega_0$, $J_0$, and $\nabla^0$ are the canonical symplectic form, complex structure, and covariant derivative on $\mathbb{R}^{2n}$, respectively. \begin{lemma}\label{dimension} If $(G,\omega,J,\nabla)$ is a non-Abelian special K\"ahler Lie group, then $\textnormal{dim}G\geq 4$. \end{lemma} The previous lemma is a consequence of the following examples. Let $x^+$ denote the left invariant vector field associated to an element $x\in\mathfrak{g}$. The classification of left invariant flat affine symplectic connections in dimension $2$ can be found in \cite{An}. \begin{example} Aside from $\nabla^0$, up to isomorphism there is another left invariant flat affine symplectic connection on $((\mathbb{R}^{2},+),\omega_0)$ and it is given by $$\nabla_{e_1^+}e_1^+=e_2^+\qquad\textnormal{and}\qquad \nabla_{e_1^+}e_2^+=\nabla_{e_2^+}e_1^+=\nabla_{e_2^+}e_2^+=0.$$ It is easy to see that $(\nabla_{e_1^+}J_0)e_2^+\neq (\nabla_{e_2^+}J_0)e_1^+$. Thus $(\omega_0,J_0,\nabla)$ does not define a left invariant structure of special K\"ahler Lie group on $\mathbb{R}^2$. \end{example}
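For completeness, the inequality claimed in the last example can be verified directly (we use the standard convention $J_0e_1^+=e_2^+$, $J_0e_2^+=-e_1^+$; this explicit computation is our addition):

```latex
% With nabla_{e_1^+} e_1^+ = e_2^+ and all other covariant derivatives zero:
\begin{align*}
(\nabla_{e_1^+}J_0)e_2^+&=\nabla_{e_1^+}(J_0e_2^+)-J_0\nabla_{e_1^+}e_2^+
 =-\nabla_{e_1^+}e_1^+=-e_2^+,\\
(\nabla_{e_2^+}J_0)e_1^+&=\nabla_{e_2^+}(J_0e_1^+)-J_0\nabla_{e_2^+}e_1^+
 =\nabla_{e_2^+}e_2^+=0,
\end{align*}
% so condition iii. of Definition \ref{MainDefinition} fails.
```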
\begin{example} Let $G=\textnormal{Aff}(\mathbb{R})_0$ denote the connected component of the identity of the group of affine transformations of the real line with its natural left invariant symplectic form $\omega=\dfrac{1}{x^2}dx\wedge dy$. Up to isomorphism, there are two left invariant flat affine symplectic connections on $(\textnormal{Aff}(\mathbb{R})_0,\omega)$ and these are given by \begin{enumerate} \item[$\iota.$] $\nabla_{e_1^+}e_1^+=-e_1^+\qquad \nabla_{e_1^+}e_2^+=e_2^+\qquad\textnormal{and}\qquad \nabla_{e_2^+}e_1^+=\nabla_{e_2^+}e_2^+=0$, and \item[$\iota\iota.$] $\overline{\nabla}_{e_1^+}e_1^+=-\dfrac{1}{2}e_1^+\qquad \overline{\nabla}_{e_1^+}e_2^+=\dfrac{1}{2} e_2^+\qquad \overline{\nabla}_{e_2^+}e_1^+=-\dfrac{1}{2} e_2^+ \qquad\textnormal{and}\qquad\overline{\nabla}_{e_2^+}e_2^+=0$. \end{enumerate} Here $e_1^+=x\dfrac{\partial}{\partial x}$ and $e_2^+=x\dfrac{\partial}{\partial y}$ determine a basis for the left invariant vector fields on $\textnormal{Aff}(\mathbb{R})_0$. The natural left invariant complex structure on $\textnormal{Aff}(\mathbb{R})_0$ is defined as $J(e_1^+)=e_2^+$. This is clearly integrable and moreover $$(\nabla_{e_1^+}J)e_2^+\neq (\nabla_{e_2^+}J)e_1^+ \qquad\textnormal{and}\qquad (\overline{\nabla}_{e_1^+}J)e_2^+\neq (\overline{\nabla}_{e_2^+}J)e_1^+.$$ Therefore, $(\omega,J,\nabla)$ and $(\omega,J,\overline{\nabla})$ do not define left invariant special K\"ahler structures on $\textnormal{Aff}(\mathbb{R})_0$. 
\end{example} The following are three positive examples of left invariant special K\"ahler structures: \begin{example}[Dimension 4]\label{ExampleD4} Consider the Lie group $G_1=\mathbb{R}\ltimes_\rho \mathbb{R}^3$ determined by the semi-direct product of $(\mathbb{R},+)$ with $(\mathbb{R}^3,+)$ by means of the Lie group homomorphism $\rho:\mathbb{R}\to \textnormal{GL}(\mathbb{R}^3)$ defined by $$\rho(t)=\left(\begin{array}{ccc} e^t & 0 &0\\ 0 & e^t & 0 \\ 0 & 0 & e^{-t} \end{array} \right).$$ The product in $G_1$ is explicitly given as $$(t,x,y,z)\cdot(t',x',y',z')=(t+t',e^tx'+x,e^ty'+y,e^{-t}z'+z).$$ A basis for the left invariant vector fields on $G_1$ is formed by $$e_1^+=\dfrac{\partial}{\partial t},\qquad e_2^+=e^t\dfrac{\partial}{\partial x},\qquad e_3^+=e^t\dfrac{\partial}{\partial y},\qquad e_4^+=e^{-t}\dfrac{\partial}{\partial z}.$$ Thus, the Lie algebra of $G_1$ is isomorphic to the vector space $\mathfrak{g}_1\cong \textnormal{Vect}_\mathbb{R}\lbrace e_1,e_2,e_3,e_4\rbrace$ with nonzero Lie brackets $$[e_1,e_2]=e_2,\qquad [e_1,e_3]=e_3,\qquad\textnormal{and}\qquad[e_1,e_4]=-e_4.$$ The following data $(\omega,J,\nabla)$ defines a structure of special K\"ahler Lie group on $G_1$: \begin{enumerate} \item[$\iota.$] left invariant symplectic form $\omega=e^{-t}dy\wedge dt+dz\wedge dx$, \item[$\iota\iota.$] left invariant complex structure $J(e_3^+)=e_2^+$ and $J(e_4^+)=e_1^+$, and \item[$\iota\iota\iota.$] left invariant flat affine symplectic connection $$\nabla_{e_1^+}e_1^+=-e_1^+\qquad \nabla_{e_1^+}e_2^+=e_2^+\qquad \nabla_{e_1^+}e_3^+=e_3^+\qquad \nabla_{e_1^+}e_4^+=-e_4^+,\qquad\textnormal{and} $$ $$\nabla_{e_2^+}=\nabla_{e_3^+}=\nabla_{e_4^+}=0.$$ \end{enumerate} \begin{remark} The signature of the pseudo-Riemannian metric determined by $(\omega,J)$ is $(2,2)$ and $\textnormal{Aff}(\mathbb{R})_0$ can be identified with a Lagrangian Lie subgroup in $(G_1,\omega)$. Moreover, $\nabla J=0$. \end{remark} \end{example}
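The bracket relations stated for $\mathfrak{g}_1$ can be read off directly from the left invariant frame (an added verification):

```latex
% Brackets of the coordinate expressions of the frame on G_1:
\[
[e_1^+,e_2^+]=\Bigl[\tfrac{\partial}{\partial t},\,e^{t}\tfrac{\partial}{\partial x}\Bigr]
 =e^{t}\tfrac{\partial}{\partial x}=e_2^+,\qquad
[e_1^+,e_4^+]=\Bigl[\tfrac{\partial}{\partial t},\,e^{-t}\tfrac{\partial}{\partial z}\Bigr]
 =-e^{-t}\tfrac{\partial}{\partial z}=-e_4^+,
\]
% and similarly [e_1^+, e_3^+] = e_3^+, while all remaining brackets vanish.
```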
\begin{example}[Dimension 6]\label{ExampleD6} Let us now consider the Lie group $G_2=\mathbb{R}\ltimes_\rho \mathbb{R}^5$ determined by the semi-direct product of $(\mathbb{R},+)$ with $(\mathbb{R}^5,+)$ by means of the Lie group homomorphism $\rho:\mathbb{R}\to \textnormal{GL}(\mathbb{R}^5)$ defined by $$\rho(t)=\left(\begin{array}{ccccc} 1 & 0 &0 & 0 & 0\\ -t & 1 &0 & 0 & 0\\ 0 & 0 & 1 & 0 & t\\ 0 & 0 &-t & 1 & -t^2/2\\ 0 & 0 &0 & 0 & 1 \end{array} \right).$$ The product in $G_2$ is explicitly given as $$(t,x,y,z,u,v)\cdot(t',x',y',z',u',v')=\left(t+t',x'+x,-tx'+y'+y,z'+tv'+z,-tz'+u'-t^2/2v'+u,v+v'\right).$$ A basis for the left invariant vector fields on $G_2$ is formed by $$e_1^+=\dfrac{\partial}{\partial x}-t\dfrac{\partial}{\partial y},\qquad e_2^+=\dfrac{\partial}{\partial t},\qquad e_3^+=\dfrac{\partial}{\partial y},\qquad e_4^+=\dfrac{\partial}{\partial z}-t\dfrac{\partial}{\partial u},$$ $$e_5^+=\dfrac{\partial}{\partial u},\qquad e_6^+=t\dfrac{\partial}{\partial z}-t^2/2\dfrac{\partial}{\partial u}+\dfrac{\partial}{\partial v}.$$ Therefore, the Lie algebra of $G_2$ is isomorphic to the vector space $\mathfrak{g}_2\cong \textnormal{Vect}_\mathbb{R}\lbrace e_1,e_2,e_3,e_4,e_5,e_6\rbrace$ with nonzero Lie brackets $$[e_1,e_2]=e_3,\qquad [e_2,e_4]=-e_5,\qquad\textnormal{and}\qquad[e_2,e_6]=e_4.$$ The following data $(\omega,J,\nabla)$ defines a special K\"ahler structure on $G_2$: \begin{enumerate}
\item[$\iota.$] left invariant symplectic form $\omega=tdz\wedge dt +du\wedge dt+t^2/2dt\wedge dv+dz\wedge dx+dv\wedge dy$,
\item[$\iota\iota.$] left invariant complex structure $J(e_4^+)=e_1^+$, $J(e_5^+)=e_3^+$ and $J(e_6^+)=e_2^+$, and
\item[$\iota\iota\iota.$] left invariant flat affine symplectic connection
$$\nabla_{e_2^+}e_1^+=-e_3^+\qquad \nabla_{e_2^+}e_2^+=e_1^+\qquad \nabla_{e_2^+}e_4^+=-e_5^+\qquad \nabla_{e_2^+}e_6^+=e_4^+,\qquad \nabla_{e_2^+}e_3^+=\nabla_{e_2^+}e_5^+=0$$
$$\textnormal{and}\qquad\nabla_{e_1^+}=\nabla_{e_3^+}=\nabla_{e_4^+}=\nabla_{e_5^+}=\nabla_{e_6^+}=0.$$ \end{enumerate} \begin{remark} The signature of the pseudo-Riemannian metric determined by $(\omega,J)$ is $(2,4)$ and the 3-dimensional Heisenberg group $H_3$ can be identified with a Lagrangian Lie subgroup in $(G_2,\omega)$. Moreover, $\nabla J=0$. \end{remark} \end{example}
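One can check symbolically that $\rho$ as given is indeed a one-parameter group of matrices, $\rho(t)\rho(t')=\rho(t+t')$. A short sympy sketch:

```python
import sympy as sp

t, s = sp.symbols('t s')

def rho(u):
    """The homomorphism rho: R -> GL(R^5) of the example above."""
    return sp.Matrix([[1,  0,  0, 0, 0],
                      [-u, 1,  0, 0, 0],
                      [0,  0,  1, 0, u],
                      [0,  0, -u, 1, -u**2/2],
                      [0,  0,  0, 0, 1]])

# rho is a Lie group homomorphism: rho(t) rho(s) = rho(t + s)
assert (rho(t) * rho(s) - rho(t + s)).expand() == sp.zeros(5)
# and in particular rho(t) is invertible with inverse rho(-t)
assert (rho(t) * rho(-t)).expand() == sp.eye(5)
```

The only nontrivial cancellation is in the $(4,5)$ entry, where $-ts-s^2/2-t^2/2=-(t+s)^2/2$.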
The groups constructed in Examples \ref{ExampleD4} and \ref{ExampleD6} are examples of flat special K\"ahler manifolds, which have been characterized in \cite{BC1}. We obtained them using the method introduced in the next section.
The following is an example of a special K\"ahler Lie group in dimension $4$ for which $\nabla J\neq 0$. It will be important for obtaining other interesting examples. \begin{example}[Dimension $4$ with $\nabla J\neq 0$]\label{KeyExample1} Consider the Lie group $G_3=\mathbb{R}^2\ltimes_\rho \mathbb{R}^2$ determined by the semi-direct product of $(\mathbb{R}^2,+)$ with $(\mathbb{R}^2,+)$ by means of the Lie group homomorphism $\rho:\mathbb{R}^2\to \textnormal{GL}(\mathbb{R}^2)$ defined by $$\rho(t,s)=\left(\begin{array}{cc} t+s+1 & t+s \\ -(t+s) & -(t+s)+1 \end{array} \right).$$ The product in $G_3$ is explicitly given as $$(t,s,x,y)\cdot(t',s',x',y')=(t+t',s+s',(t+s+1)x'+(t+s)y'+x,-(t+s)x'+(1-(t+s))y'+y).$$ A basis for the left invariant vector fields on $G_3$ is formed by $$e_1^+=\dfrac{\partial}{\partial t},\qquad e_2^+=(t+s+1)\dfrac{\partial}{\partial x}-(t+s)\dfrac{\partial}{\partial y},$$ $$e_3^+=\dfrac{\partial}{\partial s},\qquad e_4^+=(t+s)\dfrac{\partial}{\partial x}+(1-(t+s))\dfrac{\partial}{\partial y}.$$ Therefore, the Lie algebra of $G_3$ is isomorphic to the vector space $\mathfrak{g}_3\cong \textnormal{Vect}_\mathbb{R}\lbrace e_1,e_2,e_3,e_4\rbrace$ with nonzero Lie brackets $$[e_1,e_2]=[e_1,e_4]=[e_3,e_2]=[e_3,e_4]=e_2-e_4.$$ The following data $(\omega,J,\nabla)$ defines a structure of special K\"ahler Lie group on $G_3$: \begin{enumerate}
\item[$\iota.$] left invariant symplectic form $\omega=(1-(t+s))dt\wedge dx+(t+s)(dy\wedge dt+dx\wedge ds)+(1+t+s)dy\wedge ds$,
\item[$\iota\iota.$] left invariant complex structure $J(e_1^+)=e_2^+$ and $J(e_3^+)=e_4^+$, and
\item[$\iota\iota\iota.$] left invariant flat affine symplectic connection
\begin{center}
\begin{tabular}{c|c|c|c|c}
$\nabla$ & $e_1^+$ & $e_2^+$ & $e_3^+$ & $e_4^+$ \\
\hline
$e_1^+$ & $-e_1^++e_3^+$ & $e_2^+-e_4^+$ & $-e_1^++e_3^+$ & $e_2^+-e_4^+$ \\
\hline
$e_2^+$ & $0$ & $2e_1^+-2e_3^+$ & $0$ & $2e_1^+-2e_3^+$ \\
\hline
$e_3^+$ & $-e_1^++e_3^+$ & $e_2^+-e_4^+$ & $-e_1^++e_3^+$ & $e_2^+-e_4^+$ \\
\hline
$e_4^+$ & $0$ & $2e_1^+-2e_3^+$ & $0$ & $2e_1^+-2e_3^+$ \\
\end{tabular}
\end{center} \end{enumerate} \begin{remark}
The signature of the pseudo-Riemannian metric determined by $(\omega,J)$ is $(2,2)$. Moreover, given that $\nabla_{e_1^+}\circ J\neq J\circ \nabla_{e_1^+}$, we obtain that $\nabla J\neq 0$. \end{remark} \end{example}
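Writing $N=\left(\begin{smallmatrix}1 & 1\\ -1 & -1\end{smallmatrix}\right)$, one checks that $\rho(t,s)=I+(t+s)N$ with $N^2=0$, which makes the homomorphism property $\rho(t,s)\rho(t',s')=\rho(t+t',s+s')$ transparent. A sympy sketch of this observation:

```python
import sympy as sp

t, s, tp, sq = sp.symbols("t s t' s'")

N = sp.Matrix([[1, 1], [-1, -1]])   # nilpotent: N^2 = 0

def rho(u, v):
    """rho(t,s) = I + (t+s) N, matching the matrix of the example."""
    return sp.eye(2) + (u + v) * N

assert N * N == sp.zeros(2)
# rho(t,s) agrees with the explicit matrix given in the example
assert rho(t, s).expand() == sp.Matrix([[t + s + 1, t + s],
                                        [-(t + s), 1 - (t + s)]]).expand()
# homomorphism property of the abelian group R^2
assert (rho(t, s) * rho(tp, sq)).expand() == rho(t + tp, s + sq).expand()
assert (rho(t, s) * rho(-t, -s)).expand() == sp.eye(2)
```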
\subsection{\'Etale K\"ahler affine representations and the case $\nabla J=0$} A finite dimensional real vector space $(V,\omega,J)$ is called a \emph{K\"ahler vector space} if $(V,\omega)$ is a symplectic vector space and $J:V\to V$ is a linear complex structure on $V$ such that \begin{enumerate} \item[$\iota.$] $\omega(J(x),J(y))=\omega(x,y)$, and \item[$\iota\iota.$] $k(x,y)=\omega(x,J(y))$ is a scalar product on $V$. \end{enumerate}
Let $\textnormal{Sp}(V,\omega)$ and $\textnormal{GL}(V,J)$ denote the groups of linear symplectomorphisms and linear complex transformations of $(V,\omega)$ and $(V,J)$, respectively. We define the group of \emph{linear K\"ahler transformations} of $(V,\omega,J)$ as $$\textnormal{KL}(V,\omega,J):=\textnormal{Sp}(V,\omega)\cap \textnormal{GL}(V,J).$$ This is a Lie group with Lie algebra $\mathfrak{kl}(V,\omega,J)=\mathfrak{sp}(V,\omega)\cap \mathfrak{gl}(V,J)$ where $\mathfrak{sp}(V,\omega)$ and $\mathfrak{gl}(V,J)$ are the Lie algebras of $\textnormal{Sp}(V,\omega)$ and $\textnormal{GL}(V,J)$, respectively. More precisely, $\mathfrak{kl}(V,\omega,J)$ consists of the elements $A\in \mathfrak{gl}(V)$ satisfying both $$\omega(A(x),y)+\omega(x,A(y))=0\qquad \textnormal{and}\qquad AJ=JA.$$ The group of \emph{K\"ahler affine transformations} of $(V,\omega,J)$ is defined as the semi-direct product $V\rtimes_{\textnormal{Id}}\textnormal{KL}(V,\omega,J)$ with respect to the identity representation $\textnormal{Id}:\textnormal{KL}(V,\omega,J)\hookrightarrow \textnormal{GL}(V)$.
Motivated by the definition of \'etale affine representations given in \cite{K} and \cite{M}, we set up the following definition: \begin{definition} An \emph{\'etale K\"ahler affine representation} of a Lie group $G$ is a Lie group homomorphism $\rho:G\to V\rtimes_{\textnormal{Id}}\textnormal{KL}(V,\omega,J)$ such that the left action of $G$ on $V$ defined by $g\cdot v:=\rho(g)(v)$ for all $g\in G$ and $v\in V$, admits a point of open orbit and discrete isotropy. \end{definition} As the Lie algebras of a connected Lie group $G$ and its universal covering Lie group $\widetilde{G}$ are isomorphic, from now on we assume that $G$ is simply connected. The following is a characterization of left invariant flat special K\"ahler structures. \begin{theorem}\label{etale} Let $G$ be a simply connected Lie group with Lie algebra $\mathfrak{g}$. Then $G$ can be equipped with a structure of special K\"ahler Lie group $(\omega,J,\nabla)$ with $\nabla J=0$ if and only if it admits an \'etale K\"ahler affine representation on a K\"ahler vector space $(V,\omega,J)$. \end{theorem} \begin{proof} Suppose that $(G,\omega^+,J,\nabla)$ is a simply connected special K\"ahler Lie group such that $\nabla J=0$. Let $(\mathfrak{g},\omega,j,\cdot)$ be the special K\"ahler Lie algebra associated to $(G,\omega^+,J,\nabla)$. Namely, if $e$ denotes the identity of $G$, we have that $\omega:=\omega^+_e$, $j:=J_e$, and $x\cdot y=L_x(y):= (\nabla_{x^+}y^+)(e)$. Given that asking for $\nabla J=0$ is equivalent to requiring that $L_x\circ j=j\circ L_x$ for all $x\in\mathfrak{g}$, it follows that the map $\theta:\mathfrak{g} \to \mathfrak{g}\rtimes_{\textnormal{id}}\mathfrak{kl}(\mathfrak{g},\omega,j)$ defined by $\theta(x):=(x,L_x)$ is a well defined Lie algebra homomorphism. 
Thus, passing through the exponential map of $G$, we get an \'etale K\"ahler affine representation $\rho\colon G \to \mathfrak{g}\rtimes_{\textnormal{Id}}\textnormal{KL}(\mathfrak{g},\omega,j)$ determined by $$\rho(\exp_G(x))=\left(\sum_{m=1}^\infty \dfrac{1}{m!}(L_x)^{m-1}(x),\sum_{m=0}^\infty \dfrac{1}{m!}(L_x)^m\right),$$ for which clearly $0\in\mathfrak{g}$ is a point of open orbit and discrete isotropy.
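This closed formula can be tested numerically: identifying the affine pair $(v,A)$ with the block matrix $\left(\begin{smallmatrix}A & v\\ 0 & 0\end{smallmatrix}\right)$, the matrix exponential has linear part $\sum_{m\geq 0}\frac{1}{m!}A^m$ and translation part $\sum_{m\geq 1}\frac{1}{m!}A^{m-1}(v)$, exactly the two series above. A numerical sketch with numpy/scipy (random data; the truncation order $30$ is an ad hoc choice):

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(0)
n = 4
L = rng.standard_normal((n, n))   # plays the role of L_x
x = rng.standard_normal(n)

# The affine pair (x, L_x) embedded as a block matrix in gl(n+1)
M = np.zeros((n + 1, n + 1))
M[:n, :n] = L
M[:n, n] = x
E = expm(M)

# Truncated series from the formula for rho(exp_G(x))
trans = np.zeros(n)               # sum_{m>=1} L^{m-1} x / m!
lin = np.eye(n)                   # sum_{m>=0} L^m / m!  (m = 0 term)
P, fact = np.eye(n), 1.0          # P = L^{m-1}, fact = m!
for m in range(1, 30):
    fact *= m
    trans += P @ x / fact         # add L^{m-1} x / m!
    P = P @ L                     # now P = L^m
    lin += P / fact               # add L^m / m!

assert np.allclose(E[:n, :n], lin)    # linear part of the representation
assert np.allclose(E[:n, n], trans)   # translation part of the representation
```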
Conversely, let $\rho:G\to V\rtimes_{\textnormal{Id}}\textnormal{KL}(V,\omega,J)$, defined as $\rho(g):=(Q(g),F_g)$ for all $g\in G$, be an \'etale K\"ahler affine representation of $G$. Let $v\in V$ be a point of open orbit and discrete isotropy. This implies that the orbital map $\pi\colon G\to \textnormal{Orb}(v)$ defined by $g\mapsto Q(g)+F_g(v)$ is a local diffeomorphism. Differentiating at the identity of $G$, we obtain a Lie algebra homomorphism $\theta\colon \mathfrak{g} \to V\rtimes_{\textnormal{id}}\mathfrak{kl}(V,\omega,J)$ given by $x \mapsto (q(x),f_x)$, where the linear map $\psi_v\colon \mathfrak{g} \to V$ defined by $x \mapsto q(x)+f_x(v)$ is a linear isomorphism; see \cite{M}. Moreover, the map $f:\mathfrak{g}\to \mathfrak{kl}(V,\omega,J)$ is also a Lie algebra homomorphism and $q:\mathfrak{g}\to V$ is a Lie algebra 1-cocycle with respect to the linear representation $f$. Let us now define the following objects on $\mathfrak{g}$: \begin{enumerate} \item[$\iota.$] the skew-symmetric bilinear form $\widetilde{\omega}(x,y):=\omega(\psi_v(x),\psi_v(y))$, \item[$\iota\iota.$] the product $x\cdot y=L_x(y):=(\psi_v^{-1}\circ f_x\circ \psi_v)(y)$, and \item[$\iota\iota\iota.$] the complex structure $j(x):=(\psi_v^{-1}\circ J\circ \psi_v)(x)$. \end{enumerate}
The fact that $q$ is a Lie algebra 1-cocycle with respect to $f$ implies that $[x,y]=x\cdot y - y\cdot x$ for all $x,y\in \mathfrak{g}$. As $f$ is a linear representation and $f_x\in \mathfrak{kl}(V,\omega,J)$, we get that $L_{[x,y]}=[L_x,L_y]$. From these two facts, it follows that $\cdot$ is a left symmetric product on $\mathfrak{g}$.
Clearly $\widetilde{\omega}$ is non-degenerate. Furthermore, the fact that $f_x\in \mathfrak{kl}(V,\omega,J)$ implies \begin{equation}\label{3} \widetilde{\omega}(L_x(y),z)+\widetilde{\omega}(y,L_x(z))=0,\qquad x,y,z\in\mathfrak{g}. \end{equation} Identity \eqref{3} and the fact that $[x,y]=L_x(y)-L_y(x)$ imply that $\widetilde{\omega}$ is a 2-cocycle; see \cite{V}.
Finally, a straightforward computation shows that the identity $f_x\circ J= J\circ f_x$, valid for all $x\in \mathfrak{g}$, implies that $$[j(x),j(y)]-[x,y]=j[j(x),y]+j[x,j(y)]\qquad \textnormal{and}\qquad (j\circ L_x)(y)=(L_x\circ j)(y)$$ for all $x,y\in \mathfrak{g}$. That is, $j$ is integrable and it satisfies $[j,L_x]=0$ for all $x\in\mathfrak{g}$.
Note that $$\widetilde{k}(x,y)=\widetilde{\omega}(x,j(y))=\omega(\psi_v(x), (J\circ \psi_v)(y))=k(\psi_v(x),\psi_v(y)),$$ defines a scalar product on $\mathfrak{g}$ whose signature agrees with the signature of $k$. Hence, the quadruple $(\mathfrak{g},\widetilde{\omega},j,\cdot)$ is a special K\"ahler Lie algebra for which we may induce a left invariant special K\"ahler structure $(\omega,J,\nabla)$ on $G$ such that $\nabla J=0$. \end{proof} \subsection{Hessian property} Let $(M,\nabla)$ be a flat affine manifold. A pseudo-Riemannian metric $k$ on $M$ is said to be \emph{Hessian} with respect to $\nabla$ if using the affine coordinates of $M$ induced by $\nabla$ it can be locally written as $k=\nabla^2\varphi$ where $\varphi$ is a local smooth function. That is, $$k=\sum_{i,j}\dfrac{\partial^2 \varphi}{\partial x_i \partial x_j}dx^i\otimes dx^j,$$ where $(x_1,\cdots,x_n)$ is a system of local affine coordinates of $M$ induced by $\nabla$; see \cite{SY} for further details. It is well known that the pseudo-metric of a special K\"ahler manifold is indeed Hessian; compare \cite{F}. We will use a different approach to prove this fact for the left invariant case.
When $M=G$ is a connected Lie group and both $\nabla$ and $k$ are left invariant, the triple $(G,\nabla,k)$ is called a \emph{pseudo-Hessian Lie group}. The infinitesimal object associated to a pseudo-Hessian Lie group is the following: \begin{definition}\cite{S} Let $(\mathfrak{g},\cdot)$ be a finite dimensional left symmetric algebra. A scalar product $k$ on $\mathfrak{g}$ is said to be \emph{left symmetric} if it verifies \begin{equation}\label{HessianCondition} k(x\cdot y-y\cdot x,z)=k(x,y\cdot z)-k(y,x\cdot z),\qquad x,y,z\in\mathfrak{g}. \end{equation} Moreover, if $\mathfrak{g}$ is a Lie algebra whose Lie bracket is given by the commutator of $\cdot$ then the triple $(\mathfrak{g},\cdot,k)$ is called a \emph{pseudo-Hessian Lie algebra}. \end{definition}
\begin{lemma}\cite{S} There exists a bijective correspondence between simply connected pseudo-Hessian Lie groups and pseudo-Hessian Lie algebras. \end{lemma} As a consequence of \cite[Lemma 4.1]{V} we get the following. Let $(G,\omega^+,J^+,\nabla)$ be a special K\"ahler Lie group and let $(\mathfrak{g},\omega,j,\cdot)$ be its respective special K\"ahler Lie algebra. If $k$ is the scalar product on $\mathfrak{g}$ induced by $(\omega,j)$, then we have the relation $$k(x,y)=\omega(x,j(y)),\qquad x,y\in\mathfrak{g}.$$ Recall that $j\in Z_L^1(\mathfrak{g},\mathfrak{g})$. Thus, for all $x,y,z\in\mathfrak{g}$ we obtain \begin{eqnarray*}
k(x\cdot y-y\cdot x,z) & = & k([x,y],z)\\
& = & \omega([x,y],j(z))\\
& = & -\omega(j([x,y]),z)\\
& = & -\omega(L_x(j(y))-L_y(j(x)),z)\\
& = & -\omega(L_x(j(y)),z)+\omega(L_y(j(x)),z)\\
& = & \omega(j(y),x\cdot z)-\omega(j(x),y\cdot z)\\
& = & \omega(x,j(y\cdot z))-\omega(y,j(x\cdot z))\\
& = & k(x,y\cdot z)-k(y,x\cdot z). \end{eqnarray*} So, the triple $(\mathfrak{g},\cdot,k)$ defines a pseudo-Hessian Lie algebra and hence the left invariant pseudo-Riemannian metric $k^+$ induced by $(\omega^+,J^+)$ is a Hessian pseudo-metric on $G$.
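As a concrete test of identity \eqref{HessianCondition}, consider the special K\"ahler Lie algebra of Example \ref{ExampleD4}. At the identity, in the basis $(e_1,\dots,e_4)$, its data reads off as $\omega(e_3,e_1)=\omega(e_4,e_2)=1$, $j(e_3)=e_2$, $j(e_4)=e_1$, $L_{e_1}=\mathrm{diag}(-1,1,1,-1)$ and $L_{e_2}=L_{e_3}=L_{e_4}=0$. The following numerical sketch (our own matrix encoding of that data) verifies the identity on all basis triples:

```python
import itertools
import numpy as np

# Example (Dimension 4) at the identity, basis (e1, e2, e3, e4):
Lrep = np.zeros((4, 4, 4))                  # Lrep[a] = matrix of L_{e_a}
Lrep[0] = np.diag([-1.0, 1.0, 1.0, -1.0])   # only L_{e1} is nonzero

J = np.array([[0, 0, 0, 1],                 # columns: j(e1) = -e4, j(e2) = -e3,
              [0, 0, 1, 0],                 #          j(e3) =  e2, j(e4) =  e1
              [0, -1, 0, 0],
              [-1, 0, 0, 0]], float)

Om = np.zeros((4, 4))                       # omega(e3, e1) = omega(e4, e2) = 1
Om[2, 0] = Om[3, 1] = 1.0
Om[0, 2] = Om[1, 3] = -1.0

K = Om @ J                                  # k(x, y) = omega(x, j(y))
assert np.allclose(K, K.T)                  # k is symmetric

# Left symmetric (Hessian) identity: k(x.y - y.x, z) = k(x, y.z) - k(y, x.z)
for a, b, c in itertools.product(range(4), repeat=3):
    x, y, z = np.eye(4)[[a, b, c]]
    lhs = (Lrep[a] @ y - Lrep[b] @ x) @ K @ z
    rhs = x @ K @ (Lrep[b] @ z) - y @ K @ (Lrep[a] @ z)
    assert abs(lhs - rhs) < 1e-12
```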
\begin{remark} The Hessian pseudo-metric $k^+$ allows us to define a structure of pseudo-K\"ahler Lie group on the cotangent bundle of a special K\"ahler Lie group. This left invariant pseudo-K\"ahler structure will be described in the next section; see Lie bracket \eqref{Hess1}, symplectic form \eqref{Hess2}, and complex structure of item $\iota\iota\iota.$ The integrability of such a complex structure follows from the fact that $k$ is a left symmetric scalar product, that is, it verifies identity \eqref{HessianCondition}. \end{remark}
\section{Cotangent bundle of a flat pseudo-Riemannian Lie group} The aim of this section is to give a method for constructing left invariant special K\"ahler structures on the cotangent bundle of a simply connected flat pseudo-Riemannian Lie group. We will use the construction of left invariant flat affine symplectic structures introduced in \cite{V} (see also \cite{NB}) starting from a connected flat affine Lie group. The groups constructed here are examples of flat special K\"ahler manifolds; see for instance \cite{BC1} for more about them.
Let us consider the triple $(G,k^+,\nabla)$ where $(G,\nabla)$ is a flat affine Lie group ($\nabla$ is a left invariant flat affine connection) and $(G,k^+)$ is a pseudo-Riemannian Lie group ($k^+$ is a left invariant pseudo-Riemannian metric). Let $\mathfrak{g}$ be the Lie algebra of $G$ and recall from the previous section that we are assuming $G$ to be simply connected.
With the data $(G,\nabla)$ it is possible to construct the following objects. Let $L^\ast:\mathfrak{g}\to \mathfrak{gl}(\mathfrak{g}^*)$ denote the dual linear representation associated to $L:\mathfrak{g}\to \mathfrak{gl}(\mathfrak{g})$ where $L_x(y)=x\cdot y:=(\nabla_{x^+}y^+)(e)$ for all $x,y\in\mathfrak{g}$. Passing to the exponential map, we get a Lie group homomorphism $F:G \to \textnormal{GL}(\mathfrak{g}^*)$ determined by $$F(\exp_G(x))=\sum_{m=0}^{\infty}\dfrac{1}{m!}(L_x^\ast)^m.$$ The cotangent bundle $T^\ast G\approx G\times \mathfrak{g}^\ast$ may be endowed with a Lie group structure given by $$(g,\alpha)\cdot(g',\beta)=(gg',F(g)(\beta)+\alpha),\qquad g,g'\in G\quad \textnormal{and}\quad\alpha,\beta\in\mathfrak{g}^\ast,$$ whose Lie algebra is the vector space $\mathfrak{g}\oplus \mathfrak{g}^\ast$ with Lie bracket \begin{equation}\label{Hess1} [x+\alpha,y+\beta]_\ast=[x,y]+L^*_x(\beta)-L^*_y(\alpha),\qquad x,y\in\mathfrak{g}\quad \textnormal{and}\quad\alpha,\beta\in\mathfrak{g}^\ast. \end{equation} \begin{enumerate} \item[$\iota.$] In \cite{MR} the authors proved that \begin{equation}\label{Hess2} \omega(x+\alpha,y+\beta)=\alpha(y)-\beta(x), \end{equation} defines a scalar 2-cocycle on $(\mathfrak{g}\oplus \mathfrak{g}^\ast, [\cdot,\cdot]_\ast)$. \end{enumerate} As $\mathfrak{g}$ and $\mathfrak{g}^\ast$ are Lagrangian Lie subalgebras of $(\mathfrak{g}\oplus \mathfrak{g}^\ast, [\cdot,\cdot]_\ast,\omega)$, by the Frobenius Theorem, they induce two left invariant Lagrangian transverse foliations on $T^\ast G$. Associated to such foliations, there exists a unique torsion free symplectic connection $\nabla^H$ which parallelizes both foliations. In the literature $\nabla^H$ is known as the Hess connection or the canonical connection of a bi-Lagrangian manifold (see \cite{H} for more details and the definition of the Hess connection). 
\begin{enumerate} \item[$\iota\iota.$] In \cite{V} the author showed that the Hess connection $\nabla^H$ associated to such left invariant Lagrangian transverse foliations is also left invariant, flat, and it can be explicitly determined by \begin{equation}\label{HessConnection} \nabla^H_{(x+\alpha)^+}(y+\beta)^+=(xy+L_x^\ast(\beta))^+\qquad x,y\in\mathfrak{g}\quad \textnormal{and}\quad \alpha,\beta \in\mathfrak{g}^\ast \end{equation} where $xy=(\nabla_{x^+}y^+)(e)$. \end{enumerate} With the data $(G,k^+)$ we can define the following complex structure on $\mathfrak{g}\oplus \mathfrak{g}^\ast$. If we denote by $k:=k^+_e$ the scalar product on $\mathfrak{g}$, then we have that $k^\flat:\mathfrak{g}\to \mathfrak{g}^\ast$ defined by $k^\flat(x)=k(x,\cdot)$ is a linear isomorphism. \begin{enumerate} \item[$\iota\iota\iota.$] Define the complex structure $j$ on $\mathfrak{g}\oplus \mathfrak{g}^\ast$ as $$ j(x+\alpha)=z_\alpha-k^\flat(x)\qquad x\in\mathfrak{g}\quad \textnormal{and}\quad \alpha\in\mathfrak{g}^\ast $$ where $z_\alpha$ is the unique element in $\mathfrak{g}$ such that $\alpha=k^\flat(z_\alpha)$. The complex structure $j$ is integrable with respect to the Lie bracket \eqref{Hess1} if and only if \begin{equation}\label{Left2-cocycle} k([x,y],z)=k(x,L_y(z))-k(y,L_x(z)),\qquad x,y,z\in\mathfrak{g}. \end{equation} For the definition of such a complex structure see for instance \cite{DM}. \end{enumerate} Let $\cdot^H$ denote the left symmetric product on $\mathfrak{g}\oplus \mathfrak{g}^\ast$ determined by $\nabla^H$ and $L^H$ its associated linear representation. 
Note that, on the one hand $$j([x+\alpha,y+\beta]_\ast)=-j(k(z_\beta,L_x(\cdot)))+j(k(z_\alpha,L_y(\cdot)))-k^\flat([x,y]),$$ and on the other hand, $$L^H_{(x+\alpha)}(j(y+\beta))-L^H_{(y+\beta)}(j(x+\alpha))= L_x(z_\beta)-L_y(z_\alpha)+k(y,L_x(\cdot))-k(x,L_y(\cdot)).$$ Thus, $j\in Z^1_{L^H}(\mathfrak{g}\oplus \mathfrak{g}^\ast,\mathfrak{g}\oplus \mathfrak{g}^\ast)$ if and only if $k$ satisfies identity \eqref{Left2-cocycle} and
\begin{equation}\label{flatness} k(z_\beta,L_x(\cdot))-k(z_\alpha,L_y(\cdot))=-k^\flat(L_x(z_\beta))+k^\flat(L_y(z_\alpha)), \end{equation} for all $x,y\in\mathfrak{g}$ and $\alpha,\beta\in\mathfrak{g}^\ast$. \begin{lemma} Identities \eqref{Left2-cocycle} and \eqref{flatness} hold true if and only if $\nabla$ is the Levi--Civita connection associated to $(G,k^+)$. \end{lemma} \begin{proof} Because the Levi--Civita connection associated to a left invariant pseudo-Riemannian metric trivially satisfies identity \eqref{Left2-cocycle}, it is enough to prove that identity \eqref{flatness} holds true if and only if $\nabla$ is the Levi--Civita connection determined by $(G,k^+)$. If $\nabla$ is the Levi--Civita connection, then it follows that $$k(L_x(y),z)+k(y,L_x(z))=0,\qquad x,y,z\in\mathfrak{g}.$$ It is easy to see that the last identity implies \eqref{flatness}. Conversely, if identity \eqref{flatness} is true, then when $y=0$ and $z_\alpha=z_\beta$ we get that $k(z_\beta,L_x(\cdot))=-k(L_x(z_\beta),\cdot)$ for all $x\in \mathfrak{g}$ and $\beta\in \mathfrak{g}^\ast$. Thus, the fact that $k^\flat$ is a linear isomorphism implies that $\nabla$ must be the Levi--Civita connection associated to $(G,k^+)$. \end{proof} Summing up, we obtain the following result: \begin{theorem} The quadruple $(\mathfrak{g}\oplus \mathfrak{g}^\ast,\omega,j,\cdot^H)$ defines a special K\"ahler Lie algebra if and only if $(G,k^+)$ is a flat pseudo-Riemannian Lie group with Levi--Civita connection $\nabla$. \end{theorem} \begin{remark} The pseudo-Riemannian metric $\widetilde{k}^+$ on $T^\ast G$ induced by the pair $(\omega,j)$ is determined by the scalar product $$\widetilde{k}(x+\alpha,y+\beta)=k(x,y)+k(z_\alpha,z_\beta)\qquad x,y\in\mathfrak{g}\quad \textnormal{and}\quad \alpha,\beta \in\mathfrak{g}^\ast.$$ Therefore, if the signature of $k$ is $(p,q)$, then the signature of $\widetilde{k}$ is $(2p,2q)$. This may be easily verified by taking an orthonormal basis of $(\mathfrak{g},k)$. 
In particular, if $(G,k^+)$ is a flat Riemannian Lie group, then $(T^\ast G,\omega,j,\nabla^H)$ is a special K\"ahler Lie group with $\widetilde{k}^+$ a left invariant Riemannian metric. \end{remark}
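In block form relative to the splitting $\mathfrak{g}\oplus\mathfrak{g}^\ast$, the complex structure of item $\iota\iota\iota.$ is $\left(\begin{smallmatrix}0 & (k^\flat)^{-1}\\ -k^\flat & 0\end{smallmatrix}\right)$, so $j^2=-\textnormal{Id}$ is immediate. A quick numerical sanity check (with an arbitrary positive definite $k$, chosen only for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 3
A = rng.standard_normal((n, n))
k = A @ A.T + n * np.eye(n)          # random positive definite scalar product on g

kflat = k                            # matrix of k^flat: g -> g*, x -> k(x, .)
kinv = np.linalg.inv(kflat)          # matrix of alpha -> z_alpha

# j(x + alpha) = z_alpha - k^flat(x), as a block matrix on g (+) g*
Jmat = np.block([[np.zeros((n, n)), kinv],
                 [-kflat,           np.zeros((n, n))]])

assert np.allclose(Jmat @ Jmat, -np.eye(2 * n))   # j^2 = -Id
```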
\begin{corollary} $\nabla^H j=0$. \end{corollary} \begin{proof} This is a straightforward computation using the fact that $\nabla$ is the Levi--Civita connection associated to $k^+$. \end{proof}
\begin{corollary} The Hess connection $\nabla^H$ is geodesically complete if and only if $G$ is unimodular. \end{corollary} \begin{proof} In \cite{V} it was proved that $\nabla^H$ is geodesically complete if and only if $\nabla$ is geodesically complete. On the other hand, in \cite{AM} the authors showed that the Levi--Civita connection associated to a left invariant flat pseudo-Riemannian metric is geodesically complete if and only if $G$ is unimodular. So, the result follows. \end{proof} Using the previous simple construction and the constructions of flat pseudo-Riemannian Lie groups introduced in \cite{Au,AM} it is possible to get several interesting examples of flat special K\"ahler Lie groups. For instance: \begin{example} On the one hand, the special K\"ahler Lie group $G_1$ of Example \ref{ExampleD4} can be obtained from the 2-dimensional flat Lorentzian Lie group $(\textnormal{Aff}(\mathbb{R})_0,k_1)$ where $k_1=\dfrac{1}{x^2}(dx\otimes dy+dy\otimes dx)$. Given that $\textnormal{Aff}(\mathbb{R})_0$ is not unimodular, the Hess connection $\nabla^H$ on $G_1$ is not geodesically complete.
On the other hand, the special K\"ahler Lie group $G_2$ of Example \ref{ExampleD6} can be obtained from the 3-dimensional flat Lorentzian Lie group $(H_3,k_2)$ where $H_3$ is the 3-dimensional Heisenberg Lie group and $k_2=dx\otimes dz+dy\otimes dy+dz\otimes dx-x(dx\otimes dy+ dy\otimes dx)$. Given that $H_3$ is unimodular, the Hess connection $\nabla^H$ on $G_2$ is geodesically complete. The $(2n+1)$-dimensional Heisenberg group $H_{2n+1}$ does not admit a structure of flat pseudo-Riemannian Lie group for any $n\geq 2$; see \cite{AM}. \end{example}
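That $k_1$ is flat can be confirmed by a direct curvature computation, taking $(x,y)$ with $x>0$ as global coordinates on $\textnormal{Aff}(\mathbb{R})_0$. A sympy sketch in which the Christoffel symbols and the Riemann tensor are assembled by hand:

```python
import sympy as sp

x = sp.Symbol('x', positive=True)
y = sp.Symbol('y')
coords = (x, y)

# k_1 = (1/x^2)(dx (x) dy + dy (x) dx) on Aff(R)_0 = {x > 0}
g = sp.Matrix([[0, 1/x**2], [1/x**2, 0]])
ginv = g.inv()

def Gamma(a, b, c):
    """Christoffel symbol Gamma^a_{bc} of the Levi-Civita connection of g."""
    return sum(ginv[a, d]*(sp.diff(g[d, c], coords[b])
                           + sp.diff(g[d, b], coords[c])
                           - sp.diff(g[b, c], coords[d]))
               for d in range(2)) / 2

def Riem(a, b, c, d):
    """Curvature component R^a_{bcd}."""
    return sp.simplify(
        sp.diff(Gamma(a, d, b), coords[c]) - sp.diff(Gamma(a, c, b), coords[d])
        + sum(Gamma(a, c, e)*Gamma(e, d, b) - Gamma(a, d, e)*Gamma(e, c, b)
              for e in range(2)))

# The only nonzero Christoffel symbol is Gamma^x_{xx} = -2/x ...
assert sp.simplify(Gamma(0, 0, 0) + 2/x) == 0
# ... and the metric is flat:
assert all(Riem(a, b, c, d) == 0
           for a in range(2) for b in range(2)
           for c in range(2) for d in range(2))
```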
\section{Twisted cartesian products}
The twisted product of Lie groups (resp. Lie algebras) defined by means of linear representations is a well known construction which may be viewed as a generalization of the semidirect product of groups (resp. algebras); see for instance \cite{R}. In this section we explain a way of constructing a structure of special K\"ahler Lie algebra on the cartesian product of two special K\"ahler Lie algebras which is twisted by two linear representations by infinitesimal K\"ahler transformations of such Lie algebras. The first steps of the construction introduced below are motivated by a construction of flat pseudo-Riemannian metrics made in \cite{Au}. It is worth mentioning here that our twisted cartesian product can be obtained by a ``double reduction'' process that we shall explain at the end of the section. As we will see later, by using this construction we can obtain examples of non-trivial left invariant special K\"ahler structures in every even dimension $\geq 4$.
Recall that the group of linear K\"ahler transformations of a K\"ahler vector space $(V,\omega,J)$ is defined as the Lie group $\textnormal{KL}(V,\omega,J):=\textnormal{Sp}(V,\omega)\cap \textnormal{GL}(V,J)$ whose Lie algebra is given by $\mathfrak{kl}(V,\omega,J)=\mathfrak{sp}(V,\omega)\cap \mathfrak{gl}(V,J)$. \begin{Assumption}
Let $(\mathfrak{g}_1,\omega_1,j_1,\cdot_1)$ and $(\mathfrak{g}_2,\omega_2,j_2,\cdot_2)$ be two special K\"ahler Lie algebras for which there exist two linear Lie algebra representations
$$\theta:\mathfrak{g}_1\to \mathfrak{kl}(\mathfrak{g}_2,\omega_2,j_2)\qquad\textnormal{and}\qquad\rho:\mathfrak{g}_2\to \mathfrak{kl}(\mathfrak{g}_1,\omega_1,j_1).$$ \end{Assumption} Our goal here is to define a structure of special K\"ahler Lie algebra on $\mathfrak{g}_1\oplus \mathfrak{g}_2$ using these two representations $(\theta,\rho)$ so that both $\mathfrak{g}_1$ and $\mathfrak{g}_2$ become special K\"ahler Lie subalgebras of it. \begin{enumerate}
\item[$\iota.$] Let us begin by defining our candidate for a left symmetric product on $\mathfrak{g}_1\oplus \mathfrak{g}_2$. Let $L:\mathfrak{g}_1\to\mathfrak{gl}(\mathfrak{g}_1)$ and $L':\mathfrak{g}_2\to\mathfrak{gl}(\mathfrak{g}_2)$ denote the linear representations induced by the left symmetric products $\cdot_1$ and $\cdot_2$, respectively. We will use the linear transformations $(\theta,\rho)$ to define the action of $\mathfrak{g}_1$ on $\mathfrak{g}_2$, and vice versa, as $\theta(x_1)(x_2)=x_1\cdot x_2$ and $\rho(x_2)(x_1)=x_2\cdot x_1$ for all $x_1\in\mathfrak{g}_1$ and $x_2\in\mathfrak{g}_2$. Accordingly, we define the product $\cdot$ on $\mathfrak{g}_1\oplus \mathfrak{g}_2$ as
\begin{equation}\label{TwistedProduct}
(x_1+x_2)\cdot(y_1+y_2)=L_{x_1}(y_1)+\rho(x_2)(y_1)+\theta(x_1)(y_2)+L'_{x_2}(y_2).
\end{equation}
Let us now deduce the conditions needed for $\cdot$ to be a left symmetric product. We just need to look at the general expression for the associator $(x_1+x_2,y_1+y_2,z_1+z_2)$ of $\cdot$ in $\mathfrak{g}_1\oplus \mathfrak{g}_2$. Indeed,
\begin{eqnarray*}
& & (x_1+x_2,y_1+y_2,z_1+z_2)\\
& = & (x_1+x_2)\cdot((y_1+y_2)\cdot(z_1+z_2))-((x_1+x_2)\cdot(y_1+y_2))\cdot(z_1+z_2)\\
& = & x_1\cdot_1 (y_1\cdot_1 z_1)-(x_1\cdot_1 y_1)\cdot_1 z_1+x_1\cdot_1\rho(y_2)(z_1)+\rho(x_2)(y_1\cdot_1 z_1)\\
& + & \rho(x_2)(\rho(y_2)(z_1))-\rho(\theta(x_1)(y_2))(z_1)-\rho(x_2)(y_1)\cdot_1 z_1-\rho(x_2\cdot_2 y_2)(z_1)\\
& + & x_2\cdot_2 (y_2\cdot_2 z_2)-(x_2\cdot_2 y_2)\cdot_2 z_2+\theta(x_1)(\theta(y_1)(z_2))+\theta(x_1)(y_2\cdot_2 z_2)\\
& + & x_2\cdot_2 \theta(y_1)(z_2)-\theta(x_1\cdot_1 y_1)(z_2)-\theta(x_1)(y_2)\cdot_2 z_2-\theta(\rho(x_2)(y_1))(z_2).
\end{eqnarray*}
Recall that both $\rho$ and $\theta$ are Lie algebra representations. Moreover, $(x_1,y_1,z_1)=(y_1,x_1,z_1)$ and $(x_2,y_2,z_2)=(y_2,x_2,z_2)$ are verified in $\mathfrak{g}_1$ and $\mathfrak{g}_2$, respectively, since $\cdot_1$ and $\cdot_2$ are left symmetric products. Therefore, the identity $(x_1+x_2,y_1+y_2,z_1+z_2)=(y_1+y_2,x_1+x_2,z_1+z_2)$ holds in $\mathfrak{g}_1\oplus \mathfrak{g}_2$ if and only if
\begin{equation}\label{Twisted1}
L_{x_1}\circ\rho(x_2)-\rho(x_2)\circ L_{x_1}=\rho(\theta(x_1)(x_2))-L_{\rho(x_2)(x_1)},\qquad\textnormal{and}
\end{equation}
\begin{equation}\label{Twisted2}
L'_{x_2}\circ\theta(x_1)-\theta(x_1)\circ L'_{x_2}=\theta(\rho(x_2)(x_1))-L'_{\theta(x_1)(x_2)}
\end{equation}
for all $x_1\in\mathfrak{g}_1$ and $x_2\in\mathfrak{g}_2$.
\item[$\iota\iota.$] The Lie bracket on $\mathfrak{g}_1\oplus \mathfrak{g}_2$ is defined as the commutator of the left symmetric product \eqref{TwistedProduct}. Namely,
\begin{eqnarray*}
& & [x_1+x_2,y_1+y_2]_\dagger\\
& = & (x_1+x_2)\cdot(y_1+y_2)-(y_1+y_2)\cdot(x_1+x_2)\\
& = & [x_1,y_1]_{\mathfrak{g}_1}+\rho(x_2)(y_1)-\rho(y_2)(x_1)+\theta(x_1)(y_2)-\theta(y_1)(x_2)+[x_2,y_2]_{\mathfrak{g}_2}.
\end{eqnarray*} As the commutator of a left symmetric product always defines a Lie algebra structure, we have that $(\mathfrak{g}_1\oplus \mathfrak{g}_2,[\cdot,\cdot]_\dagger)$ is indeed a Lie algebra. Assuming identities \eqref{Twisted1} and \eqref{Twisted2}, we see that the Lie algebra structure on $\mathfrak{g}_1\oplus \mathfrak{g}_2$ may be defined through the representations $(\theta,\rho)$ by setting $[x_1,x_2]_\dagger:=\theta(x_1)(x_2)-\rho(x_2)(x_1)$ for all $x_1\in\mathfrak{g}_1$ and $x_2\in\mathfrak{g}_2$.
\item[$\iota\iota\iota.$] The symplectic structure on $\mathfrak{g}_1\oplus \mathfrak{g}_2$ is defined as
$$\omega(x_1+x_2,y_1+y_2)=\omega_1(x_1,y_1)+\omega_2(x_2,y_2),$$
for all $x_1,y_1\in\mathfrak{g}_1$ and $x_2,y_2\in\mathfrak{g}_2$. On the one hand, it is clear that $\omega$ is skew-symmetric and non-degenerate. Moreover, it is simple to check that
\begin{eqnarray*}
& & \omega([x_1+x_2,y_1+y_2]_\dagger,z_1+z_2)\\
& = & \omega_1([x_1,y_1],z_1)+\omega_1(\rho(x_2)(y_1),z_1)-\omega_1(\rho(y_2)(x_1),z_1)\\
& + & \omega_2([x_2,y_2],z_2)+\omega_2(\theta(x_1)(y_2),z_2)-\omega_2(\theta(y_1)(x_2),z_2).
\end{eqnarray*}
Since both $\omega_1$ and $\omega_2$ are scalar 2-cocycles, $\rho(a_2)\in \mathfrak{sp}(\mathfrak{g}_1,\omega_1)$, and $\theta(a_1)\in \mathfrak{sp}(\mathfrak{g}_2,\omega_2)$ for all $a_1\in\mathfrak{g}_1$ and $a_2\in\mathfrak{g}_2$, it follows that
$$\oint \omega([x_1+x_2,y_1+y_2]_\dagger,z_1+z_2)=0.$$
That is, $\omega$ is a scalar 2-cocycle on $\mathfrak{g}_1\oplus \mathfrak{g}_2$. On the other hand, the fact that $\rho(a_2),L_{a_1}\in \mathfrak{sp}(\mathfrak{g}_1,\omega_1)$ and $\theta(a_1),L'_{a_2}\in \mathfrak{sp}(\mathfrak{g}_2,\omega_2)$ for all $a_1\in\mathfrak{g}_1$ and $a_2\in\mathfrak{g}_2$ implies that
$$\omega((x_1+x_2)\cdot(y_1+y_2),z_1+z_2)+\omega(y_1+y_2,(x_1+x_2)\cdot(z_1+z_2))=0.$$
In other words, $\cdot$ is symplectic with respect to $\omega$.
\item[$\iota\nu.$] Finally, the complex structure $j$ on $\mathfrak{g}_1\oplus \mathfrak{g}_2$ is defined as
$$j(x_1+x_2)=j_1(x_1)+j_2(x_2),$$
for all $x_1\in\mathfrak{g}_1$ and $x_2\in\mathfrak{g}_2$. Clearly, $j^2=-1$. We must check the conditions ensuring that $j$ is an integrable complex structure and that it belongs to $Z_{\widetilde{L}}^1(\mathfrak{g}_1\oplus \mathfrak{g}_2,\mathfrak{g}_1\oplus \mathfrak{g}_2)$, where $\widetilde{L}$ is the linear representation induced by the left symmetric product $\cdot$ defined in \eqref{TwistedProduct}. Let us first take a look at the integrability conditions.
On the one hand,
\begin{eqnarray*}
& & [j(x_1+x_2),j(y_1+y_2)]_\dagger-[x_1+x_2,y_1+y_2]_\dagger \\
& = & [j_1(x_1),j_1(y_1)]-[x_1,y_1]-\rho(j_2(y_2))(j_1(x_1))+\rho(j_2(x_2))(j_1(y_1))\\
& - & \rho(x_2)(y_1)+\rho(y_2)(x_1)+[j_2(x_2),j_2(y_2)]-[x_2,y_2]\\
& + &\theta(j_1(x_1))(j_2(y_2))-\theta(j_1(y_1))(j_2(x_2))+\theta(y_1)(x_2)-\theta(x_1)(y_2).
\end{eqnarray*}
On the other hand,
\begin{eqnarray*}
& & j([j(x_1+x_2),y_1+y_2]_\dagger+[x_1+x_2,j(y_1+y_2)]_\dagger)\\
& = & j_1([j_1(x_1),y_1]+[x_1,j_1(y_1)])-j_1(\rho(y_2)(j_1(x_1)))+j_1(\rho(j_2(x_2))(y_1))\\
& - & j_1(\rho(j_2(y_2))(x_1))+j_1(\rho(x_2)(j_1(y_1)))+j_2([j_2(x_2),y_2]+[x_2,j_2(y_2)])\\
& + & j_2(\theta(j_1(x_1))(y_2))-j_2(\theta(y_1)(j_2(x_2)))+j_2(\theta(x_1)(j_2(y_2)))-j_2(\theta(j_1(y_1))(x_2)).
\end{eqnarray*}
Recall that $j_1$ and $j_2$ are integrable complex structures on $\mathfrak{g}_1$ and $\mathfrak{g}_2$, respectively. Therefore, $j$ is an integrable complex structure on $\mathfrak{g}_1\oplus \mathfrak{g}_2$ if and only if
$$j_1\circ\rho(x_2)\circ j_1+\rho(x_2)=[\rho(j_2(x_2)),j_1],\qquad \textnormal{and}$$
$$j_2\circ\theta(x_1)\circ j_2+\theta(x_1)=[\theta(j_1(x_1)),j_2]$$
for all $x_1\in\mathfrak{g}_1$ and $x_2\in\mathfrak{g}_2$. But recall that $\rho(x_2)\in \mathfrak{gl}(\mathfrak{g}_1,j_1)$, which means that $\rho(x_2)$ and $j_1$ commute. Thus, $j_1\circ\rho(x_2)\circ j_1+\rho(x_2)=0=[\rho(j_2(x_2)),j_1]$ for all $x_2\in\mathfrak{g}_2$. The same holds for the second identity above because $\theta(x_1)\in \mathfrak{gl}(\mathfrak{g}_2,j_2)$. As a consequence, under our assumptions the complex structure $j$ is always integrable.
Let us now see what happens with the 1-cocycle condition. Note that on the one hand
\begin{eqnarray*}
j([x_1+x_2,y_1+x_2]_\dagger) & = & j_1([x_1,y_1])-j_1(\rho(y_2)(x_1))+j_1(\rho(x_2)(y_1))\\
& + & j_2([x_2,y_2])+j_2(\theta(x_1)(y_2))-j_2(\theta(y_1)(x_2)).
\end{eqnarray*}
On the other hand,
\begin{eqnarray*}
& & (x_1+x_2)\cdot j(y_1+y_2)-(y_1+y_2)\cdot j(x_1+x_2)\\
& = & x_1\cdot_1 j_1(y_1)-y_1\cdot_1 j_1(x_1)+\rho(x_2)(j_1(y_1))-\rho(y_2)(j_1(x_1))\\
& + & x_2\cdot_2 j_2(y_2)-y_2\cdot_2 j_2(x_2)+\theta(x_1)(j_2(y_2))-\theta(y_1)(j_2(x_2)).
\end{eqnarray*}
Recall that $j_1\in Z_{L}^1(\mathfrak{g}_1,\mathfrak{g}_1)$ and $j_2\in Z_{L'}^1(\mathfrak{g}_2,\mathfrak{g}_2)$. Therefore, $j\in Z_{\widetilde{L}}^1(\mathfrak{g}_1\oplus \mathfrak{g}_2,\mathfrak{g}_1\oplus \mathfrak{g}_2)$ if and only if
$$j_1\circ \rho(x_2)=\rho(x_2)\circ j_1\qquad\textnormal{and}\qquad j_2\circ \theta(x_1)=\theta(x_1)\circ j_2,$$
for all $x_1\in\mathfrak{g}_1$ and $x_2\in\mathfrak{g}_2$, which always holds true since $\rho(x_2)\in \mathfrak{gl}(\mathfrak{g}_1,j_1)$ and $\theta(x_1)\in \mathfrak{gl}(\mathfrak{g}_2,j_2)$.
\item[$\nu.$] Let $k_1$ and $k_2$ be the scalar products induced by $(\omega_1,j_1)$ and $(\omega_2,j_2)$, respectively. Then the scalar product $k$ induced by $(\omega,j)$ on $\mathfrak{g}_1\oplus \mathfrak{g}_2$ is given by
$$k(x_1+x_2,y_1+y_2)=k_1(x_1,y_1)+k_2(x_2,y_2).$$ \end{enumerate} Summing up: \begin{theorem}\label{TwistedAlgebra}
Let $(\mathfrak{g}_1,\omega_1,j_1,\cdot_1)$ and $(\mathfrak{g}_2,\omega_2,j_2,\cdot_2)$ be two special K\"ahler Lie algebras for which there exist two linear representations $\theta:\mathfrak{g}_1\to \mathfrak{kl}(\mathfrak{g}_2,\omega_2,j_2)$ and $\rho:\mathfrak{g}_2\to \mathfrak{kl}(\mathfrak{g}_1,\omega_1,j_1)$ that verify the identities \eqref{Twisted1} and \eqref{Twisted2}. Then the vector space $\mathfrak{g}_1\oplus \mathfrak{g}_2$ equipped with
\begin{enumerate}
\item[$\iota.$] the Lie bracket $[\cdot,\cdot]_\dagger$:
$$[x_1+x_2,y_1+y_2]_\dagger=[x_1,y_1]_{\mathfrak{g}_1}+\rho(x_2)(y_1)-\rho(y_2)(x_1)+\theta(x_1)(y_2)-\theta(y_1)(x_2)+[x_2,y_2]_{\mathfrak{g}_2},$$
\item[$\iota\iota.$] the non-degenerate scalar 2-cocycle $\omega$:
$$\omega(x_1+x_2,y_1+y_2)=\omega_1(x_1,y_1)+\omega_2(x_2,y_2),$$
\item[$\iota\iota\iota.$] the left symmetric product $\cdot$:
$$(x_1+x_2)\cdot(y_1+y_2)=L_{x_1}(y_1)+\rho(x_2)(y_1)+\theta(x_1)(y_2)+L'_{x_2}(y_2),$$
\item[$\iota\nu.$] and the integrable complex structure $j$:
$$j(x_1+x_2)=j_1(x_1)+j_2(x_2),$$
defines another special K\"ahler Lie algebra.
\end{enumerate} \end{theorem} Motivated by the previous result, we set up the following definition: \begin{definition}
The special K\"ahler Lie algebra from Theorem \ref{TwistedAlgebra} is called \emph{twisted cartesian product} of $\mathfrak{g}_1$ and $\mathfrak{g}_2$ according to the representations $(\theta,\rho)$. \end{definition} Next remarks come in order: \begin{remark}
\begin{enumerate}
\item[$\iota.$] The name twisted cartesian product comes from the fact that if both $\theta$ and $\rho$ are the zero representations, then the special K\"ahler Lie algebra that we get from Theorem \ref{TwistedAlgebra} is the trivial one which is defined through the cartesian product.
\item[$\iota\iota.$] If $\theta=0$, then the twisted cartesian product becomes the special K\"ahler Lie algebra obtained as the semi-direct product of $\mathfrak{g}_2$ with $\mathfrak{g}_1$ by means of $\rho$.
\item[$\iota\iota\iota.$] It is simple to see that both $\mathfrak{g}_1$ and $\mathfrak{g}_2$ are special K\"ahler Lie subalgebras of the twisted cartesian product.
\item[$\iota\nu.$] If the signatures of the scalar products $k_1$ and $k_2$ are $(p_1,q_1)$ and $(p_2,q_2)$, respectively, then the signature of $k$ is $(p_1+p_2,q_1+q_2)$.
\end{enumerate} \end{remark}
\begin{corollary}
Let $(G,\omega,J,\nabla)$ be a special K\"ahler Lie group whose Lie algebra is obtained as the twisted cartesian product of the Lie algebras of two special K\"ahler Lie groups $(G_1,\omega_1,J_1,\nabla_1)$ and $(G_2,\omega_2,J_2,\nabla_2)$ according to the representations $(\theta,\rho)$. Then $\nabla J=0$ if and only if both $\nabla_1 J_1=0$ and $\nabla_2 J_2=0$. \end{corollary} \begin{proof}
Let $\widetilde{L}$ be the linear representation induced by the product $\cdot$ given in \eqref{TwistedProduct}. The result follows from a straightforward computation checking when $j=j_1+j_2$ commutes with $\widetilde{L}_{x_1+x_2}$ for all $x_1\in\mathfrak{g}_1$ and $x_2\in\mathfrak{g}_2$. \end{proof}
Let us now introduce a ``double reduction'' process for a special K\"ahler Lie algebra which admits a \emph{complex and non-degenerate} left ideal, that is, a left ideal $I$ of $(\mathfrak{g},\cdot)$ such that $j(I)=I$ and $I$ is symplectic, meaning that $\omega|_{I\times I}$ is non-degenerate. This motivates the construction that we have named the twisted cartesian product. \begin{theorem}\label{TwistedDouble}
Let $(\mathfrak{g},\omega,j,\cdot)$ be a special K\"ahler Lie algebra which admits a complex and non-degenerate left ideal $I$. Then there exist two special K\"ahler Lie algebras $\mathfrak{g}_1$ and $\mathfrak{g}_2$ together with two Lie algebra representations $\theta:\mathfrak{g}_1\to \mathfrak{kl}(\mathfrak{g}_2,\omega_2,j_2)$ and $\rho:\mathfrak{g}_2\to \mathfrak{kl}(\mathfrak{g}_1,\omega_1,j_1)$ such that the special K\"ahler structure of $\mathfrak{g}$ can be obtained as the twisted cartesian product of $\mathfrak{g}_1$ and $\mathfrak{g}_2$ according to $(\theta,\rho)$. \end{theorem} \begin{proof}
Let us prove that we may take $\mathfrak{g}_1=I$ and $\mathfrak{g}_2=I^{\perp_{\omega}}$, both equipped with the restriction of the special K\"ahler Lie algebra structure of $\mathfrak{g}$. As $I$ is a left ideal of $(\mathfrak{g},\cdot)$, the following relation holds true
$$\omega(x\cdot x_2,x_1)=-\omega(x_2,x\cdot x_1)=0,$$
for all $x\in\mathfrak{g}$, $x_1\in I$, and $x_2\in I^{\perp_{\omega}}$. Thus $x\cdot x_2\in I^{\perp_{\omega}}$ which implies that $I^{\perp_{\omega}}$ is a left ideal of $(\mathfrak{g},\cdot)$ as well. Because the restriction $\omega|_{I\times I}$ is non-degenerate, it is simple to see that $I\cap I^{\perp_{\omega}}=\{0\}$ which means that $\mathfrak{g}=I\oplus I^{\perp_{\omega}}$. Moreover, the fact that the Lie bracket of $\mathfrak{g}$ is given as the commutator of $\cdot$ implies that both $I$ and $I^{\perp_{\omega}}$ are Lie subalgebras of $\mathfrak{g}$.
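In more detail, the equality $I\cap I^{\perp_{\omega}}=\{0\}$ used above can be seen as follows: if $x\in I\cap I^{\perp_{\omega}}$, then
$$\omega(x,y)=0\qquad\textnormal{for all } y\in I,$$
and the non-degeneracy of $\omega|_{I\times I}$ forces $x=0$; counting dimensions, $\dim I+\dim I^{\perp_{\omega}}=\dim\mathfrak{g}$ then yields $\mathfrak{g}=I\oplus I^{\perp_{\omega}}$.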
Let $L:I\to \mathfrak{gl}(I)$ and $L':I^{\perp_{\omega}}\to \mathfrak{gl}(I^{\perp_{\omega}})$ denote the restrictions of the product $\cdot$, that is, $L_{x_1}(y_1)=x_1\cdot y_1$ for all $x_1, y_1\in I$ and $L'_{x_2}(y_2)=x_2\cdot y_2$ for all $x_2, y_2\in I^{\perp_{\omega}}$. The properties that we establish below for $I$ and $L$ are also true for $I^{\perp_{\omega}}$ and $L'$.
\begin{enumerate}
\item[$\iota.$] Because $\cdot$ is a left symmetric product compatible with the Lie algebra structure of $\mathfrak{g}$ and $I$ is a left ideal, the map $L:I\to \mathfrak{gl}(I)$ is a Lie algebra representation. Moreover, the facts that $\cdot$ is symplectic with respect to $\omega$ and that $j\in Z_L^1(\mathfrak{g},\mathfrak{g})$ imply, respectively, that
$$\omega_1(L_{x_1}(y_1),z_1)+\omega_1(y_1,L_{x_1}(z_1))=0\qquad\textnormal{and}\qquad j_1\in Z_L^1(I,I),$$
for all $x_1, y_1, z_1\in I$. Here $\omega_1=\omega|_{I\times I}$ and $j_1=j|_{I}$. For the case of $I^{\perp_{\omega}}$ we set $\omega_2=\omega|_{I^{\perp_{\omega}}\times I^{\perp_{\omega}}}$ and $j_2=j|_{I^{\perp_{\omega}}}$.
\item[$\iota\iota.$] Let us now define $\theta:I\to \mathfrak{gl}(I^{\perp_{\omega}})$ and $\rho:I^{\perp_{\omega}}\to\mathfrak{gl}(I)$ as $\theta(x_1)(x_2)=x_1\cdot x_2$ and $\rho(x_2)(x_1)=x_2\cdot x_1$ for all $x_1\in I$ and $x_2\in I^{\perp_{\omega}}$, respectively. On the one hand, as $\cdot$ is a left symmetric product we have
\begin{eqnarray*}
(x_1\cdot y_1-y_1\cdot x_1)\cdot x_2 & = & [x_1,y_1]\cdot x_2\\
& = & x_1 \cdot(y_1\cdot x_2)-y_1\cdot(x_1\cdot x_2)\\
& = & \theta(x_1)(\theta(y_1)(x_2))-\theta(y_1)(\theta(x_1)(x_2)).
\end{eqnarray*}
On the other hand, the fact that $\cdot$ is symplectic with respect to $\omega$ implies
$$0=\omega_2(x_1\cdot x_2,y_2)+\omega_2( x_2,x_1\cdot y_2)=\omega_2(\theta(x_1)(x_2),y_2)+\omega_2(x_2,\theta(x_1)(y_2)).$$
In other words, $\theta:I\to \mathfrak{sp}(I^{\perp_{\omega}},\omega_2)$ is a Lie algebra representation. From the equalities
$$[x_2,y_2]\cdot x_1= x_2 \cdot(y_2\cdot x_1)-y_2\cdot(x_2\cdot x_1)\quad\textnormal{and}\quad \omega_1(x_2\cdot x_1,y_1)+\omega_1(x_1,x_2\cdot y_1)=0,$$
it follows that $\rho:I^{\perp_{\omega}}\to \mathfrak{sp}(I,\omega_1)$ is a Lie algebra representation as well.
\item[$\iota\iota\iota.$] The identities
$$x_1\cdot(x_2\cdot y_1)-x_2\cdot(x_1\cdot y_1)=(x_1\cdot x_2)\cdot y_1-(x_2\cdot x_1)\cdot y_1,\qquad\textnormal{and}$$
$$x_1\cdot(x_2\cdot y_2)-x_2\cdot(x_1\cdot y_2)=(x_1\cdot x_2)\cdot y_2-(x_2\cdot x_1)\cdot y_2,$$
must be satisfied for all $x_1,y_1\in I$ and $x_2,y_2\in I^{\perp_{\omega}}$. This is equivalent to requiring that the representations $\theta:I\to \mathfrak{sp}(I^{\perp_{\omega}},\omega_2)$ and $\rho:I^{\perp_{\omega}}\to \mathfrak{sp}(I,\omega_1)$ verify the equations
\begin{equation*}
L_{x_1}\circ\rho(x_2)-\rho(x_2)\circ L_{x_1}=\rho(\theta(x_1)(x_2))-L_{\rho(x_2)(x_1)},\qquad\textnormal{and}
\end{equation*}
\begin{equation*}
L'_{x_2}\circ\theta(x_1)-\theta(x_1)\circ L'_{x_2}=\theta(\rho(x_2)(x_1))-L'_{\theta(x_1)(x_2)}.
\end{equation*}
\item[$\iota\nu.$] Finally, as $j\in Z_L^1(\mathfrak{g},\mathfrak{g})$ we get that
\begin{eqnarray*}
\theta(x_1)(j_2(x_2))-\rho(x_2)(j_1(x_1)) & = & x_1 \cdot j_2(x_2)-x_2\cdot j_1(x_1)\\
& = & x_1 \cdot j(x_2)-x_2\cdot j(x_1)\\
& = & j([x_1,x_2])\\
& = & j(x_1 \cdot x_2)-j(x_2\cdot x_1)\\
& = & j_2(\theta(x_1)(x_2))-j_1(\rho(x_2)(x_1)).
\end{eqnarray*}
This holds if and only if $\theta(x_1)\circ j_2=j_2\circ \theta(x_1)$ and $\rho(x_2)\circ j_1=j_1\circ \rho(x_2)$ for all $x_1\in I$ and $x_2\in I^{\perp_{\omega}}$. Therefore, $\theta:I\to \mathfrak{kl}(I^{\perp_{\omega}},\omega_2,j_2)$ and $\rho:I^{\perp_{\omega}}\to \mathfrak{kl}(I,\omega_1,j_1)$.
\end{enumerate}
Clearly $\omega_1$ and $\omega_2$ are non-degenerate scalar 2-cocycles, and, $j_1$ and $j_2$ are integrable complex structures. Hence, both $(I,\omega_1,j_1,\cdot)$ and $(I^{\perp_{\omega}},\omega_2,j_2,\cdot')$ are special K\"ahler Lie algebras and $(\mathfrak{g},\omega,j,\cdot)$ is actually the twisted cartesian product of $I$ and $I^{\perp_{\omega}}$ according to the Lie algebra representations $(\theta,\rho)$ defined above. \end{proof}
\begin{corollary} If in the hypothesis of Theorem \ref{TwistedDouble} we assume that $I$ is a bilateral ideal of $(\mathfrak{g},\cdot)$, then $\theta=0$, which means that $\mathfrak{g}$ is obtained as the semi-direct product of $\mathfrak{g}_2$ with $\mathfrak{g}_1$ by means of $\rho$. \end{corollary} \begin{proof} As $I$ is a bilateral ideal, we get that $\theta(x_1)(x_2)=x_1\cdot x_2\in I$ for all $x_1\in I$ and $x_2\in I^{\perp_{\omega}}$ and hence $\omega_2(\theta(x_1)(x_2),y_2)=0$ for all $y_2\in I^{\perp_{\omega}}$. Therefore, as $\omega_2$ is non-degenerate, we deduce that $\theta(x_1)(x_2)=0$ for all $x_1\in I$ and $x_2\in I^{\perp_{\omega}}$. That is, $\theta=0$. \end{proof}
\begin{example}\label{GoodExample}
Consider the special K\"ahler Lie algebra $\mathfrak{g}_3$ in dimension $4$ associated to the special K\"ahler Lie group $G_3$ given in Example \ref{KeyExample1}. Here we set $\mathfrak{g}_3\cong \textnormal{Vect}_\mathbb{R}\lbrace e_1,e_2,e_3,e_4\rbrace$ with nonzero Lie brackets
$$[e_1,e_2]=[e_1,e_4]=[e_3,e_2]=[e_3,e_4]=e_2-e_4.$$
On this Lie algebra we have the structure of special K\"ahler Lie algebra:
\begin{enumerate}
\item[$\iota.$] symplectic form $\omega=e_1^\ast \wedge e_2^\ast - e_3^\ast\wedge e_4^\ast$,
\item[$\iota\iota.$] integrable complex structure $j(e_1)=e_2$ and $j(e_3)=e_4$, and
\item[$\iota\iota\iota.$] left symmetric product
\begin{center}
\begin{tabular}{c|c|c|c|c}
$\overline{\cdot}$ & $e_1$ & $e_2$ & $e_3$ & $e_4$ \\
\hline
$e_1$ & $-e_1+e_3$ & $e_2-e_4$ & $-e_1+e_3$ & $e_2-e_4$ \\
\hline
$e_2$ & $0$ & $2e_1-2e_3$ & $0$ & $2e_1-2e_3$ \\
\hline
$e_3$ & $-e_1+e_3$ & $e_2-e_4$ & $-e_1+e_3$ & $e_2-e_4$ \\
\hline
$e_4$ & $0$ & $2e_1-2e_3$ & $0$ & $2e_1-2e_3$ \\
\end{tabular}
\end{center}
\end{enumerate}
Let us now consider the model space of special K\"ahler Lie group $((\mathbb{R}^{2n},+),\omega_0,J_0,\nabla^0)$. Note that the special K\"ahler Lie algebra associated to this Lie group is $(\mathbb{R}^{2n},\omega_0,J_0,\cdot^0)$ where $x\cdot^0 y=0$ since $\nabla^0_{\partial_i}{\partial_j}=0$.
A straightforward computation allows us to get that an element $D\in\mathfrak{sp}(\mathfrak{g}_3,\omega)$ such that $[D,j]=0$ has the form:
$$D= \left(
\begin{array}{cccc}
0 & a & b & c\\
-a & 0 & -c & b\\
b & -c & 0 & d\\
c & b & -d & 0
\end{array}
\right),\qquad a,b,c,d\in\mathbb{R}.$$
On the one hand, let $T:\mathbb{R}^{2n}\to \mathbb{R}$ be a linear transformation and define the linear Lie algebra representation $\rho: \mathbb{R}^{2n}\to \mathfrak{kl}(\mathfrak{g}_3,\omega,j)$ as $\rho(x)=T(x)D$ for $D$ as fixed above. On the other hand, let $\theta:\mathfrak{g}_3\to \mathfrak{kl}(\mathbb{R}^{2n},\omega_0,J_0)$ be the trivial representation; $\theta=0$. For the product \eqref{TwistedProduct} to be a left symmetric product on $\mathfrak{g}=\mathfrak{g}_3\oplus \mathbb{R}^{2n}$, we only need to determine the conditions under which equation \eqref{Twisted1} holds true. Namely,
\begin{equation}\label{ExampleTwisted}
T(x_2)(L_{x_1}\circ D-D\circ L_{x_1})=-T(x_2)L_{D(x_1)}.
\end{equation}
Here $L$ is the linear representation induced by the left symmetric product $\overline{\cdot}$. If $x_2\in \textnormal{ker}(T)$, then the identity \eqref{ExampleTwisted} trivially holds. However, if $x_2\notin \textnormal{ker}(T)$ and $x_1=e_1$, then a straightforward computation shows that \eqref{ExampleTwisted} is true if and only if
$$D= \left(
\begin{array}{cccc}
0 & a & 0 & a\\
-a & 0 & -a & 0\\
0 & -a & 0 & -a\\
a & 0 & a & 0
\end{array}
\right),\qquad a\in\mathbb{R}.$$
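Explicitly, reading the matrix as acting on column vectors, this $D$ is the general element of $\mathfrak{sp}(\mathfrak{g}_3,\omega)$ displayed above specialized at $b=0$, $c=a$, and $d=-a$, and it acts on the basis by
$$D(e_1)=D(e_3)=-a(e_2-e_4),\qquad D(e_2)=D(e_4)=a(e_1-e_3),$$
which makes the brackets $[e_j,\hat{e}_k]=-T(\hat{e}_k)D(e_j)$ and the products $\hat{e}_k\cdot e_j=T(\hat{e}_k)D(e_j)$ appearing below completely explicit.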
If $D$ takes this form, then it is simple to check that the identity \eqref{ExampleTwisted} always holds for $x_1\in\{e_2,e_3,e_4\}$. Therefore, the vector space $\mathfrak{g}=\mathfrak{g}_3\oplus \mathbb{R}^{2n}$ has a non-trivial structure of special K\"ahler Lie algebra given by
\begin{enumerate}
\item[$\iota.$] the nonzero Lie brackets:
$$[e_1,e_2]=[e_1,e_4]=[e_3,e_2]=[e_3,e_4]=e_2-e_4\qquad\textnormal{and}\qquad [e_j,\hat{e}_k]=-T(\hat{e}_k)D(e_j)$$
for all $j=1,\cdots,4$ and $k=1,2,\cdots, 2n$,
\item[$\iota\iota.$] the symplectic form $\displaystyle \widetilde{\omega}=e_1^\ast \wedge e_2^\ast - e_3^\ast\wedge e_4^\ast+\sum_{k=1}^{n}\hat{e}_k^\ast\wedge \hat{e}_{n+k}^\ast$,
\item[$\iota\iota\iota.$] the integrable complex structure $\widetilde{j}(e_1)=e_2$, $\widetilde{j}(e_3)=e_4$, and $\widetilde{j}(\hat{e}_k)=\hat{e}_{n+k}$ for all $k=1,2,\cdots, n$; and
\item[$\iota\nu.$] the left symmetric product
\begin{center}
\begin{tabular}{c|c|c|c|c|c}
$\cdot$ & $e_1$ & $e_2$ & $e_3$ & $e_4$ & $\hat{e}_j$ \\
\hline
$e_1$ & $-e_1+e_3$ & $e_2-e_4$ & $-e_1+e_3$ & $e_2-e_4$ & $0$ \\
\hline
$e_2$ & $0$ & $2e_1-2e_3$ & $0$ & $2e_1-2e_3$ & $0$ \\
\hline
$e_3$ & $-e_1+e_3$ & $e_2-e_4$ & $-e_1+e_3$ & $e_2-e_4$ & $0$\\
\hline
$e_4$ & $0$ & $2e_1-2e_3$ & $0$ & $2e_1-2e_3$ & $0$ \\
\hline
$\hat{e}_k$ & $T(\hat{e}_k)D(e_1)$ & $T(\hat{e}_k)D(e_2)$ & $T(\hat{e}_k)D(e_3)$ & $T(\hat{e}_k)D(e_4)$ & $0$ \\
\end{tabular}
\end{center}
for all $j,k=1,2,\cdots, 2n$.
\end{enumerate} Since $\widetilde{L}_{e_1}\circ \widetilde{j}\neq \widetilde{j}\circ \widetilde{L}_{e_1}$ we have that the left invariant flat affine symplectic connection $\widetilde{\nabla}$ determined by $\cdot$ satisfies $\widetilde{\nabla}\widetilde{J}\neq 0$. Finally, the signature of the scalar product on $\mathfrak{g}$ induced by $(\widetilde{\omega},\widetilde{j})$ is $(2,2n+2)$. \end{example}
\section{A double extension of a special K\"ahler Lie algebra} In \cite{V}, a method was introduced for constructing flat affine symplectic Lie algebras of dimension $2n+2$ starting from a flat affine symplectic Lie algebra of dimension $2n$. This method uses the cohomology for left symmetric algebras developed in \cite{N} and is called a \emph{double extension of a flat affine symplectic Lie algebra}. In this section we will use this construction to obtain an easy way of producing special K\"ahler Lie algebras.
For the purposes of this section let us assume that $(\mathfrak{g},\omega,j,\cdot)$ is a special K\"ahler Lie algebra such that the scalar product $k$ induced by $(\omega,j)$ is positive definite. In the same spirit as \cite{DM} and \cite{V}, we may find a reduction process for a special K\"ahler Lie algebra structure. \begin{lemma}[Reduction]\label{reduction} Suppose that $(\mathfrak{g},\omega,j,\cdot)$ is a special K\"ahler Lie algebra. Let $I$ be a totally isotropic bilateral ideal of $(\mathfrak{g},\omega,\cdot)$. Then \begin{enumerate}
\item[$\iota.$] The product $\cdot$ in $I$ is null, $I\cdot I^{\perp_\omega}=0$, and $ I^{\perp_\omega}$ is a left ideal.
\item[$\iota\iota.$] $ I^{\perp_\omega}$ is a right ideal if and only if, $ I^{\perp_\omega}\cdot I=0$.
\item[$\iota\iota\iota.$] If $ I^{\perp_\omega}$ is a bilateral ideal of $(\mathfrak{g},\cdot)$, then the canonical sequences
\begin{equation}\label{secu1}
0\longrightarrow I\hookrightarrow I^{\perp_\omega}\longrightarrow I^{\perp_\omega}/ I=B\longrightarrow 0,
\end{equation}
\begin{equation}\label{secu3}
0\longrightarrow I\hookrightarrow \mathfrak{g}\longrightarrow \mathfrak{g}/ I\longrightarrow 0,
\end{equation}
are sequences of left symmetric algebras. Moreover, the quotient Lie algebra $B=I^{\perp_\omega}/I$ admits a canonical structure of special K\"ahler Lie algebra. \end{enumerate} \end{lemma} \begin{proof} Items $\iota.$ and $\iota\iota.$ are straightforward computations; see for instance \cite{V}. Let us verify $\iota\iota\iota.$ in more detail. Suppose that $I^{\perp_\omega}$ is a bilateral ideal of $(\mathfrak{g},\cdot)$. Given that $I\subset I^{\perp_\omega}$, the fact that $\omega$ is a non-degenerate scalar 2-cocycle implies that $I$ is Abelian. Thus, the quotient vector space $B=I^{\perp_\omega}/I$ inherits a natural structure of symplectic Lie algebra by passing to the quotient; see \cite{DM}. More precisely, $B$ has a Lie algebra structure given by $$[x+I,y+I]=[x,y]+I=(x\cdot y-y\cdot x)+I,\qquad x,y\in I^{\perp_\omega}.$$ As $\omega$ is a non-degenerate scalar 2-cocycle, it induces on $I^{\perp_\omega}$ a bilinear form with radical $I$. Hence, $\left.\omega\right\vert_{I^{\perp_\omega}\times I^{\perp_\omega}}$ defines, passing to the quotient, a non-degenerate and skew-symmetric bilinear form $\omega'$ on $B=I^{\perp_\omega}/I$ which is also a scalar 2-cocycle of the Lie algebra $B$. This symplectic form is given by $\omega'(x+I,y+I)=\omega(x,y)$ for all $x,y\in I^{\perp_\omega}$.
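Indeed, $\omega'$ is well defined: for $a,b\in I$ and $x,y\in I^{\perp_\omega}$ we have
$$\omega(x+a,y+b)=\omega(x,y)+\omega(x,b)+\omega(a,y)+\omega(a,b)=\omega(x,y),$$
since $\omega(x,b)=\omega(a,y)=0$ by the definition of $I^{\perp_\omega}$ and $\omega(a,b)=0$ because $I$ is totally isotropic.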
If we denote the class of $x\in I^{\perp_\omega}$ modulo $I$ by $\overline{x}=x+I$, then the left symmetric product $$\overline{x}\cdot\overline{y}=(x+I)\cdot (y+I)=x\cdot y+I=\overline{x\cdot y},$$ satisfies $$\omega'(\overline{x}\cdot\overline{y},\overline{z})+\omega'(\overline{y},\overline{x}\cdot\overline{z})=\omega'(\overline{x\cdot y},\overline{z})+\omega'(\overline{y},\overline{x\cdot z})=\omega(x\cdot y,z)+\omega(y,x\cdot z)=0,$$ for all $x,y,z\in I^{\perp_\omega}$.
It is worth noticing that, as the scalar product $k$ on $\mathfrak{g}$ induced by $(\omega,j)$ is positive definite, for each $a\in I\backslash\lbrace 0\rbrace$ we get that $k(a,a)=\omega(a,j(a))>0$, so that $j(a)\notin I^{\perp_\omega}$. In consequence, $I^{\perp_\omega}\cap j(I)=\lbrace 0\rbrace$ and, more importantly for us, $(I^{\perp_\omega}\cap j(I)^{\perp_\omega})\oplus I=I^{\perp_\omega}$, where $I^{\perp_\omega}\cap j(I)^{\perp_\omega}$ is a subspace of $I^{\perp_\omega}$ invariant under $j$; see \cite{DM} for further details. This implies that $j$ restricts to $I^{\perp_\omega}\cap j(I)^{\perp_\omega}$, which in turn may be identified with $B$. Under this identification each class $\overline{x}=x+I$ has a unique representative $x\in I^{\perp_\omega}\cap j(I)^{\perp_\omega}$, so the map $j':B\to B$ given by $j'(\overline{x})=j(x)+I=\overline{j(x)}$ defines an integrable complex structure on $B$ such that $(B,\omega',j')$ is a K\"ahler Lie algebra. Moreover, we get that $$j'(\overline{[x,y]})=\overline{j([x,y])}=\overline{x\cdot j(y)-y\cdot j(x)}=\overline{x\cdot j(y)}-\overline{y\cdot j(x)}=\overline{x}\cdot \overline{j(y)}-\overline{y} \cdot \overline{j(x)}.$$ In conclusion, the triple $(B,\omega',j',\overline{\cdot})$ is a special K\"ahler Lie algebra.
\end{proof}
\begin{Assumption} Assume that both $I$ and $I^{\perp_\omega}$ are bilateral ideals of $(\mathfrak{g},\cdot)$ with $\dim I=1$. \end{Assumption} It is clear that if $\dim I=1$, then we are in the conditions of Lemma \ref{reduction}. Let $B=I^{\perp_\omega}/I$ denote the special K\"ahler Lie algebra obtained from the reduction. Suppose that $I=\mathbb{R}e$ with $k(e,e)=1$ and set $d=j(e)$ so that $\mathbb{R}d$ is a 1-dimensional subspace in $\mathfrak{g}$ such that $\omega(e,d)=1$. As $B\approx I^{\perp_\omega}\cap j(I)^{\perp_\omega}$ is invariant by $j$ and $j(I)=\mathbb{R}d$ then we can identify $\mathfrak{g}\approx \mathbb{R}e\oplus B\oplus \mathbb{R}d$. From now on, the left symmetric products of $B$ and $\mathfrak{g}$ will be denoted by $\cdot$ and $\diamond$, respectively. Also, if $A\in \mathfrak{gl}(B)$ then we denote by $A^\ast$ the adjoint map with respect to $\omega'$ associated to $A$, that is, $A^\ast\in \mathfrak{gl}(B)$ verifies $\omega'(A(x),y)=\omega'(x,A^\ast(y))$ for all $x,y\in B$. Under the previous identifications we have the following facts. See \cite{V} for more details about $\iota.$, $\iota\iota.$, and $\iota\iota\iota.$ stated below. \begin{enumerate}
\item[$\iota.$] Lie algebra structure of $\mathfrak{g}$:
\begin{eqnarray}\label{bracketdoble2}
& & [d,e]=\mu e\nonumber\\
& & [d,x]=\omega'(z_0,x)e+D(x)\\
& & [x,y]=\omega'((u+u^*)(x),y)e+[x,y]_B\nonumber,\qquad x,y\in B.
\end{eqnarray}
\item[$\iota\iota.$] Non-degenerate scalar 2-cocycle $\omega$:
$$\omega|_B=\omega'\qquad \omega(e,d)=1\qquad\textnormal{and}\qquad \textnormal{Vect}_\mathbb{R}\lbrace e,d\rbrace \perp_{\omega} B.$$
\item[$\iota\iota\iota.$] Left symmetric product $\diamond$ compatible with the Lie algebra structure of $\mathfrak{g}$ and symplectic with respect to $\omega$:
\begin{eqnarray}\label{productdoble2}
& & e\diamond x=x\diamond e=e\diamond e=0\nonumber\\
& & x\diamond y=\omega'(u(x),y)e+x\cdot y\nonumber\\
& & d\diamond x=\omega'(x_0,x)e+(D+u)(x)\nonumber\\
& & x\diamond d=\omega'(x_0-z_0,x)e+u(x)\\
& & d\diamond e=\lambda e\nonumber\\
& & e\diamond d=(\lambda-\mu)e\nonumber\\
& & d\diamond d=\beta e+x_0-\lambda d\nonumber,
\end{eqnarray} \end{enumerate} where $\lambda,\mu,\beta\in\mathbb{R}$, $x_0,z_0\in B$, $D\in\mathfrak{gl}(B)$, and $u\in Z_{L}^1(B,B)$ such that $D+u\in\mathfrak{sp}(B,\omega')$. All these parameters must verify the following algebraic conditions: \begin{enumerate}
\item $\lambda=\mu$ or $\lambda=\dfrac{\mu}{2}$.
\item $[u,D]_{\mathfrak{gl}(B)}=u^2+\lambda u-R_{x_0}$.
\item $D^\ast(x_0-z_0)-2u^\ast(x_0)-2\lambda(x_0-z_0)+(\lambda-\mu)z_0=0$.
\item $D(x)\cdot y+x\cdot D(y)-D(x\cdot y)=u(x\cdot y)-x\cdot u(y)$.
\item $(\lambda-\mu)(u+u^\ast)(x)-2(u\circ u^\ast)(x)=(L_x+R_x^\ast)(x_0-z_0)$. \end{enumerate} Here $R_x:B\to B$ is defined as $R_x(y)=L_y(x)=y\cdot x$ for all $x,y\in B$. For more details see \cite{V}.
Let us now see what happens with the integrable complex structure $j$. Note that because of the identifications we have done above we know that $j$ satisfies \begin{equation}\label{ComplexStructureExtended}
j|_B=j'\quad\textnormal{and}\quad j(e)=d, \end{equation} so that $j(d)=-e$. To look at the integrability of $j$ with respect to the Lie bracket \eqref{bracketdoble2} we need to analyze all possible cases: \begin{itemize} \item The equality $[j(e),j(d)]-[e,d]=j[j(e),d]+j[e,j(d)]$ always holds. \item The identity $[j(e),j(x)]-[e,x]=j[j(e),x]+j[e,j(x)]$ holds if and only if $$\omega'(z_0,x)=\omega'(z_0,j'(x))=0\qquad\textnormal{and}\qquad [D,j'](x)=0,\qquad x\in B.$$ That is, $[D,j']=0$ and, because $\omega'$ is non-degenerate, we also have $z_0=0$. \item Analogously, the equality $[j(d),j(x)]-[d,x]=j[j(d),x]+j[d,j(x)]$ is satisfied if and only if $z_0=0$ and $[D,j']=0$. \item Finally, given that $j'$ is integrable, the identity $[j(x),j(y)]-[x,y]=j[j(x),y]+j[x,j(y)]$ is true if and only if $$\omega'((u+u^\ast)(j'(x)),j'(y))=\omega'((u+u^*)(x),y)\qquad\textnormal{and}$$ $$\omega'((u+u^\ast)(j'(x)),y)=-\omega'((u+u^*)(x),j'(y)),$$ for all $x,y\in B$. As we have that $j'\in \mathfrak{sp}(B,\omega')$ and $\omega'$ is non-degenerate, the previous conditions are verified if and only if $[u+u^\ast,j']=0$. \end{itemize} Let us now look at the condition $j\in Z_{L}^1(\mathfrak{g},\mathfrak{g})$, where $L$ denotes the linear representation determined by the left symmetric product $\diamond$ given in \eqref{productdoble2}. \begin{itemize} \item The equality $j([e,d])=e\diamond j(d)-d\diamond j(e)$ holds if and only if $-\mu d=-(\beta e+x_0-\lambda d)$. This implies that $\beta=0$, $x_0=0$, and $\lambda=-\mu$. Note that a strong condition in the double extension of a flat affine symplectic Lie algebra is $\lambda=\mu$ or $\lambda=\dfrac{\mu}{2}$. Thus $\lambda=\mu=0$ as well. \item The identity $j([e,x])=e\diamond j(x)-x\diamond j(e)$ is satisfied if and only if $x\diamond d=0$ for all $x\in B$. As $z_0=x_0=0$ we get that $u=0$. 
\item Given that $z_0=x_0=0$ and $u=0$, the equality $j([d,x])=d\diamond j(x)-x\diamond j(d)$ is true if and only if $(j'\circ D)(x)=(D\circ j')(x)$ for all $x\in B$. So, $[D,j']=0$. \item Note that, as a consequence of the algebraic condition (4) presented above, $D$ must be a derivation with respect to the left symmetric product since $u=0$. \item Finally, because $j'\in Z_L^1(B,B)$, the identity $j([x,y])=x\diamond j(y)-y\diamond j(x)$ always holds. \end{itemize} Summing up, we have the following results: \begin{proposition}\label{PositiveResult}
Let $(\mathfrak{g},\omega,j,\cdot)$ be a special K\"ahler Lie algebra. Assume that $I=\mathbb{R}e$ with $k(e,e)=1$ is a $1$-dimensional subspace in $\mathfrak{g}$ such that both $I$ and $I^{\perp_\omega}$ are bilateral ideals of $(\mathfrak{g},\cdot)$. If $B=I^{\perp_\omega}/I$ denotes the special K\"ahler Lie algebra obtained from the reduction through $I$, then when setting $d=j(e)$ the left symmetric product of $\mathfrak{g}$ is given by
$$\diamond|_B=\cdot \qquad e\diamond \mathfrak{g}=\mathfrak{g}\diamond e=\mathfrak{g}\diamond d=d\diamond(\mathbb{R}e\oplus \mathbb{R}d) =\lbrace 0\rbrace\qquad\textnormal{and}\qquad d\diamond x=D(x),\qquad x\in B,$$ where $D\in \mathfrak{sp}(B,\omega')$ is a derivation of the left symmetric algebra $(B,\cdot)$ verifying $[D,j']=0$. \end{proposition} Reciprocally, we get a method for constructing special K\"ahler Lie algebras: \begin{theorem}\label{doubleKahler} Let $(\mathfrak{g},\omega,j,\cdot)$ be a special K\"ahler Lie algebra and let $D\in \mathfrak{sp}(\mathfrak{g},\omega)$ be a derivation of the left symmetric algebra $(\mathfrak{g},\cdot)$ verifying $[D,j]=0$. Then the vector space $\hat{\mathfrak{g}}:=\mathbb{R}e\oplus \mathfrak{g} \oplus \mathbb{R}d$ equipped with \begin{enumerate} \item[$\iota.$] the Lie bracket $[\cdot,\cdot]$:
$$[\cdot,\cdot]|_\mathfrak{g}=[\cdot,\cdot]_\mathfrak{g},\qquad e\in \mathfrak{z}(\hat{\mathfrak{g}})\qquad \textnormal{and}\qquad [d,x]=D(x),\qquad x\in\mathfrak{g}$$ \item[$\iota\iota.$] the non-degenerate scalar 2-cocycle $\widetilde{\omega}$:
$$\widetilde{\omega}|_\mathfrak{g}=\omega,\qquad \widetilde{\omega}(e,d)=1\qquad\textnormal{and}\qquad \textnormal{Vect}_\mathbb{R}\lbrace e,d\rbrace \perp_{\widetilde{\omega}} \mathfrak{g}$$ \item[$\iota\iota\iota.$] the left symmetric product $\diamond$:
$$\diamond|_\mathfrak{g}=\cdot \qquad e\diamond \hat{\mathfrak{g}}=\hat{\mathfrak{g}}\diamond e=\hat{\mathfrak{g}}\diamond d=d\diamond(\mathbb{R}e\oplus \mathbb{R}d) =\lbrace 0\rbrace\qquad\textnormal{and}\qquad d\diamond x=D(x),\qquad x\in\mathfrak{g}$$ \item[$\iota\nu.$] and the integrable complex structure $\widetilde{j}$:
$$\widetilde{j}|_\mathfrak{g}=j\qquad\textnormal{and}\qquad \widetilde{j}(e)=d ,$$ \end{enumerate} defines another special K\"ahler Lie algebra. \end{theorem} \begin{definition} The Lie algebra $(\hat{\mathfrak{g}},\widetilde{\omega},\widetilde{j},\diamond)$ obtained in Theorem \ref{doubleKahler} is called the \emph{double extension} of the special K\"ahler Lie algebra $(\mathfrak{g},\omega,j,\cdot)$ according to $D$. \end{definition} \begin{remark}
\begin{enumerate}
\item[$\iota.$] If $k$ denotes the scalar product on $\mathfrak{g}$ induced by $(\omega,j)$, then the scalar product $\widetilde{k}$ on the double extension $\hat{\mathfrak{g}}$ which is induced by $(\widetilde{\omega},\widetilde{j})$ can be seen as
$$\widetilde{k}=\left(
\begin{array}{ccc}k & & \\
& 1 &\\
& & 1
\end{array}
\right).$$
\item[$\iota\iota.$] The requirement that $k$ be positive definite is essential in the proof of the reduction procedure from Lemma \ref{reduction}. If we allow the scalar product $k$ to have signature $(p,q)$ with both $p$ and $q$ nonzero, then the reduction procedure and Proposition \ref{PositiveResult} are not true in general. However, if $(\mathfrak{g},\omega,j,\cdot)$ is a special K\"ahler Lie algebra where the scalar product $k$ induced by $(\omega,j)$ is not necessarily positive definite, then the double extension process stated in Theorem \ref{doubleKahler} still works. To prove this claim it suffices to use the double extension process of a flat affine symplectic Lie algebra introduced in \cite{V} and to extend the complex structure as in equation \eqref{ComplexStructureExtended}. The integrability and cohomology requirements on the extended complex structure are exactly the same as those obtained before. In this case, if the signature of $k$ is $(p,q)$, then the signature of $\widetilde{k}$ is $(p+2,q)$.
\end{enumerate} \end{remark}
\begin{corollary}
If $(G,\omega,J,\nabla)$ is a simply connected special K\"ahler Lie group whose Lie algebra is obtained as a double extension, then $G$ is identified with a Lie subgroup of $\mathfrak{g}\rtimes_{\textnormal{Id}}\textnormal{Sp}(\mathfrak{g},\omega_e)$ containing a nontrivial $1$-parameter subgroup formed by central translations. In particular, if $\nabla J=0$, then such a subgroup is contained in $\mathfrak{g}\rtimes_{\textnormal{Id}}\textnormal{KL}(\mathfrak{g},\omega_e,J_e)$. \end{corollary} \begin{proof} Let $(\mathfrak{g},\omega,j,\cdot)$ be the special K\"ahler Lie algebra associated to $(G,\omega,J,\nabla)$. If $\mathfrak{g}$ is obtained as a double extension of a special pseudo-K\"ahler Lie algebra $B$ according to $D$, then it decomposes as $\mathfrak{g}=\mathbb{R}e\oplus B\oplus \mathbb{R}d$ where $(\omega,j,\cdot)$ are given like $(\widetilde{\omega},\widetilde{j},\diamond)$ in Theorem \ref{doubleKahler}. As $e\diamond \mathfrak{g}=\lbrace 0\rbrace$, it is clear that $\textnormal{Ker}(L)\neq \lbrace 0\rbrace$ since $L_e=0$. Given that $G$ is simply connected, there exists a Lie group homomorphism $\rho: G\to \mathfrak{g}\rtimes_{\textnormal{id}}\textnormal{Sp}(\mathfrak{g},\omega)$ which is determined by the expression $$\rho(\exp_G(x))=\left(\sum_{m=1}^\infty \dfrac{1}{m!}(L_x)^{m-1}(x),\sum_{m=0}^\infty \dfrac{1}{m!}(L_x)^m\right).$$ See \cite{V} for more details about such a Lie group homomorphism. Therefore, as $L_e=0$, we have that $\rho$ determines a nontrivial $1$-parameter subgroup $H$ of $\rho(G)\approx G$ formed by central translations which is induced by $t\mapsto \textsf{exp}_G(te)$ and given by $$H=\left\lbrace \rho(\textsf{exp}_G(te))=(te,\textnormal{Id}_\mathfrak{g}):\ t\in\mathbb{R}\right\rbrace.$$ In particular, if $\nabla J=0$ then it follows from Theorem \ref{etale} that $H$ is actually contained in $\mathfrak{g}\rtimes_{\textnormal{Id}}\textnormal{KL}(\mathfrak{g},\omega,j)$. 
\end{proof} \begin{corollary}\label{NablaJ} Let $(G_1,\omega_1,J_1,\nabla_1)$ and $(G_2,\omega_2,J_2,\nabla_2)$ be two special K\"ahler Lie groups such that the Lie algebra of $G_1$ is obtained as a double extension from the Lie algebra of $G_2$ according to $D$. Then $\nabla_1 J_1=0$ if and only if $\nabla_2 J_2=0$. \end{corollary} \begin{proof} The result follows from the condition $[j_2,D]=0$. \end{proof}
It is well known that a left invariant flat affine symplectic connection on a connected Lie group is geodesically complete if and only if the group is unimodular (see for instance \cite{Ba,V}). So, the following result is clear: \begin{corollary} Let $(G_1,\omega_1,J_1,\nabla_1)$ and $(G_2,\omega_2,J_2,\nabla_2)$ be two special K\"ahler Lie groups such that the Lie algebra of $G_1$ is obtained as a double extension from the Lie algebra of $G_2$ according to $D$. Then $\nabla_1$ is geodesically complete if and only if $\nabla_2$ is geodesically complete and $\textnormal{tr}(D)=0$. \end{corollary}
\begin{corollary} Let $(G,\omega,J,\nabla)$ be a special K\"ahler Lie group such that $\nabla$ is bi-invariant. Then $G$ is nilpotent and its Lie algebra is obtained as a double extension starting from $\{0\}$. In particular, $\nabla J=0$. \end{corollary} \begin{proof} That $G$ is nilpotent and that its Lie algebra is obtained as a double extension starting from $\{0\}$ are immediate consequences of Propositions 3.11 and 4.7 of \cite{V} together with Theorem \ref{doubleKahler}. Given that, up to isomorphism, the only special K\"ahler Lie group of dimension $2$ is $((\mathbb{R}^{2},+),\omega_0,J_0,\nabla^0)$ and $\mathfrak{g}$ is obtained by a series of double extensions starting from $\{0\}$, this series must pass through $\mathbb{R}^2$. Therefore, the fact that $\nabla^0 J_0=0$ and Corollary \ref{NablaJ} imply that $\nabla J=0$. \end{proof}
\begin{example} Using $((\mathbb{R}^{2n},+),\omega_0,J_0,\nabla^0)$, the model special K\"ahler Lie group, and the double extension process, we can easily obtain a generic example of a non-Abelian special K\"ahler Lie group. Note that the special K\"ahler Lie algebra associated to this Lie group is $(\mathbb{R}^{2n},\omega_0,J_0,\cdot^0)$ where $x\cdot^0 y=0$ since $\nabla^0_{\partial_i}{\partial_j}=0$. If $D\in\mathfrak{sp}(\mathbb{R}^{2n},\omega_0)$ is such that $[D,J_0]=0$, then the vector space $\mathfrak{g}=\mathbb{R}e_{2n+2}\oplus \mathbb{R}^{2n}\oplus \mathbb{R}e_1\cong \mathbb{R}^{2n+2}$ admits a structure of special K\"ahler Lie algebra given by: \begin{enumerate}
\item[$\iota.$] the Lie bracket:
$$[e_1,e_{2n+2}]=[e_{2n+2},e_k]=[e_k,e_{i}]=0\qquad \textnormal{and}\qquad [e_{1},e_k]=D(e_k)$$
for all $k,i=2,\cdots, 2n+1$. Here $\lbrace e_1,\cdots, e_{2n+2}\rbrace$ denotes the canonical basis of $\mathbb{R}^{2n+2}$ and we are identifying $\mathbb{R}^{2n}$ with $\textnormal{Vect}_\mathbb{R}\lbrace e_2,\cdots, e_{2n+1}\rbrace$.
\item[$\iota\iota.$] the non-degenerate scalar 2-cocycle $\widetilde{\omega}$:
$$\widetilde{\omega}|_{\mathbb{R}^{2n}}=\omega_0,\qquad \widetilde{\omega}(e_{1},e_{2n+2})=-1\qquad\textnormal{and}\qquad \textnormal{Vect}_\mathbb{R}\lbrace e_{1},e_{2n+2}\rbrace \perp_{\widetilde{\omega}} \mathbb{R}^{2n}$$
\item[$\iota\iota\iota.$] the left symmetric product $\diamond$:
$$e_k \diamond e_{2n+2}=e_{2n+2}\diamond e_k= e_k\diamond e_i=e_k\diamond e_1=0 \qquad \textnormal{and}\qquad e_1\diamond e_k=D(e_k)$$
for all $k,i=2,\cdots, 2n+1$, and
\item[$\iota\nu.$] the integrable complex structure $\widetilde{j}$:
$$\widetilde{j}|_{\mathbb{R}^{2n}}=J_0\qquad\textnormal{and}\qquad \widetilde{j}(e_{2n+2})=e_1.$$ \end{enumerate} A Lie group with Lie algebra $\mathfrak{g}$ is $G=\mathbb{R}\ltimes_\rho \mathbb{R}^{2n+1}$ which is determined by the semi-direct product of $(\mathbb{R},+)$ with $(\mathbb{R}^{2n+1},+)$ by means of the Lie group homomorphism $\rho:\mathbb{R}\to \textnormal{GL}(\mathbb{R}^{2n+1})$ defined by $$\rho(t)=\left(\begin{array}{cc} \textnormal{Exp}(tD) & 0 \\ 0 & 1 \end{array} \right).$$ The product in $G$ is explicitly given as $$(t,x,u)\cdot(t',x',u')=(t+t',\textnormal{Exp}(tD)(x')+x,u+u').$$ Here $(x,u)$, with $x=(x_2,\cdots,x_{2n+1})\in\mathbb{R}^{2n}$, are the coordinates in $\mathbb{R}^{2n+1}$. A basis of left invariant vector fields on $G$ is $$e_1^+=\dfrac{\partial}{\partial t},\qquad e_k^+=\textnormal{Exp}(tD)(e_k)\cdot \left( \dfrac{\partial}{\partial x_2},\cdots, \dfrac{\partial}{\partial x_{2n+1}}\right) \qquad \textnormal{and}\qquad e_{2n+2}^+=\dfrac{\partial}{\partial u},$$ for all $k=2,\cdots, 2n+1$.
It is clear that $J_0\in \mathfrak{sp}(\mathbb{R}^{2n},\omega_0)$. Thus, a particularly interesting choice for the double extension is $D=J_0$. The left invariant vector fields for this particular case are: $$e_1^+=\dfrac{\partial}{\partial t},\qquad e_k^+=\cos(t)\dfrac{\partial}{\partial x_{k}}+\sin(t)\dfrac{\partial}{\partial x_{n+k}},$$ $$e_{n+k}^+=-\sin(t)\dfrac{\partial}{\partial x_{k}}+\cos(t)\dfrac{\partial}{\partial x_{n+k}} \qquad \textnormal{and}\qquad e_{2n+2}^+=\dfrac{\partial}{\partial u}.$$ for all $k=2,\cdots, n+1$. Therefore, the left invariant special K\"ahler structure on $G=\mathbb{R}\ltimes_\rho \mathbb{R}^{2n+1}$ for the case $D=J_0$ is given by \begin{enumerate} \item[$\iota.$] the left invariant symplectic form $\displaystyle \omega=du\wedge dt+\sum_{k=2}^{n+1}dx_{k+n}\wedge dx_{k}$, \item[$\iota\iota.$] the left invariant complex structure $J(e_1^+)=-e_{2n+2}^+$, $J(e_k^+)=e_{n+k}^+$, and $J(e_{n+k}^+)=-e_k^+$, for all $k=2,\cdots, n+1$ and \item[$\iota\iota\iota.$] the left invariant flat affine symplectic connection $$\nabla_{e_1^+}e_k^+=e_{n+k}^+\qquad \nabla_{e_1^+}e_{n+k}^+=-e_k^+,\qquad \nabla_{e_1^+}e_{2n+2}^+=0$$ $$\nabla_{e_{2n+2}^+}=\nabla_{e_k^+}=\nabla_{e_{n+k}^+}=0$$ for all $k=2,\cdots, n+1$. \end{enumerate} Given that $\nabla^0$ is geodesically complete and $\textnormal{tr}(J_0)=0$, we obtain that $\nabla$ is geodesically complete as well. Moreover, as we have that $\nabla^0 J_0=0$ and $[J_0,J_0]=0$, we conclude that $\nabla J=0$. The metric on $G$ associated to $(\omega,J)$ is also Riemannian. \end{example}
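As a quick numerical sanity check (a Python sketch, not part of the paper), one can confirm that $\textnormal{Exp}(tJ_0)$ is the rotation matrix whose $\cos t$ and $\sin t$ entries appear as the coefficients of the left invariant fields above; the $2\times 2$ case and the truncation length of the exponential series are illustrative choices:

```python
import math

def mat_exp_2x2(A, terms=40):
    # Truncated power series exp(A) = sum_m A^m / m! for a 2x2 matrix A.
    E = [[1.0, 0.0], [0.0, 1.0]]
    P = [[1.0, 0.0], [0.0, 1.0]]
    fact = 1.0
    for m in range(1, terms):
        P = [[sum(P[i][k] * A[k][j] for k in range(2)) for j in range(2)]
             for i in range(2)]
        fact *= m
        for i in range(2):
            for j in range(2):
                E[i][j] += P[i][j] / fact
    return E

t = 0.7
# Standard complex structure J_0 in dimension 2; Exp(t*J_0) should be
# the rotation [[cos t, -sin t], [sin t, cos t]].
E = mat_exp_2x2([[0.0, -t], [t, 0.0]])
R = [[math.cos(t), -math.sin(t)], [math.sin(t), math.cos(t)]]
assert all(abs(E[i][j] - R[i][j]) < 1e-9 for i in range(2) for j in range(2))
```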
We end this section by exhibiting a $1$-dimensional family of left invariant special K\"ahler structures in dimension $6$ with associated metric having signature $(4,2)$ and satisfying $\nabla J\neq 0$. \begin{example} Consider the special K\"ahler Lie algebra $\mathfrak{g}_3$ in dimension $4$ associated to the special K\"ahler Lie group $G_3$ given in Example \ref{KeyExample1} and whose special K\"ahler structure was described at the beginning of Example \ref{GoodExample}.
Recall that an element $D\in\mathfrak{sp}(\mathfrak{g}_3,\omega)$ such that $[D,j]=0$ has the form: $$D= \left( \begin{array}{cccc} 0 & a & b & c\\ -a & 0 & -c & b\\ b & -c & 0 & d\\ c & b & -d & 0 \end{array} \right),\qquad a,b,c,d\in\mathbb{R}.$$ A straightforward computation allows us to deduce that $D(e_1\cdot e_1)=D(e_1)\cdot e_1+e_1\cdot D(e_1)$ if and only if $b=0$, $c=a$, and $d=-a$. Furthermore, if $D_a$ denotes the matrix above after imposing these equalities, then it is simple to check that $D_a$ always defines a derivation for the left symmetric product on $\mathfrak{g}_3$. Therefore, we get a $1$-dimensional family of special K\"ahler Lie algebras $\mathfrak{g}_a$ of dimension $6$ parametrized by $a\in \mathbb{R}$ which are obtained as a double extension from $\mathfrak{g}_3$ according to $D_a$. If $\mathfrak{g}_a\cong \textnormal{Vect}_\mathbb{R}\lbrace e, e_1,e_2,e_3,e_4,d\rbrace$ then this is equipped with
\begin{enumerate} \item[$\iota.$] nonzero Lie brackets $[e_1,e_2]=[e_1,e_4]=[e_3,e_2]=[e_3,e_4]=e_2-e_4$, and $[d,e_i]=D_a(e_i)$ for all $i=1,\cdots,4$, \item[$\iota\iota.$] symplectic form $\widetilde{\omega}=e^\ast \wedge d^\ast+e_1^\ast \wedge e_2^\ast - e_3^\ast\wedge e_4^\ast$, \item[$\iota\iota\iota.$] integrable complex structure $\widetilde{j}(e)=d$, $\widetilde{j}(e_1)=e_2$, and $\widetilde{j}(e_3)=e_4$; and \item[$\iota\nu.$] left symmetric product \begin{center}
\begin{tabular}{c|c|c|c|c|c|c} $\diamond_a$ & $e$ & $e_1$ & $e_2$ & $e_3$ & $e_4$ & $d$ \\ \hline $e$ & $0$ & $0$ & $0$ & $0$ & $0$ & $0$ \\ \hline $e_1$ & $0$ & $-e_1+e_3$ & $e_2-e_4$ & $-e_1+e_3$ & $e_2-e_4$ & $0$ \\ \hline $e_2$ & $0$ & $0$ & $2e_1-2e_3$ & $0$ & $2e_1-2e_3$ & $0$ \\ \hline $e_3$ & $0$ & $-e_1+e_3$ & $e_2-e_4$ & $-e_1+e_3$ & $e_2-e_4$ & $0$\\ \hline $e_4$ & $0$ & $0$ & $2e_1-2e_3$ & $0$ & $2e_1-2e_3$ & $0$ \\ \hline $d$ & $0$ & $-ae_2+ae_4$ & $ae_1-ae_3$ & $-ae_2+ae_4$ & $ae_1-ae_3$ & $0$ \\ \end{tabular} \end{center} \end{enumerate} Since $\widetilde{L}^a_{e_1}\circ \widetilde{j}\neq \widetilde{j}\circ \widetilde{L}^a_{e_1}$ we have that the left invariant flat affine symplectic connection $\widetilde{\nabla}^a$ determined by $\diamond_a$ satisfies $\widetilde{\nabla}^a\widetilde{J}\neq 0$. Moreover, given that $\mathfrak{g}_3$ is unimodular and $\textnormal{tr}(D_a)=0$, we obtain that $\widetilde{\nabla}^a$ is geodesically complete. Finally, the signature of the scalar product on $\mathfrak{g}_a$ induced by $(\widetilde{\omega},\widetilde{j})$ is $(4,2)$. \end{example}
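The key inequality $\widetilde{L}^a_{e_1}\circ \widetilde{j}\neq \widetilde{j}\circ \widetilde{L}^a_{e_1}$ can be read off directly from the table. The following Python sketch (not part of the paper; coordinates are taken in the basis $\lbrace e,e_1,e_2,e_3,e_4,d\rbrace$, and only the $e_1$ row of the table is needed, which does not involve $a$) checks it on the vector $e_1$:

```python
basis = ["e", "e1", "e2", "e3", "e4", "d"]

def vec(**c):
    # Coordinate vector in the basis (e, e1, e2, e3, e4, d).
    return [c.get(b, 0) for b in basis]

# The e1-row of the multiplication table: L_{e1}(x) = e1 <> x.
L_e1 = {"e": vec(), "e1": vec(e1=-1, e3=1), "e2": vec(e2=1, e4=-1),
        "e3": vec(e1=-1, e3=1), "e4": vec(e2=1, e4=-1), "d": vec()}
# Complex structure: j(e)=d, j(e1)=e2, j(e3)=e4, extended by j^2 = -Id.
J = {"e": vec(d=1), "e1": vec(e2=1), "e2": vec(e1=-1),
     "e3": vec(e4=1), "e4": vec(e3=-1), "d": vec(e=-1)}

def apply(M, v):
    # Apply a linear map given on basis vectors to a coordinate vector.
    out = [0] * 6
    for i, b in enumerate(basis):
        for k in range(6):
            out[k] += v[i] * M[b][k]
    return out

x = vec(e1=1)
lhs = apply(L_e1, apply(J, x))  # L_{e1}(j e1) = e2 - e4
rhs = apply(J, apply(L_e1, x))  # j(L_{e1} e1) = -e2 + e4
assert lhs != rhs  # so the connection does not commute with j
```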
\section*{Acknowledgments} I started this work during a visit to the Mathematisches Institut of Albert--Ludwigs--Universit\"at Freiburg in Freiburg--Germany in February 2020. I am very grateful for the hospitality and support that the Research Training Group 1821 gave me when I was there. I would like to thank Andriy Haydys for pointing out the problem of determining left invariant special K\"ahler structures and Edison Fern\'andez Culma for valuable comments and for having pointed out the infinitesimal version of Example \ref{KeyExample1}. I am also grateful for the partial support given by CODI, Universidad de Antioquia, project 2017-15756 Stable Limit Linear Series on Curves.
I am thankful to the anonymous referee who provided many suggestions and corrections that improved the quality of this paper. Last, but not least, I would like to express my sincere gratitude to my mother and sister for their support and patience when I was writing the first version of the present work.
\end{document}
\begin{definition}[Definition:Degenerate Distribution]
Let $X$ be a discrete random variable on a probability space.
Then $X$ has a '''degenerate distribution with parameter $r$''' {{iff}}:
:$\Omega_X = \set r$
:$\map \Pr {X = k} = \begin {cases}
1 & : k = r \\
0 & : k \ne r
\end {cases}$
That is, there is only one value that $X$ can take, namely $r$, which it takes with certainty.
It trivially gives rise to a probability mass function satisfying $\map \Pr \Omega = 1$.
Equally trivially, it has an expectation of $r$ and a variance of $0$.
\end{definition}
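A minimal numeric illustration (a Python sketch, not part of the ProofWiki entry; the value $r=7$ is chosen arbitrarily) of the expectation and variance claims:

```python
# Degenerate distribution at r: Pr(X = r) = 1, Pr(X = k) = 0 otherwise.
r = 7
pmf = {r: 1.0}  # the whole mass sits on the single point r

total = sum(pmf.values())
mean = sum(k * p for k, p in pmf.items())
var = sum((k - mean) ** 2 * p for k, p in pmf.items())

assert total == 1.0   # Pr(Omega) = 1
assert mean == r      # E[X] = r
assert var == 0.0     # Var(X) = 0
```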
Adjacent-vertex-distinguishing-total coloring
In graph theory, a total coloring is a coloring on the vertices and edges of a graph such that:
(1). no adjacent vertices have the same color;
(2). no adjacent edges have the same color; and
(3). no edge and its endvertices are assigned the same color.
In 2005, Zhang et al.[1] added a restriction to the definition of total coloring and proposed a new type of coloring defined as follows.
Let G = (V,E) be a simple graph endowed with a total coloring φ, and let u be a vertex of G. The set of colors occurring at the vertex u is defined as C(u) = {φ(u)} ∪ {φ(uv) | uv ∈ E(G)}. Two vertices u,v ∈ V(G) are distinguishable if their color-sets are distinct, i.e., C(u) ≠ C(v).
A total coloring is an adjacent-vertex-distinguishing total coloring (AVD-total-coloring) if it has the following additional property:
(4). for every two adjacent vertices u,v of a graph G, their color-sets are distinct, i.e., C(u) ≠ C(v).
The adjacent-vertex-distinguishing-total-chromatic number χat(G) of a graph G is the fewest colors needed in an AVD-total-coloring of G.
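Conditions (1)–(4) can be checked mechanically. The following Python sketch (the function name and the small path example are illustrative, not from the literature) tests a candidate total coloring for all four properties:

```python
def is_avd_total_coloring(edges, phi_v, phi_e):
    # phi_v: vertex -> color; phi_e: frozenset({u, v}) -> color
    for (u, v) in edges:
        # (1) adjacent vertices get distinct colors
        if phi_v[u] == phi_v[v]:
            return False
        # (3) an edge differs in color from both endvertices
        if phi_e[frozenset((u, v))] in (phi_v[u], phi_v[v]):
            return False
    # (2) adjacent edges get distinct colors
    for e1 in edges:
        for e2 in edges:
            if e1 != e2 and set(e1) & set(e2) and \
               phi_e[frozenset(e1)] == phi_e[frozenset(e2)]:
                return False
    # (4) adjacent vertices have distinct color-sets C(u)
    def C(x):
        return {phi_v[x]} | {phi_e[frozenset(e)] for e in edges if x in e}
    return all(C(u) != C(v) for (u, v) in edges)

# Path 1 - 2 - 3: a valid AVD-total coloring ...
edges = [(1, 2), (2, 3)]
ok = is_avd_total_coloring(edges, {1: 1, 2: 2, 3: 1},
                           {frozenset((1, 2)): 3, frozenset((2, 3)): 4})
# ... and one violating condition (2): adjacent edges share color 3.
bad = is_avd_total_coloring(edges, {1: 1, 2: 2, 3: 1},
                            {frozenset((1, 2)): 3, frozenset((2, 3)): 3})
assert ok is True and bad is False
```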
The following lower bound for the AVD-total chromatic number can be obtained from the definition of AVD-total-coloring: If a simple graph G has two adjacent vertices of maximum degree, then χat(G) ≥ Δ(G) + 2.[2] Otherwise, if a simple graph G does not have two adjacent vertices of maximum degree, then χat(G) ≥ Δ(G) + 1.
In 2005, Zhang et al. determined the AVD-total-chromatic number for some classes of graphs, and based in their results they conjectured the following.
AVD-Total-Coloring Conjecture. (Zhang et al.[3])
χat(G) ≤ Δ(G) + 3.
The AVD-Total-Coloring Conjecture is known to hold for some classes of graphs, such as complete graphs,[4] graphs with Δ=3,[5][6] and all bipartite graphs.[7]
In 2012, Huang et al.[8] showed that χat(G) ≤ 2Δ(G) for any simple graph G with maximum degree Δ(G) > 2. In 2014, Papaioannou and Raftopoulou[9] described an algorithmic procedure that gives a 7-AVD-total-coloring for any 4-regular graph.
Notes
1. Zhang 2005.
2. Zhang 2005, p. 290.
3. Zhang 2005, p. 299.
4. Hulgan 2009, p. 2.
5. Hulgan 2009, p. 2.
6. Chen 2008.
7. Zhang 2005.
8. Huang, Wang & Yan 2012.
9. Papaioannou & Raftopoulou 2014.
References
• Zhang, Zhong-fu; Chen, Xiang-en; Li, Jingwen; Yao, Bing; Lu, Xinzhong; Wang, Jianfang (2005). "On adjacent-vertex-distinguishing total coloring of graphs". Science China Mathematics. 48 (3): 289–299. Bibcode:2005ScChA..48..289Z. doi:10.1360/03ys0207. S2CID 6107913.
• Hulgan, Jonathan (2009). "Concise proofs for adjacent vertex-distinguishing total colorings". Discrete Mathematics. 309 (8): 2548–2550. doi:10.1016/j.disc.2008.06.002.
• Chen, Xiang'en (2008). "On the adjacent vertex distinguishing total coloring numbers of graphs with Delta=3". Discrete Mathematics. 308 (17): 4003–4007. doi:10.1016/j.disc.2007.07.091.
• Huang, D.; Wang, W.; Yan, C. (2012). "A note on the adjacent vertex distinguishing total chromatic number of graphs". Discrete Mathematics. 312 (24): 3544–3546. doi:10.1016/j.disc.2012.08.006.
• Chen, Meirun; Guo, Xiaofeng (2009). "Adjacent vertex-distinguishing edge and total chromatic numbers of hypercubes". Information Processing Letters. 109 (12): 599–602. doi:10.1016/j.ipl.2009.02.006.
• Wang, Yiqiao; Wang, Weifan (2010). "Adjacent vertex distinguishing total colorings of outerplanar graphs". Journal of Combinatorial Optimization. 19 (2): 123–133. doi:10.1007/s10878-008-9165-x. S2CID 30532745.
• P. de Mello, Célia; Pedrotti, Vagner (2010). "Adjacent-vertex-distinguishing total coloring of indifference graphs" (PDF). Matematica Contemporanea. 39: 101–110.
• Wang, Weifan; Huang, Danjun (2012). "The adjacent vertex distinguishing total coloring of planar graphs". Journal of Combinatorial Optimization. 27 (2): 379. doi:10.1007/s10878-012-9527-2. S2CID 254642281.
• Chen, Xiang-en; Zhang, Zhong-fu (2008). "AVDTC numbers of generalized Halin graphs with maximum degree at least 6". Acta Mathematicae Applicatae Sinica. 24 (1): 55–58.
• Papaioannou, A.; Raftopoulou, C. (2014). "On the AVDTC of 4-regular graphs". Discrete Mathematics. 330: 20–40. doi:10.1016/j.disc.2014.03.019.
• Luiz, Atílio G.; Campos, C.N.; de Mello, C.P. (2015). "AVD-total-colouring of complete equipartite graphs". Discrete Applied Mathematics. 184: 189–195. doi:10.1016/j.dam.2014.11.006.
\begin{document}
\title{On finite subgroups of groups of type VF} \author{Ian J Leary} \address{Department of Mathematics, The Ohio State University\\231 West 18th Avenue, Columbus, OH 43210-1174, USA} \secondaddress{School of Mathematics, University of Southampton\\Southampton, SO17 1BJ, UK} \asciiaddress{Department of Mathematics, The Ohio State University\\231 West 18th Avenue, Columbus, OH 43210-1174, USA\\and\\School of Mathematics, University of Southampton\\Southampton, SO17 1BJ, UK} \email{[email protected]}
\begin{abstract} For any finite group $Q$ not of prime power order, we construct a group $G$ that is virtually of type $F$, contains infinitely many conjugacy classes of subgroups isomorphic to $Q$, and contains only finitely many conjugacy classes of other finite subgroups. \end{abstract}
\asciiabstract{ For any finite group Q not of prime power order, we construct a group G that is virtually of type F, contains infinitely many conjugacy classes of subgroups isomorphic to Q, and contains only finitely many conjugacy classes of other finite subgroups.}
\primaryclass{20F65} \secondaryclass{19A31, 20E45, 20J05, 57M07} \keywords{Conjugacy classes, finite subgroups, groups of type $F$} \asciikeywords{Conjugacy classes, finite subgroups, groups of type F} \maketitle
\section{Introduction} A group $H$ is said to be of type $F$ if there is a finite classifying space for~$H$, ie, if there exists a finite simplicial complex whose fundamental group is isomorphic to $H$ and whose universal cover is contractible. A group of type $F$ is necessarily torsion-free. It is easily seen that any finite-index subgroup of a group of type $F$ is also of type $F$.
A group $G$ is said to be of type $VF$ if $G$ contains a finite-index subgroup $H$ which is of type $F$, ie, if $G$ is virtually of type $F$. If $H$ has index $n$ in $G$, then the kernel of the action of $G$ on the cosets of $H$ has index at most~$n!$. Hence any group of type $VF$ contains a finite-index normal subgroup of type~$F$, and so for any group $G$ of type $VF$ there is a bound on the orders of finite subgroups of $G$.
K\,S Brown's book `Cohomology of Groups' contains a result that implies that a group of type $VF$ can contain only finitely many conjugacy classes of subgroups of prime power order \cite[IX.13.2]{brown}. The question of whether a group of type $VF$ could ever contain infinitely many conjugacy classes of finite subgroups was posed in \cite{wall,lueck}, and remained unanswered until Brita Nucinkis and the author constructed examples in~\cite{vfg}. These examples may be summarized as follows:
\begin{theorem} \label{vfg} Let $Q$ be a finite group admitting a simplicial action on a finite contractible simplicial complex $L$ such that the fixed point set $L^Q$ is empty. Then there is a group $H_L$ of type $F$ (depending only on $L$) and an action of $Q$ on $H_L$ such that the semi-direct product $H_L{:} Q$ contains infinitely many conjugacy classes of subgroup isomorphic to $Q$. \end{theorem}
R Oliver has shown that a finite group $Q$ admits an action on a finite contractible $L$ without a global fixed point if and only if $Q$ is not expressible as $p$--group-by-cyclic-by-$q$--group for any primes $p$~and~$q$~\cite{bobo}. (Oliver's main result is the construction of actions: the proof that actions do not exist in the other cases is far simpler and we include it in Section~\ref{final}.)
The purpose of this paper is to close the gap between Brown's result and the construction of Theorem~\ref{vfg}. For any finite group $Q$ that is not of prime power order, we construct a group $H$ of type $F$ with an action of $Q$ so that the semi-direct product $H{:} Q$ contains infinitely many conjugacy classes of subgroup isomorphic to $Q$, and finitely many conjugacy classes of other finite subgroups. As a corollary we obtain the following apparently stronger result.
\begin{theorem} \label{freeprod} Let ${\cal Q}=\{Q_1,\ldots, Q_n\}$ be a finite list of isomorphism types of finite group, such that no $Q_i$ is a group of prime power order. There exists a group $G=G({\cal Q})$ of type $VF$ such that $G$ contains infinitely many conjugacy classes of subgroup isomorphic to a finite group $Q$ if and only if $Q\in {\cal Q}$. \end{theorem}
In particular, it follows that a group of type $VF$ may contain infinitely many conjugacy classes of \emph{elements} of finite order, although any such group can only contain finitely many conjugacy classes of elements of prime power order.
Our techniques also apply to other weaker finiteness conditions. Recall that a group $G$ is of type $FP$ over a ring $R$ if the trivial module $R$ for the group ring $RG$ admits a finite resolution by finitely generated projective $RG$--modules, ie, if and only if there is an integer $n$ and an exact sequence of $RG$--modules \[0\rightarrow P_n\rightarrow \cdots \rightarrow P_1\rightarrow P_0 \rightarrow R\rightarrow 0\] in which each $P_i$ is a finitely generated projective. If there exists such a sequence in which each $P_i$ is a finitely generated free module, then $G$ is said to be of type $FL$ over $R$.
In~\cite{vfg} Brita Nucinkis and the author proved the following.
\begin{theorem} \label{fpq} Let $Q$ be a finite group admitting a simplicial action on a finite ${\mathbb Q}$--acyclic simplicial complex $L$ such that the fixed point set $L^Q$ is empty. Then there is a virtually torsion-free group $G=H_L{:} Q$ of type $FP$ over ${\mathbb Q}$ containing infinitely many conjugacy classes of subgroup isomorphic to $Q$. \end{theorem}
R Oliver has shown that a finite group $Q$ admits such an action if and only if $Q$ is not of the form cyclic-by-$p$--group for some prime $p$~\cite{bobo}. In particular, the above construction did not give rise to any groups of type $FP$ over ${\mathbb Q}$ containing infinitely many conjugacy classes of \emph{elements} of finite order. The question of whether such groups can exist was posed by H Bass in~\cite{bass,wall}. One reason why this question is of interest is that if $G$ contains infinitely many conjugacy classes of elements of finite order, then the Grothendieck group $K_0({\mathbb Q} G)$ may be shown to have infinite rank. (We give a proof of this fact below in Theorem~\ref{bass}.)
Any group of type $F$ is of type $FP$ over any ring $R$, and a group $G$ of type $VF$ is of type $FP$ over any ring $R$ in which the orders of all finite subgroups of $G$ are units. In particular, every group of type $VF$ is of type $FP$ over ${\mathbb Q}$. It follows that examples coming from Theorem~\ref{freeprod} may be used to answer Bass's question. By Brown's result, groups of type $VF$ necessarily contain only finitely many conjugacy classes of elements of prime power order. This is not the case for groups of type $FP$ over ${\mathbb Q}$, and in fact for any non-trivial finite group $Q$ we construct a group of type $FP$ over ${\mathbb Q}$ containing infinitely many conjugacy classes of subgroup isomorphic to $Q$, and finitely many conjugacy classes of other finite subgroups. The following is a corollary of our result.
\begin{theorem} \label{freepq} Let ${\cal Q}=\{Q_1,\ldots, Q_n\}$ be a finite list of isomorphism types of non-trivial finite groups. There exists a virtually torsion-free group $G=G({\cal Q})$ of type $FP$ over ${\mathbb Q}$ such that $G$ contains infinitely many conjugacy classes of subgroup isomorphic to a finite group $Q$ if and only if $Q\in {\cal Q}$. \end{theorem}
The groups $H_L$ appearing in the statements of Theorems \ref{vfg}~and~\ref{fpq} are the groups introduced by M Bestvina and N Brady, who used them to solve a number of open problems concerning homological finiteness conditions~\cite{bb}. In particular, in the case when $L$ is a finite acyclic complex that is not contractible, they showed that the group $H_L$ is of type $FL$ over ${\mathbb Z}$ but is not finitely presented. The main idea in~\cite{vfg} was to allow a finite group $Q$ to act on the complex $L$, and hence on the group $H_L$.
The main idea in this paper is to consider Bestvina--Brady groups $H_L$ for \emph{infinite} complexes $L$. If $Q$ is any finite group not of prime power order, then there exists a complex $L$ with a ${\mathbb Z}\times Q$--action such that \begin{enumerate} \item $L$ is contractible;
\item ${\mathbb Z}\times Q$ acts cocompactly on $L$;
\item all cell stabilizers are finite;
\item $\{0\}\times Q$ fixes no point of $L$.
\end{enumerate} The first three properties together imply that the semi-direct product $H_L{:}{\mathbb Z}$ is of type $F$, and the fourth property implies that the semi-direct product $H_L{:}({\mathbb Z}\times Q)$ contains infinitely many conjugacy classes of subgroup isomorphic to $Q$. A construction for $L$ as above in the case when $Q$ is cyclic was given by Conner and Floyd~\cite{cofl}. In Section~\ref{final} we give a construction for arbitrary finite $Q$ which was shown to us by Bob Oliver.
A similar (but simpler) construction involving an infinite ${\mathbb Q}$--acyclic complex $L$ is used in proving our theorem concerning groups of type $FP$ over ${\mathbb Q}$.
In the final section of the paper we discuss some further finiteness properties of the groups that we construct. We show that the groups are residually finite, although we are unable to decide whether they are linear. We also show that each of the groups used in the proofs of Theorems \ref{freeprod}~and~\ref{freepq} occurs as the kernel of a map to ${\mathbb Z}$ from a group that acts cocompactly with finite stabilizers on a CAT(0) cube complex.
The work in this paper builds on the author's joint work with Brita Nucinkis and uses theorems concerning actions of finite groups which the author learned from Bob Oliver. The author gratefully acknowledges their contributions to this work. Some of this work was done at Paris~13 and at the ETH, Z\"urich. The author thanks these institutions for their hospitality.
The author was partially supported by NSF grant DMS-0505471.
\section{Bestvina--Brady groups} \label{bebr} In this section we define the Bestvina--Brady group $H_L$ associated to a flag complex $L$, and we check that some of the results in~\cite{bb,vfg} extend to the case when $L$ is an infinite flag complex.
A flag complex, $L$, is a simplicial complex which contains as many higher dimensional simplices as possible, given its 1--skeleton. In other words, whenever the complete graph on a finite subset of the vertex set of $L$ is contained in the 1--skeleton of $L$, then there is a simplex of $L$ with that set of vertices. The realisation of any poset is a flag complex (since a subset is totally ordered if any two of its members are comparable). In particular, the barycentric subdivision of any simplicial complex is a flag complex.
Given a flag complex $L$, the associated right-angled Artin group $G_L$ is the group with generators the vertices of $L$ subject only to the relations that the ends of each edge commute. There is a model for the classifying space $BG_L$ with one $n$--dimensional cubical cell for each $(n-1)$--simplex of $L$ (including one vertex corresponding to the empty simplex in $L$). Let $X_L$ denote the universal cover of this space. Cells of $X_L$ are $n$--cubes of the form $(g,v_1,\ldots,v_n)$ where $(v_1,\ldots,v_n)$ is an $(n-1)$--simplex of $L$ and $g$ is an element of $G_L$. The $i$th pair of opposite faces of this $n$--cube consists of the cubes $(g,v_1,\ldots,\hat v_i,\ldots, v_n)$ and $(gv_i,v_1,\ldots,\hat v_i,\ldots,v_n)$, where $gv_i$ is the product of two elements of $G_L$, and as usual $\hat v_i$ means `omit $v_i$'. The action of $G_L$ is given by $g'(g,v_1,\ldots,v_n)= (g'g,v_1,\ldots,v_n)$. If $\sigma= (v_1,\ldots,v_n)$ is a simplex of $L$, we will write $(g,\sigma)$ in place of $(g,v_1,\ldots,v_n)$. In particular, we will write $(g)$ for a vertex of $X_L$.
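A minimal sketch of this presentation, in Python (names illustrative): one generator per vertex of $L$ and one commutator relator per edge; since $L$ is a flag complex, no further data is needed to determine $G_L$:

```python
def raag_presentation(vertices, edges):
    # Right-angled Artin group: generators = vertices,
    # relators = commutators [u, v] for each edge (u, v).
    generators = list(vertices)
    relators = [f"[{u},{v}]" for (u, v) in edges]
    return generators, relators

# L = path a - b - c (a flag complex with no triangle):
# a commutes with b, and b with c, but a and c do not commute.
gens, rels = raag_presentation(["a", "b", "c"], [("a", "b"), ("b", "c")])
assert gens == ["a", "b", "c"]
assert rels == ["[a,b]", "[b,c]"]
```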
The space $X_L$ admits the structure of a CAT(0) cubical complex: there is a geodesic CAT(0) metric on $X_L$ in which each cubical cell is isometric to a standard Euclidean unit cube, and the action of $G_L$ is by isometries of this metric. In the case when $L$ is infinite, $X_L$ is not locally finite, and the metric topology on $X_L$ is not the same as the CW--topology, but this will not cause any difficulties.
Suppose now that $f\co L\rightarrow L'$ is a simplicial map. Then $f$ defines a group homomorphism $f_*\co G_L\rightarrow G_{L'}$, which takes the generator $v$ to the generator $f(v)$, and $f$ defines a piecewise-linear continuous map $f_!\co X_L\rightarrow X_{L'}$, which takes the vertex $(g)$ to the vertex $(f(g))$, and extends linearly across each cube. The map $f_!$ is $G_L$--equivariant, where $f_*$ is used to define the $G_L$--action on $X_{L'}$, and so induces a map from $X_L/G_L$ to $X_{L'}/G_{L'}$, which is an explicit construction for the map $B(f_*)\co BG_L\rightarrow BG_{L'}$. If $f$ is an injective simplicial map, then $f_*$ is an injective group homomorphism and $f_!$ is an isometric embedding.
Two special cases of this construction are of interest to us. Firstly, any group $\Gamma$ of automorphisms of $L$ gives rise to a group of automorphisms of $G_L$ and to a group of cellular automorphisms of $X_L/G_L$. Since the unique vertex in $X_L/G_L$ is fixed by $\Gamma$, the group of all lifts of elements of $\Gamma$ to the covering space $X_L\rightarrow X_L/G_L$ is the semi-direct product $G_L{:}\Gamma$, where $\Gamma$ acts on $G_L$ via the action described above.
Secondly, let $*$ denote a 1--point simplicial complex. For this choice of simplicial complex, $G_*$ is infinite cyclic, and $X_*$ is the real line triangulated with one orbit of vertices and one orbit of edges. For any $L$, there is a unique map $f_L\co L\rightarrow *$, and the Bestvina--Brady group $H_L$ is defined to be the kernel of $f_{L*}\co G_L\rightarrow G_*$. The map $f_{L!}\co X_L\rightarrow X_*\cong {\mathbb R}$ may be considered as defining a `height function' on $X_L$. Identifying the integers ${\mathbb Z}\subseteq {\mathbb R}$ with the vertex set in $X_*$, one sees that $f_{L!}$ sends each vertex of $X_L$ to an integer, and that each cube of $X_L$ has a unique minimal and maximal vertex for this height function. For the cube $C$, we shall write $\min(C)$ and $\max(C)$ respectively for its minimal and maximal vertices. Any simplicial map $f\co L\rightarrow L'$ fits into a commutative triangle with $f_L\co L\rightarrow *$ and $f_{L'}\co L'\rightarrow *$, and hence one obtains an induced map $f_*\co H_L\rightarrow H_{L'}$. In particular, if $\Gamma$ is a group of simplicial automorphisms of $L$, then the semi-direct product $H_L{:}\Gamma$ is defined and is equal to the kernel of the composite $G_L{:}\Gamma\rightarrow G_*\times \Gamma\rightarrow G_*$.
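Membership in $H_L$ is easy to test on words: since $f_{L*}$ sends every vertex generator to the positive generator of $G_*\cong{\mathbb Z}$, a word lies in $H_L$ exactly when its total exponent sum vanishes. A Python sketch (the word encoding is an arbitrary choice):

```python
def height(word):
    # word: list of (generator_name, exponent) pairs in G_L.
    # f_* maps every generator to 1 in Z, so the image is the exponent sum.
    return sum(e for _, e in word)

# a * b^(-2) * c has exponent sum 0, hence lies in the kernel H_L;
# a * b does not.
assert height([("a", 1), ("b", -2), ("c", 1)]) == 0
assert height([("a", 1), ("b", 1)]) == 2
```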
The work of Bestvina and Brady~\cite{bb} relies on a study of the height function $f\co X_L\rightarrow X_*={\mathbb R}$. We recall part of this, and check that it applies to the case when $L$ is infinite (which was not considered in~\cite{bb}).
Pick a point $c$ in the interior of an edge of $X_*$, and define $Y=Y_L= f^{-1}(c)\subseteq X_L$. (The point $c$ will remain fixed for the remainder of this section, but will be suppressed from the notation.) Give $Y$ the structure of a polyhedral CW--complex by taking as cells the sets $C\cap Y$ where $C$ is a cube of $X_L$. Note that the CW--structure on $Y$ gives rise to the same topology as the subspace topology coming from the CW--topology on $X$.
Now let $C$ be a cube in $X_L$ whose highest vertex is $v_1$. Define a subset $C_c$ of $C$ by $$C_c = C \cap f^{-1}([c,\infty)) = C \cap f^{-1}([c,f(v_1)]).$$ Similarly, if the lowest vertex of $C$ is $v_0$, define a subset $C^c$ by $$C^c = C \cap f^{-1}((-\infty,c]) = C \cap f^{-1}([f(v_0),c]).$$ If $C= (g,\sigma)$ for some simplex $\sigma\in L$, then the link of $v_1$ in $C$ is homeomorphic to $\sigma$. It follows that if $f(v_1)>c$, then $C_c$ is homeomorphic to the cone on $\sigma$ with vertex $v_1$. If $f(v_1)<c$, then $C_c$ is empty. Similarly, if $f(v_0)<c$ then $C^c$ is empty, and otherwise $C^c$ is homeomorphic to the cone on $\sigma$. Now for $v$ a vertex of $X_L$, define $F(v)$ by \[F(v) = \begin{cases}\bigcup_{v=\max(C)}\,\, C_c & f(v)>c\\
\bigcup_{v=\min(C)}\,\, C^c & f(v)<c\end{cases} \] For each $v$, one may show that $F(v)$ is homeomorphic to the cone on $L$ with vertex $v$. (Here, as usual, we are using the CW--topology on both $F(v)$ and the cone on $L$.) Now for $a,b\in X_*={\mathbb R}$ with $a<c<b$, define a subspace $Y_{[a,b]}$ of $X_L$ by \[Y_{[a,b]} = Y\cup \bigcup_{a\leq f(v)\leq b}F(v).\] Each $Y_{[a,b]}$ is a CW--complex, with cells the truncated cubes $C_c$, $C^c$ and $C\cap Y$ for each cube $C$ of $X_L$, and if $\alpha\leq a<c<b\leq \beta$, then $Y_{[a,b]}$ is a subcomplex of $Y_{[\alpha,\beta]}$. As $a$ decreases (resp.\ $b$ increases) the complex $Y_{[a,b]}$ only changes as $a$ (resp.\ $b$) passes through an integer. For each $a<c<b$, one has that $Y_{[a-1,b+1]}$ is homeomorphic to $Y_{[a,b]}$ with a family of subspaces homeomorphic to $L$ coned off. (There is one such cone for each vertex in $Y_{[a-1,b+1]}-Y_{[a,b]}$.) Thus one obtains the following lemma and corollary due to Bestvina--Brady~\cite{bb}, for any simplicial complex $L$.
\begin{lemma} If $L$ is contractible, then for any $a<c<b$, the inclusion of $Y$ in $Y_{[a,b]}$ is a homotopy equivalence. If $L$ is $R$--acyclic for some ring $R$, then for any $a<c<b$, the inclusion of $Y$ in $Y_{[a,b]}$ induces an isomorphism of $R$--homology. \end{lemma}
\begin{corollary} \label{besbra} If $L$ is contractible, then $Y$ is contractible. If $L$ is $R$--acyclic, then $Y$ is $R$--acyclic. \end{corollary}
\begin{proof} We know that $X_L$ is contractible, and the lemma implies that the inclusion $Y\rightarrow X_L$ is a homotopy equivalence if $L$ is contractible and is an $R$--homology isomorphism if $L$ is $R$--acyclic. \end{proof}
\begin{theorem}\label{type} Suppose that $\Gamma$ acts freely cocompactly on a simplicial complex~$L$. If $L$ is contractible, then $H_L{:}\Gamma$ is type $F$. If $L$ is $R$--acyclic, then $H_L{:}\Gamma$ is type $FL$ over $R$. \end{theorem}
\begin{proof} It follows from Corollary~\ref{besbra} that $Y$ is contractible or $R$--acyclic whenever $L$ is. Thus it suffices to show that $H_L{:}\Gamma$ acts freely cocompactly on $Y$. To see this, first note that $G_L{:}\Gamma$ has only finitely many orbits of cells in its action on $X_L$. If $C$ is an $n$--cube of $X_L$ with top vertex $v$, then $C\cap Y$ is non-empty if and only if $c<f(v)< c+n$. It follows that each $G_L{:}\Gamma$--orbit of $n$--cubes in $X_L$ gives rise to exactly $n$ $H_L{:}\Gamma$--orbits of $(n-1)$--cells in $Y$. \end{proof}
It remains to study the conjugacy classes of finite subgroups of groups of the form $H_L{:}\Gamma$ and $G_L{:}\Gamma$. In fact it is no more difficult to study conjugacy classes of subgroups $Q'$ such that $Q'\cap G_L= \{1\}$. Consider first the collection of subgroups $\Gamma'$ of $G_L{:}\Gamma$ which map isomorphically to $G_L{:}\Gamma/G_L\cong \Gamma$. The action of $\Gamma$ on $X_L/G_L$ fixes the unique vertex. It follows that each such $\Gamma'$ fixes some vertex $v$ of $X_L$. Since the vertices form a single orbit, it follows that all such $\Gamma'$ are conjugate in $G_L{:}\Gamma$.
\begin{proposition} Let $\Gamma$ act on $L$, let $Q\leq \Gamma$, and let $Q'$ be any subgroup of $G_L{:}\Gamma$ that maps isomorphically to $Q\leq \Gamma= G_L{:}\Gamma/G_L$. If $L^Q=\emptyset$, then $Q'$ fixes a unique vertex in $X_L$. If $L^Q$ contains the barycentre of an $m$--simplex, and $Q'$ fixes a vertex $(g)\in X_L$ of height $f(g)=a$, then $Q'$ also fixes a vertex of height $a+(m+1)n$ for each integer $n$. \end{proposition}
\begin{remark} Since we are not assuming that the action of $\Gamma$ on $L$ makes $L$ into a $\Gamma$--CW--complex, it is not necessarily the case that $L^Q$ is a subcomplex of $L$. However, there can be a point of $L^Q$ in the interior of the simplex $\sigma$ only if $q\sigma=\sigma$ for all $q\in Q$. In this case the barycentre of $\sigma$ is a point fixed by $Q$. \end{remark}
\begin{proof} For the first time, we shall make use of the CAT(0) metric on $X_L$. Suppose that $Q'$ fixes two distinct vertices $(g),(h)$ of $X_L$. Since geodesics in a CAT(0) metric space are unique, it follows that the geodesic arc from $(g)$ to $(h)$ is also fixed by $Q'$. The start of this arc is a straight line passing from $(g)$ into the interior of $C$, an $n$--cube of $X_L$ for some $n>0$, which must be preserved (setwise) by $Q'$. If $C=(g',v_1,\ldots,v_n)$, then it follows that the $(n-1)$--simplex $(v_1,\ldots,v_n)$ in $L$ is (setwise) preserved by $Q$, and hence that $L^Q\neq \emptyset$.
For the second statement, suppose that $(g)$ is fixed by $Q'$, and that the $m$--simplex $(v_0,\ldots,v_m)$ in $L$ is setwise fixed by $Q$. Then the long diagonal from $(g)$ to $(gv_0v_1\cdots v_m)$ in the $(m+1)$--cube $(g,v_0,\ldots,v_m)$ is an arc fixed by $Q'$, which connects two vertices whose heights differ by $m+1$. It follows that for any $n$, the vertex $(g(v_0v_1\cdots v_m)^n)$ is fixed by $Q'$. \end{proof}
\begin{theorem} \label{conjclass} Let $\Gamma$ act on $L$, and let $Q\leq \Gamma$. If $L^Q=\emptyset$, then there are infinitely many conjugacy classes of subgroups $Q'$ of $H_L{:}\Gamma$ whose members map isomorphically to conjugates of $Q$ in $\Gamma$. If $L^Q$ contains the barycentre of an $m$--simplex, then there are at most $m+1$ conjugacy classes of such $Q'$ in $H_L{:}\Gamma$. In particular, if $L^Q$ contains a vertex of $L$, then any two such subgroups are conjugate. \end{theorem}
\begin{proof} We know that any such $Q'$ fixes a vertex of $X_L$ and that every vertex is fixed by some such $Q'$. In the case when $L^Q=\emptyset$, each $Q'$ fixes exactly one vertex of $X_L$. Since vertices of different heights are in different orbits for the action of $H_L{:}\Gamma$, it follows that in this case there are infinitely many conjugacy classes of such $Q'$.
In general, $H_L$ acts transitively on the vertices of fixed height. If $L^Q$ contains the barycentre of an $m$--simplex, and $Q'$ fixes a vertex of height $a$, then $Q'$ also fixes a vertex of height $a+(m+1)n$ for each $n$. Hence given any $(m+2)$ subgroups of $H_L{:}\Gamma$ which map isomorphically to $Q$ or one of its conjugates, some pair $Q'$, $Q''$ of these subgroups must fix vertices of the same height. Let $\Gamma'\geq Q'$ and $\Gamma''\geq Q''$ be the stabilizers of these vertices, which map isomorphically to $\Gamma$. The subgroups $\Gamma'$ and $\Gamma''$ are conjugate by an element of $H_L$. Hence it follows that $Q'$ and $Q''$ are conjugate by some element of $H_L{:}\Gamma$. \end{proof}
\section{Group actions}
\label{final} Here we construct the actions of finite groups $Q$ and direct products of the form ${\mathbb Z}\times Q$ on finite-dimensional simplicial complexes that are needed in order to apply the constructions of the previous section. The first two propositions are included to show why actions of finite groups alone cannot give all the examples that we need.
\begin{proposition} Suppose that $Q$ is a finite group with normal subgroups $P\leq P'$, so that $P$ and $Q/P'$ are groups of prime power order and $P'/P$ is cyclic. For any action of $Q$ on a finite contractible complex $L$, the fixed point set $L^Q$ is non-empty. \end{proposition}
\begin{proof} Let $p$ and $q$ be the primes (not necessarily distinct)
so that $|P|$ is a power of $p$ and $|Q:P'|$ is a power of $q$. Let $C$ denote $P'/P$, and let $Q'$ denote $Q/P'$.
By P\,A Smith theory~\cite[VII.10.5]{brown}, the fixed point set $L'=L^P$ has the same mod-$p$ homology as a point, and hence has Euler characteristic equal to~1. By character theory, it follows that the Euler characteristic of $L''=L^{P'}= {L'}^C$ is equal to~1. By counting lengths of orbits of cells, one sees that $L^Q= {L''}^{Q'}$ has Euler characteristic congruent to~1 modulo~$q$. This implies that $L^Q$ is not empty. \end{proof}
The above proof also gives:
\begin{proposition} Let $Q$ be a finite group with a normal cyclic subgroup $P'$ so that $Q/P'$ is a group of prime power order. For any action of $Q$ on a finite complex $L'$ with Euler characteristic $\chi(L')=1$, the fixed point set ${L'}^Q$ is non-empty. \end{proposition}
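Both propositions apply, for instance, to $Q=S_3$ (this worked example is ours, not part of the original text):

```latex
% S_3 satisfies the hypotheses of both propositions:
\[
  P = \{1\}, \qquad P' = A_3 \cong {\mathbb Z}/3 \ \text{(cyclic)}, \qquad
  Q/P' = S_3/A_3 \cong {\mathbb Z}/2 ,
\]
% so every action of S_3 on a finite contractible complex (or on a
% finite complex of Euler characteristic 1) has a fixed point; a
% fixed-point-free action of S_3 on a contractible complex, as
% produced by Theorem \ref{contr} below, cannot be on a finite complex.
```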
The actions on ${\mathbb Q}$--acyclic spaces that we shall need will all come from Theorem~\ref{qacyc}. In the proof of Theorem~\ref{qacyc} we shall need Lemma~\ref{wall} concerning Wall's finiteness obstruction.
Suppose that $G$ is a group of type $FP$ over the ring $R$, and that $$0\rightarrow P_n\rightarrow \cdots \rightarrow P_1\rightarrow P_0 \rightarrow R\rightarrow 0$$ is a resolution of $R$ over $RG$ by finitely generated projectives. As usual, let $K_0(RG)$ denote the Grothendieck group of finitely-generated projective $RG$--modules. The Wall obstruction or Euler characteristic of $G$ over $R$ is the element of $K_0(RG)$ given by the alternating sum $$w(R,G)= \sum_i (-1)^i[P_i]$$ and is independent of the choice of resolution \cite[I.7]{rosenberg}.
\begin{lemma} Let $Q$ be a finite group. The group $G={\mathbb Z}\times Q$ is $FP$ over ${\mathbb Q}$, and the Wall obstruction for this group is zero. \label{wall} \end{lemma}
\begin{proof} Let the group $G={\mathbb Z}\times Q$ act on the real line via the projection $G\rightarrow {\mathbb Z}$. There is a $G$--equivariant triangulation of the line with one orbit of 0--cells of type $G/Q$ and one orbit of 1--cells, also of type $G/Q$. The cellular chain complex for this space gives a projective resolution for ${\mathbb Q}$ over ${\mathbb Q} G$ of length one: $$0\rightarrow {\mathbb Q} G/Q \rightarrow {\mathbb Q} G/Q \rightarrow {\mathbb Q} \rightarrow 0,$$ in which the modules in degrees 0 and 1 are isomorphic to each other. (Each is projective, since ${\mathbb Q} Q$ is semisimple and induction preserves projectivity.) Hence $w({\mathbb Q},G)= [{\mathbb Q} G/Q]-[{\mathbb Q} G/Q]=0$. \end{proof}
\begin{theorem} Let $Q$ be a finite group, and let $\cal F$ be a non-empty family of subgroups of $Q$ which is closed under conjugation and inclusion. There is a 3--dimensional ${\mathbb Q}$--acyclic simplicial complex $L$ admitting a cocompact action of $\Gamma={\mathbb Z}\times Q$ so that all cell stabilizers are finite and so that $P\leq Q$ fixes some point of $L$ if and only if $P\in \cal F$. \label{qacyc} \end{theorem}
\begin{proof} Let $\Delta$ be a finite set with a $Q$--action, such that $\Delta^P\neq \emptyset$ if and only if $P\in \cal F$, and let $Z=Q*Q*\Delta$ be the join of two copies of $Q$ and one copy of $\Delta$, with the diagonal action of $Q$. This $Z$ is a 2--dimensional simply-connected $Q$--space, with the property that the $Q$--action is free except on the 0--skeleton. Let ${\mathbb Z}$ act on ${\mathbb R}$ in the usual way, and let $L_0$ be the product ${\mathbb R}\times Z$ with the product action of $\Gamma={\mathbb Z}\times Q$. Now let $L_1$ be the 2--skeleton of $L_0$. The cells of $L_1$ in non-free orbits form a copy of ${\mathbb R}\times \Delta$ with the product action of $\Gamma$. Let $C_*$ be the rational chain complex for $L_1$. Since $L_1$ is 1--connected, $C_*$ forms the start of a projective resolution for ${\mathbb Q}$ over ${\mathbb Q}\Gamma$. As ${\mathbb Q}\Gamma$--modules, $C_2$ is free and each of $C_1$ and $C_0$ is the direct sum of a free module and a copy of ${\mathbb Q}[{\mathbb Z}\times\Delta]$. Hence the element of $K_0({\mathbb Q}\Gamma)$ defined by the alternating sum $[C_2]-[C_1]+[C_0]$ is in the subgroup of $K_0({\mathbb Q}\Gamma)$ generated by the free module. Since we know by Lemma~\ref{wall} that the Wall obstruction for $\Gamma$ over ${\mathbb Q}$ is zero, it follows that $H_2(C_*)$ is a stably-free ${\mathbb Q}\Gamma$--module. Make $L_2$ by attaching finitely many free $\Gamma$--orbits of 2--spheres to $L_1$ in such a way that $H_2(L_2;{\mathbb Q})$ is a free ${\mathbb Q}\Gamma$--module. Let $c_1,\ldots,c_k$ be cycles in $C_2(L_2,{\mathbb Q})$ representing a ${\mathbb Q}\Gamma$--basis for $H_2(L_2;{\mathbb Q})$, and pick a large integer $M$ so that each $M.c_i$ is an integral cycle. Since $L_2$ is 1--connected, each $M.c_i$ is realized by the image of the fundamental class for $S^2$ under some map $f_i\co S^2\rightarrow L_2$. 
Now define $L_3$ by attaching free $\Gamma$--orbits of 3--balls to $L_2$, using the $f_i$ as attaching maps for orbit representatives. This $L_3$ has all of the required properties, except that it has been constructed as a $\Gamma$--CW--complex rather than as a $\Gamma$--simplicial complex. By the simplicial approximation theorem, we can construct a 3--dimensional $\Gamma$--simplicial complex $L$ together with an equivariant homotopy equivalence $L\rightarrow L_3$. \end{proof}
Before stating and proving Theorem~\ref{contr}, which will provide all the actions on contractible spaces that we shall need, we begin by establishing some notation, and proving a lemma concerning equivariant self-maps of spheres in linear representations. Lemma~\ref{sphe} and Theorem~\ref{contr} were shown to the author by Bob Oliver.
Let $S$ denote the unit sphere in ${\mathbb C}^n$, so that $S$ is a sphere of odd dimension. For $x\in S$, let $T_xS$ be the tangent space to $S$ at $x$, and let $B_x$ be the closed unit ball in $T_xS$, with boundary $\partial B_x$. For $\epsilon>0$, let $e_{\epsilon,x}\co B_x\rightarrow S$ denote the scalar multiple of the exponential map such that the image of $B_x$ is a ball of radius $\epsilon$ in $S$. In the case when $\epsilon=\pi$, this map sends the whole of $\partial B_x$ to the point $-x$. The cases of interest to us include the case $\epsilon =\pi$ and the case when $\epsilon$ is small. Suppose that a finite group $P$ acts linearly on $S$, fixing the point $x$. This induces a $P$--action on $T_xS$, and the exponential map $e_{\epsilon,x}$ is $P$--equivariant in the sense that $e_{\epsilon,x}(gv)= ge_{\epsilon,x}(v)$ for all $v\in B_x$ and all $g\in P$.
Each of the self-maps of spheres that we shall construct will have the property that it is equal to the identity except on a number of small balls. For such a map $f\co S\rightarrow S$, its support, ${\mathrm{supp}}(f)$, is defined to be the closure of the set of points $x\in S$ so that $f(x)\neq x$. Given another such map $f'\co S\rightarrow S$ with ${\mathrm{supp}}(f)\cap {\mathrm{supp}}(f')=\emptyset$, the map $f\coprod f'$ is defined by $$f\textstyle{\coprod} f' (x) = \begin{cases} f(x) & \,\, x\in {\mathrm{supp}}(f)\cr f'(x) & \,\, x\in {\mathrm{supp}}(f')\cr x&\,\, x\notin {\mathrm{supp}}(f)\cup {\mathrm{supp}}(f'). \end{cases} $$ Suppose that a group $Q$ acts on $S$. For $f\co S\rightarrow S$ a self-map of $S$ and $g\in Q$, define $g*f(s) = g(f(g^{-1}(s)))$. The support of $g*f$ is equal to $g.{\mathrm{supp}}(f)$.
For $x\in S$, let $r\co (B_x,\partial B_x)\rightarrow (B_x,\partial B_x)$ be any map of degree $-1$, for example a reflection in a hyperplane through $0$ in $B_x$. Define $\tilde\phi_x,\tilde\psi_x: B_x\rightarrow S$ by \begin{eqnarray*} \tilde\phi_x(v) = \begin{cases}
-e_{\pi,x}(2v)& |v| \leq 1/2\cr
e_{\epsilon,x}\bigl(2(|v|-1/2)v\bigr) & |v| \geq 1/2\cr \end{cases} \cr \tilde\psi_x(v) = \begin{cases}
-e_{\pi,x}(r(2v))& |v| \leq 1/2\cr
e_{\epsilon,x}\bigl(2(|v|-1/2)v\bigr) & |v| \geq 1/2\cr \end{cases} \cr \end{eqnarray*} and define self-maps $\phi_{\epsilon,x}$ and $\psi_{\epsilon,x}$ to be the identity outside of the image of $e_{\epsilon,x}$ and equal to $\tilde\phi_x\circ e_{\epsilon,x}^{-1}$ and $\tilde\psi_x\circ e_{\epsilon,x}^{-1}$ respectively on their supports. (The outer branch agrees with $e_{\epsilon,x}$ on $\partial B_x$, so each self-map glues continuously with the identity.) If $f$ is a self-map of $S$ of degree $n$ whose support is disjoint from the $\epsilon$--ball around $x$, then $f\coprod \phi_{\epsilon,x}$ is a self-map of degree $n+1$ and $f\coprod \psi_{\epsilon,x}$ is a self-map of degree $n-1$.
Suppose that a finite group $Q$ acts linearly on $S$, so that the distance between any two points of the orbit $Q.x$ is greater than $2\epsilon$. If $g\in Q$, then $g*\phi_{\epsilon,x}$ and $\phi_{\epsilon,g.x}$ are equal. In particular, if $g$ is an element of $Q_x$, the stabilizer of the point $x$, then the equation $g*\phi_{\epsilon,x} = \phi_{\epsilon,x}$ holds. Since the definition of $\psi$ involved an arbitrary choice of function $r$, there is no corresponding equivariance property for the $\psi$ self-maps. However, the map $g*\psi_{\epsilon,x}$ is a self-map whose support is the $\epsilon$--ball in $S$ centred at $g.x$, and if $f$ is any self-map of $S$ whose support is disjoint from this ball, the coproduct $f\coprod g*\psi_{\epsilon,x}$ is a self-map whose degree is one less than that of $f$.
For any $x\in S$, define $$Q.\phi_{\epsilon,x}= \coprod_{g\in Q/Q_x} g*\phi_{\epsilon,x},$$ for any sufficiently small $\epsilon$, where the coproduct runs over a transversal to $Q_x$ in $Q$. For $x$ in a free $Q$--orbit, define $$Q.\psi_{\epsilon,x} = \coprod_{g\in Q} g*\psi_{\epsilon,x},$$ for small $\epsilon$. Each of these maps is $Q$--equivariant.
\begin{lemma} Let $S$ be the unit sphere in a complex representation
of the finite group $Q$, and suppose that $S$ contains points in $Q$--orbits of coprime lengths. Then $S$ admits a $Q$--equivariant self-map of degree zero. \label{sphe} \end{lemma}
\begin{proof} Without loss of generality, we may suppose that $Q$ acts faithfully on $S$. The action of the unit circle in ${\mathbb C}$ on $S$ commutes with the $Q$--action, and so whenever $S$ contains a $Q$--orbit of a given length, $S$ contains infinitely many $Q$--orbits of that length. Pick points $x_1,\ldots,x_m$
in distinct $Q$--orbits, such that the sum of the lengths of the orbits is congruent to $-1$ modulo $|Q|$, ie, so that there exists $n$ with
$$|Q|n = 1+ \sum_{i=1}^m |Q.x_i|.$$ Now pick $y_1,\ldots, y_n$ in distinct free $Q$--orbits. Choose $\epsilon$ sufficiently small that any two points in any of these orbits are separated by more than $2\epsilon$. The coproduct $$f= \coprod_{i=1}^mQ.\phi_{\epsilon,x_i} {\textstyle{\coprod}} \coprod_{j=1}^n Q.\psi_{\epsilon,y_j}$$ is the required degree zero map. \end{proof}
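For a concrete instance of this counting (our worked example, not from the text), take $Q=S_3$ acting on the unit sphere of its reduced regular representation, which contains orbits of the coprime lengths $2$ and $3$:

```latex
% Orbit counting for Q = S_3, |Q| = 6:
\[
  |Q.x_1| + |Q.x_2| = 2 + 3 = 5 \equiv -1 \pmod{6},
  \qquad |Q|\cdot n = 6\cdot 1 = 1 + 5 ,
\]
% so m = 2 and n = 1: the phi-maps along the two orbits and the
% psi-maps along one free orbit change the degree of the identity by
% 2 + 3 - 6, producing an equivariant self-map of degree 0.
```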
\begin{theorem}\label{contr} Let $Q$ be a finite group not of prime power order. Then there exists a finite-dimensional contractible simplicial complex $L$ with a cocompact action of ${\mathbb Z}\times Q$ such that all stabilizers are finite and such that $L^Q=\emptyset$. Furthermore, $L$ may be chosen in such a way that $L^P\neq \emptyset$ for $P$ any proper subgroup of $Q$. \end{theorem}
\begin{proof} Let $S$ be the unit sphere in the `reduced regular
representation of $Q$', ie, the regular representation ${\mathbb C} Q$
minus the trivial representation. This $S$ has the property that $S^Q=\emptyset$ but $S^P\neq \emptyset$ for any proper subgroup $P<Q$. Since $Q$ is not of prime power order, $S$ satisfies the
hypotheses of Lemma~\ref{sphe}, and so there exists a
$Q$--equivariant map $f\co S\rightarrow S$ of degree zero.
Take a $Q$--equivariant triangulation of the space $I\times S$, where $Q$ acts trivially on the interval $I$. By the simplicial approximation theorem, there is an integer $n\geq 0$ and a simplicial map $f'\co \{1\}\times S^{(n)}\rightarrow \{0\}\times S$ which is equivariantly homotopic to $f$. Now let $M$ be the $n$th barycentric subdivision of $I\times S$ relative to $\{0\}\times S$. This is a copy of $I\times S$, with the original triangulation on the subspace $\{0\}\times S$ and the $n$th barycentric subdivision of this triangulation on $\{1\}\times S$. Construct $L$ from the direct product ${\mathbb Z}\times M$ by identifying $(m,1,s)$ with $(m+1,0,f'(s))$ for each $s\in S$ and $m\in {\mathbb Z}$. This space $L$ is a triangulation of the doubly infinite mapping telescope of the map $f'\co S\rightarrow S$. The fact that $f'$ has degree zero implies that $L$ is contractible. \end{proof}
One difference between Theorem~\ref{qacyc} and Theorem~\ref{contr} is that the dimension of the space constructed in Theorem~\ref{contr} varies with $Q$. The final results in this section show that this difference cannot be avoided.
\begin{lemma} Let $Q$ be the special linear group $SL_n({\mathbb F}_p)$ over the
field of $p$ elements. Let $e_1,\ldots,e_n$ be the standard basis for the vector space ${\mathbb F}_p^n$. Define elements $\tau_1,\ldots,\tau_n\in Q$ by $$\tau_i(e_j) = \begin{cases} e_j & i\neq j\cr e_i+ e_{i+1} & i=j<n\cr e_n+e_1 & i=j=n.\cr \end{cases} $$ The elements $\tau_1,\ldots,\tau_n$ generate $Q$, and any proper subset of them generates a subgroup of order a power of $p$. \label{gens} \end{lemma}
\begin{proof} Let $\theta$ be the cyclic permutation of the $n$ standard basis elements, so that $\theta(e_i) = e_{i+1}$ for $i<n$ and $\theta(e_n)=e_1$. The action of $\theta$ on $Q$ by conjugation induces a cyclic permutation of the $\tau_i$.
The elements $\tau_1,\ldots,\tau_{n-1}$ generate the group of upper triangular matrices with all diagonal entries equal to~1, which forms a Sylow $p$--subgroup of $Q$. This group contains each of the elementary matrices $E_{i,j}$ for $i<j$, defined by $$E_{i,j}(e_k) = \begin{cases} e_k & k\neq i\cr e_k + e_j & k = i.\cr \end{cases}$$ Conjugation by powers of $\theta$ induces a transitive permutation of the $(n-1)$--element subsets of $\{\tau_1,\ldots,\tau_n\}$. Hence one sees that each such subset generates a Sylow $p$--subgroup of $Q$.
It is well-known that the elementary matrices $E_{i,j}$ for all $i\neq j$ form a generating set for $Q$. Each elementary matrix may be expressed as the conjugate of an upper triangular elementary matrix by some power of $\theta$. It follows that the subgroup generated by $\tau_1,\ldots,\tau_n$ contains all elementary matrices and so is equal to $Q$. \end{proof}
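The lemma lends itself to a direct machine check in small cases. The following sketch (ours, not part of the original argument; all names are our own) verifies it for $n=3$, $p=2$, where $Q=SL_3({\mathbb F}_2)$ has order $168$ and its Sylow $2$--subgroups have order $8$:

```python
# Machine check of the lemma for n = 3, p = 2, i.e. Q = SL_3(F_2):
# |SL_3(F_2)| = (8-1)(8-2)(8-4) = 168; a Sylow 2-subgroup has order 2^3 = 8.
p, n = 2, 3

def mat_mul(A, B):
    """Multiply two n-by-n matrices over F_p (tuples of tuples)."""
    return tuple(tuple(sum(A[i][k] * B[k][j] for k in range(n)) % p
                       for j in range(n)) for i in range(n))

def generated(gens):
    """Subgroup generated by gens: close under right multiplication
    (a non-empty finite subsemigroup of a group is a subgroup)."""
    seen = set(gens)
    frontier = list(gens)
    while frontier:
        new = []
        for A in frontier:
            for B in gens:
                C = mat_mul(A, B)
                if C not in seen:
                    seen.add(C)
                    new.append(C)
        frontier = new
    return seen

# tau_i fixes e_j for j != i and sends e_i to e_i + e_{i+1} (indices mod n),
# so its matrix is the identity plus a single off-diagonal entry.
taus = []
for i in range(n):
    M = [[int(r == c) for c in range(n)] for r in range(n)]
    M[(i + 1) % n][i] = 1
    taus.append(tuple(map(tuple, M)))

assert len(generated(taus)) == 168      # the three tau_i generate all of Q
for omit in range(n):                   # any two of them generate a 2-group,
    sub = generated([t for j, t in enumerate(taus) if j != omit])
    assert len(sub) == 8                # in fact a Sylow 2-subgroup
```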
\begin{theorem} \label{slnp} As in the previous lemma, let $Q=SL_n({\mathbb F}_p)$.
Suppose that $L$ is contractible, or that $L$ is mod-$p$ acyclic, and that $Q$ acts on $L$ so that $L^Q=\emptyset$. Then the dimension of $L$ is at least $n-1$. \end{theorem}
\begin{proof} We may assume that $L$ is finite-dimensional, or there
is nothing to prove. Let $L_i$ be the fixed point subspace for the action of
$\tau_i$. By P\,A Smith theory, the fixed point set for the action of a $p$--group on a finite-dimensional mod-$p$ acyclic space is itself mod-$p$ acyclic. From Lemma~\ref{gens} it follows that each intersection of at most $n-1$ of the $L_i$ is mod-$p$ acyclic, and that the intersection $L_1\cap\ldots \cap L_n$ is empty. Let $X$ be the union of the $L_i$. The Mayer--Vietoris spectral sequence for the covering of $X$ by the $L_i$ with mod-$p$ coefficients is isomorphic to the spectral sequence for the covering of the boundary of an $(n-1)$--simplex by its faces. It follows that the mod-$p$ homology of $X$ is isomorphic to the mod-$p$ homology of an $(n-2)$--sphere. Hence $X$ cannot be a subspace of a mod-$p$ acyclic space of dimension strictly less than~$n-1$. \end{proof}
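The Euler characteristics here can be checked by a small inclusion--exclusion computation (our illustration, treating each acyclic intersection of at most $n-1$ of the $L_i$ as contributing $\chi=1$ and the total intersection as empty): the union then has $\chi(X)=\sum_{k=1}^{n-1}(-1)^{k-1}\binom{n}{k}=1+(-1)^n$, which is indeed $\chi(S^{n-2})$.

```python
# Inclusion-exclusion check: if every intersection of at most n-1 of the
# subspaces L_1, ..., L_n has Euler characteristic 1 and the n-fold
# intersection is empty, the union X has chi(X) = chi(S^{n-2}).
from math import comb

for n in range(2, 20):
    chi_union = sum((-1) ** (k - 1) * comb(n, k) for k in range(1, n))
    chi_sphere = 1 + (-1) ** (n - 2)   # Euler characteristic of S^(n-2)
    assert chi_union == chi_sphere
```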
\begin{remark} For a discrete group $G$, the minimal dimension of any contractible simplicial complex admitting a $G$--action without a global fixed point is an interesting invariant of $G$. The above theorem shows that this invariant can take arbitrarily large finite values. When $G$ is a finite group of prime power order, the invariant takes the value infinity. Peter Kropholler has asked whether there are any other finitely generated groups $G$ for which the invariant takes the value infinity. \end{remark}
\section{Examples} \label{examps}
Here we combine the results of Sections \ref{bebr}~and~\ref{final} to construct groups with strong homological finiteness properties that contain infinitely many conjugacy classes of certain finite subgroups.
\begin{theorem} \label{ivf} Let $Q$ be a finite group not of prime power order. There is a group $H$ of type $F$ and a group $G=H{:} Q$ such that $G$ contains infinitely many conjugacy classes of subgroups isomorphic to $Q$ and finitely many conjugacy classes of other finite subgroups. \end{theorem}
\begin{proof} By Theorem~\ref{contr}, there is a contractible finite-dimensional simplicial complex $L$ with a cocompact action of ${\mathbb Z}\times Q$ such that all stabilizers are finite, $L^Q=\emptyset$ and $L^P\neq \emptyset$ if $P<Q$. Take a flag triangulation of $L$, and consider the Bestvina--Brady group $H_L$. By Theorem~\ref{type}, the semi-direct product $H=H_L{:} {\mathbb Z}$ is of type $F$. By Theorem~\ref{conjclass} the group $G=H_L{:}({\mathbb Z}\times Q)$ contains infinitely many conjugacy classes of subgroups isomorphic to $Q$ and finitely many conjugacy classes of other finite subgroups. \end{proof}
We can now prove Theorem~\ref{freeprod} as stated in the introduction. We first give a lemma concerning free products.
\begin{lemma} \label{prodlem} Let $G=G_1*\cdots *G_n$ be a free product of groups, and let $H_i$ be a finite-index normal subgroup of $G_i$. There is a bijection between conjugacy classes of non-trivial finite subgroups of $G$ and the disjoint union of the sets of conjugacy classes of non-trivial finite subgroups of the $G_i$. The kernel of the map from $G$ to $\prod_i G_i/H_i$ is isomorphic to the free product of finitely many copies of the $H_i$ and a finitely-generated free group. \end{lemma}
\begin{proof} Take a classifying space $BG_i$ for each $G_i$, take a star-shaped tree with $n$ edges whose central vertex has valency $n$, and make a classifying space $BG$ for $G$ by attaching the given $BG_i$ to the $i$th boundary vertex of the tree. Now consider the regular covering of this space $BG$ corresponding to the kernel of the homomorphism $G\rightarrow \prod_i G_i/H_i$. This is a finite covering. The subspace of this covering lying above each $BG_i$ is a finite disjoint union of copies of $BH_i$, and the subspace lying above the tree is a finite disjoint union of trees. Hence the whole space, which is a classifying space for the kernel, consists of a finite number of copies of the $BH_i$'s, connected together by a finite number of trees. The fundamental group of such a space is the free product of finitely many copies of the $H_i$ and a finitely-generated free group.
For the claimed result concerning conjugacy classes of finite subgroups, we consider the tree obtained from the given expression for $G$ as a free product. One way to construct this tree is by considering the universal covering space of the model for $BG$ given above. This consists of copies of the $EG_i$'s, connected together by trees. Now contract each copy of $EG_i$ to a point. The resulting $G$--space is contractible (since replacing $EG_i$ by a single point does not change its homotopy type) and is 1--dimensional. It is therefore a $G$--tree, with $n+1$ orbits of vertices and $n$ orbits of edges. Each edge orbit is free, one of the vertex orbits is free, and there is one vertex orbit of type $G/G_i$ for each $1\leq i\leq n$. Whenever a finite group acts on a tree, it has a fixed point. (To see this, take the finite subtree spanning an orbit, and peel off orbits of `leaves' until the remainder is fixed.) Since the stabilizer of each edge is trivial, it follows that each non-trivial finite subgroup of $G$ must fix exactly one vertex of the tree. This implies that each non-trivial finite subgroup of $G$ is conjugate to a subgroup of exactly one of the $G_i$, and that two finite subgroups of $G_i$ are conjugate in $G$ if and only if they were already conjugate in $G_i$. \end{proof}
\begin{proof}[Proof of Theorem~\ref{freeprod}] Let ${\cal Q}=\{Q_1,\ldots,Q_n\}$ be a finite list of isomorphism types of finite groups not of prime power order. For each $Q_i$, let $G_i=H_i{:} Q_i$ be a group as in Theorem~\ref{ivf}. Let $G$ be $G_1*\cdots * G_n$, the free product of the $G_i$. By Lemma~\ref{prodlem}, the group $G$ is of type $VF$, contains infinitely many conjugacy classes of subgroups isomorphic to each $Q_i$, and contains finitely many conjugacy classes of finite subgroups of all other isomorphism types. \end{proof}
\begin{theorem} \label{fpq2} Let $Q$ be a non-trivial finite group. There exists a group $G=H{:} Q$ of type $FP$ over ${\mathbb Q}$, containing infinitely many conjugacy classes of subgroups isomorphic to $Q$ and finitely many conjugacy classes of other finite subgroups. Furthermore, $H$ is torsion-free, has rational cohomological dimension at most~4 and has integral cohomological dimension at most~5. \end{theorem}
\begin{proof} By Theorem~\ref{qacyc} there is a 3--dimensional ${\mathbb Q}$--acyclic simplicial complex $L$ with a cocompact ${\mathbb Z}\times Q$--action such that all stabilizers are finite, $L^Q=\emptyset$ and $L^P\neq \emptyset$ if $P<Q$. Take a flag triangulation of $L$, and consider the Bestvina--Brady group $H_L$. By Theorem~\ref{type}, the semi-direct product $H=H_L{:} {\mathbb Z}$ is $FP$ over ${\mathbb Q}$. By Theorem~\ref{conjclass} the group $G=H_L{:}({\mathbb Z}\times Q)$ contains infinitely many conjugacy classes of subgroups isomorphic to $Q$ and finitely many conjugacy classes of other finite subgroups. The rational cohomological dimension of $H_L$ is at most the dimension of the ${\mathbb Q}$--acyclic space $Y$ appearing in Section~\ref{bebr}, which is equal to the dimension of $L$, and the integral cohomological dimension of $H_L$ is at most the dimension of the space $X_L$, which is one more than the dimension of $L$. The cohomological dimension of $H_L{:} {\mathbb Z}$ over any ring is at most one more than the cohomological dimension of $H_L$ over the same ring. \end{proof}
\begin{proof}[Proof of Theorem~\ref{freepq}] For each $Q_i\in \cal Q$, Theorem~\ref{fpq2} gives a group $G_i$ of type $FP$ over ${\mathbb Q}$ containing infinitely many conjugacy classes of subgroups isomorphic to $Q_i$ and only finitely many conjugacy classes of other finite subgroups. By Lemma~\ref{prodlem}, the free product $G=G_1*\cdots *G_n$ is $FP$ over ${\mathbb Q}$, contains infinitely many conjugacy classes of subgroups isomorphic to each $Q_i\in \cal Q$, and contains finitely many conjugacy classes of all other finite subgroups. \end{proof}
\begin{remark} One difference between Theorems \ref{freeprod}~and~\ref{freepq} is that each of the groups constructed in Theorem~\ref{freepq} has virtual cohomological dimension at most~5, whereas the virtual cohomological dimensions of the groups constructed in Theorem~\ref{freeprod} seem to depend on the list $\cal Q$. We do not know whether this necessarily happens, but the following proposition may be relevant. \end{remark}
\begin{proposition} \label{dimprop} Suppose that $G$ contains infinitely many conjugacy classes of subgroups isomorphic to $SL_n({\mathbb F}_p)$, and that $G$ acts cocompactly with finite stabilizers on a mod-$p$--acyclic simplicial complex $X$. Then $X$ must have dimension at least $n-1$. \end{proposition}
\begin{proof} There are only finitely many orbits of cells in $X$, and hence only finitely many conjugacy classes of subgroups of $G$ can fix some point of $X$. It follows that there is a subgroup isomorphic to $SL_n({\mathbb F}_p)$ that has no fixed point, and we may apply Theorem~\ref{slnp} to deduce the required result. \end{proof}
\begin{remark} \label{qremark} If $G$ is virtually torsion-free and acts cocompactly with finite stabilizers on a contractible simplicial complex $X$, then $G$ is of type $VF$. It seems to be unknown whether every group of type $VF$ admits such an action. It also seems to be unknown whether every group of type $FL$ over a prime field $F$ admits a free cocompact action on an $F$--acyclic simplicial complex $X$. If $F$ is not assumed to be a prime field, then there are counterexamples. In~\cite{epdn} we exhibited a group which is $FL$ over ${\mathbb C}$ but which is not $FL$ over ${\mathbb R}$. This group cannot admit a cocompact free action on any ${\mathbb C}$--acyclic simplicial complex $X$. \end{remark}
We conclude this section with a brief discussion of the Grothendieck group $K_0({\mathbb Q} G)$ of finitely generated projective modules for ${\mathbb Q} G$ and its connection with conjugacy classes of elements of finite order in $G$. First, we recall the definition of the Hattori--Stallings trace~\cite{bass}.
For any ring $R$, let $T(R)$ denote the quotient of $R$ by the additive subgroup generated by commutators of the form $rs-sr$ for $r,s\in R$. For a square matrix $A$ with coefficients in $R$, the Hattori--Stallings trace ${\mathrm{tr}}(A)$ is the element of $T(R)$ defined as the equivalence class containing the sum of the diagonal entries of $A$. As an element of $T(R)$, this satisfies the usual trace condition ${\mathrm{tr}}(AB)={\mathrm{tr}}(BA)$ for any matrices $A$ and $B$.
Now suppose that $P$ is a finitely generated projective $R$--module, and that $P$ is isomorphic to a summand of $R^n$. Pick an idempotent $n\times n$ matrix $e_P$ whose image is isomorphic to $P$. The Hattori--Stallings rank of $P$ is defined to be ${\mathrm{tr}}(e_P)$. It may be shown that this is independent of the choice of $n$ and $e_P$. The Hattori--Stallings rank defines a group homomorphism from $K_0(R)$ to $T(R)$.
\begin{theorem}\label{bass} For any group $G$, there is a subgroup of $K_0({\mathbb Q} G)$ which is free abelian of rank equal to the number of conjugacy classes of finite cyclic subgroups of $G$. \end{theorem}
\begin{proof} For the group algebra ${\mathbb Q} G$, the group $T({\mathbb Q} G)$ is the ${\mathbb Q}$--vector space with basis the conjugacy classes of elements of $G$. For any finite cyclic subgroup $C\leq G$, define an element $e_C\in {\mathbb Q} G$ by
$$e_C= \frac{1}{|C|}\sum_{g\in C} g.$$ The element $e_C$ is an idempotent, and the ${\mathbb Q} G$--module $P_C$ defined by $P_C={\mathbb Q} Ge_C$ is a projective ${\mathbb Q} G$--module. With respect to the basis for $T({\mathbb Q} G)$ given by the conjugacy classes of elements of $G$, the non-zero coefficients in the Hattori--Stallings trace for $e_C$ are those corresponding to elements of $C$. If $C_1,\ldots, C_n$ are pairwise non-conjugate finite cyclic subgroups of $G$, it follows that the Hattori--Stallings traces of $e_{C_1},\ldots,e_{C_n}$ are linearly independent. Hence the projectives of the form $P_C$ generate a subgroup of $K_0({\mathbb Q} G)$ which is free abelian of rank equal to the number of conjugacy classes of finite cyclic subgroups of $G$. \end{proof}
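To make the trace computation concrete, here is a small sketch (ours; the helper names are our own) computing $e_C$ and its Hattori--Stallings trace for the cyclic subgroup $C=A_3$ of $G=S_3$: the coefficients are $1/3$ on the class of the identity and $2/3$ on the class of $3$--cycles, precisely the classes of elements of $C$.

```python
# e_C and its Hattori-Stallings trace for C = A_3 inside the group
# algebra Q[S_3]; algebra elements are dicts {permutation: coefficient}.
from fractions import Fraction
from itertools import permutations

def compose(g, h):                     # (g h)(i) = g(h(i))
    return tuple(g[h[i]] for i in range(3))

def inv(g):
    out = [0, 0, 0]
    for i, j in enumerate(g):
        out[j] = i
    return tuple(out)

def mult(a, b):                        # product in Q[S_3]
    out = {}
    for g, x in a.items():
        for h, y in b.items():
            k = compose(g, h)
            out[k] = out.get(k, Fraction(0)) + x * y
    return {g: c for g, c in out.items() if c}

S3 = list(permutations(range(3)))
sigma = (1, 2, 0)                      # a 3-cycle
C = [(0, 1, 2), sigma, compose(sigma, sigma)]   # the subgroup A_3
e_C = {g: Fraction(1, len(C)) for g in C}

assert mult(e_C, e_C) == e_C           # e_C is an idempotent

# Hattori-Stallings trace: group the coefficients of e_C by the
# conjugacy classes of S_3.
def conj_class(g):
    return frozenset(compose(compose(h, g), inv(h)) for h in S3)

trace = {}
for g, c in e_C.items():
    cls = conj_class(g)
    trace[cls] = trace.get(cls, Fraction(0)) + c

assert trace[conj_class((0, 1, 2))] == Fraction(1, 3)   # identity class
assert trace[conj_class(sigma)] == Fraction(2, 3)       # class of 3-cycles
```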
\begin{corollary} There are groups $G$ of type $VF$ for which $K_0({\mathbb Q} G)$ is not finitely generated. \end{corollary}
\begin{proof} Apply Theorem~\ref{bass} to the groups with infinitely
many conjugacy classes of finite cyclic subgroups constructed in Theorem~\ref{ivf}. \end{proof}
\section{Other properties of the groups}
Suppose that $Q$ is a group of automorphisms of a finite flag complex $L$ with~$n$ vertices. It is shown in~\cite{vfg} that in this case the group $G_L{:} Q$ is isomorphic to a subgroup of the special linear group $SL_{2n}({\mathbb Z})$. We do not know whether the groups $G_L{:}\Gamma$, for infinite $L$, are linear. Residual finiteness however is easier to establish.
\begin{lemma} \label{flagq} Suppose that $\Gamma$ is residually finite and that $\Gamma$ acts cocompactly and with finite stabilizers on a flag complex $L$. There is a finite-index normal subgroup $\Gamma'$ such that for any $\Gamma''\leq \Gamma'$, the quotient $L'=L/\Gamma''$ is a flag simplicial complex. \end{lemma}
\begin{proof} There are finitely many conjugacy classes of simplex stabilizer in $L$, and each simplex stabilizer is finite. It follows that there is a finite-index normal subgroup $\Gamma_1$ of $\Gamma$ that acts freely on $L$. Since $L$ is locally finite, there are only finitely many $\Gamma_1$--orbits of paths of length 1, 2 and 3 in the 1--skeleton of $L$. Hence we may pick $\Gamma_2$ of finite-index in $\Gamma_1$, such that no two points in the same $\Gamma_2$--orbit are joined by an edge path of length less than four. We claim that we may take $\Gamma'=\Gamma_2$.
If $\Gamma''$ is any subgroup of $\Gamma_2$, then there is no edge path of length less than four between any two vertices in the same $\Gamma''$--orbit. In particular, there can be no loops in $L/\Gamma''$. Hence every simplex of $L$ maps injectively to a subspace of $L/\Gamma''$. There can be no double edges in $L/\Gamma''$, since that would give rise to an edge path of length two between vertices in the same $\Gamma''$--orbit. Thus the 1--skeleton of $L/\Gamma''$ is a simplicial complex.
Now suppose that $\bar v_0,\ldots,\bar v_n$ are a mutually adjacent set of vertices of $L/\Gamma''$, and let $v_0$ be a lift of $\bar v_0$. There exists a unique lift $v_i$ of each $\bar v_i$ that is adjacent to $v_0$. For each $i\neq j$, there exists a unique $g\in \Gamma_2$ so that $v_i$ is adjacent to $gv_j$. But if $g\neq e$, then the path $(v_j,v_0,v_i,gv_j)$ gives rise to a contradiction. Thus the $v_i$ are all adjacent to each other, and so there is a simplex $\sigma$ of $L$ with vertex set $v_0,\ldots,v_n$. It follows that the quotient $L/\Gamma''$ contains a simplex $\bar \sigma$ spanning each complete subgraph of its 1--skeleton. Suppose that $\bar\sigma'$ is any simplex of $L/\Gamma''$ spanning the same complete subgraph as $\bar \sigma$. There is a unique lift $\sigma'$ of $\bar \sigma'$ containing $v_0$. If $\sigma'\neq \sigma$, then there exists $i$ and $g\neq e$ so that $gv_i$ is a vertex of $\sigma'$. But then there is an edge path of length 2 from $v_i$ to $gv_i$. Hence any finite full subgraph of the 1--skeleton of $L/\Gamma''$ is spanned by a unique simplex, and so $L/\Gamma''$ is a flag complex. \end{proof}
\begin{theorem} \label{resfin} Let $\Gamma$ be residually finite and let $\Gamma$ act cocompactly and with finite stabilizers on a flag complex $L$. Then the group $G_L{:} \Gamma$ is also residually finite. \end{theorem}
\begin{proof} Let $g$ be a non-identity element of $G_L{:}\Gamma$. Since $(G_L{:}\Gamma)/G_L$ is isomorphic to $\Gamma$, it suffices to consider the case when $g\in G_L$. Let $K$ be a finite full subcomplex of $L$ (ie, a subcomplex containing as many simplices as possible given its 0--skeleton) such that $g$ is in the subgroup generated by the vertices of $K$, and let $J$ be a finite full subcomplex of $L$ containing $K$ and every vertex adjacent to a vertex of $K$. Let $\Gamma'$ be a finite-index subgroup of $\Gamma$ as in Lemma~\ref{flagq}, and let $\Gamma''$ be a finite-index normal subgroup of $\Gamma$ contained in $\Gamma'$ such that any two vertices of $J$ lie in distinct $\Gamma''$--orbits. Now $M=L/\Gamma''$ is a finite flag complex, and $K$ maps to a full subcomplex of $L/\Gamma''$.
The group $\Gamma/\Gamma''$ acts on $M$, and $g$ has non-trivial image under the homomorphism $G_L{:}\Gamma\rightarrow G_M{:}(\Gamma/\Gamma'')$. Since this group is isomorphic to a subgroup of $SL_{2n}({\mathbb Z})$, where $n$ is the number of vertices of $M$ (see corollary~8 of \cite{vfg}), it follows that there is a finite quotient of $G_L{:}\Gamma$ in which the image of $g$ is non-zero. \end{proof}
In the special case when $\Gamma={\mathbb Z}$ (which is the main case used earlier in the paper), we shall show how to describe the group $G_L{:}\Gamma$ as the fundamental group of a finite locally CAT(0) cube complex. First we present two lemmas concerning right-angled Artin groups.
\begin{lemma}\label{artsub} Let $N$ be a full subcomplex of a flag
complex $M$. The inclusion $i\co N\rightarrow M$ induces a split injection $G_N\rightarrow G_M$. \end{lemma}
\begin{proof} The quotient of $G_M$ by the subgroup generated by the vertices of $M-N$ is naturally isomorphic to $G_N$. \end{proof}
\begin{lemma} \label{artgps} Let the flag complex $K$ be expressed as $K= L\cup M$, where $L$ and $M$ are full subcomplexes with $N=L\cap M$. Then the group homomorphisms induced by the inclusion of each subcomplex in $K$ induce an isomorphism $G_L*_{G_N}G_M\rightarrow G_K$. \end{lemma}
\begin{proof} Immediate from the presentations of the groups, given the result of Lemma~\ref{artsub}. \end{proof}
\begin{theorem} \label{fpamalgam} Let $\Gamma$ be an infinite cyclic group generated by $\gamma$, let $\Gamma$ act on the flag complex $L$, and let $M$ be a `fundamental domain' for $\Gamma$ in the sense that $L=\bigcup_i \gamma^i M$. Define subcomplexes $N_0$ and $N_1$ by $$N_0= \gamma^{-1}M \cap M,\qquad N_1= M\cap \gamma M.$$ Then $G_L{:} \Gamma$ is isomorphic to the HNN--extension $G_M*_{G_{N_0}=G_{N_1}}$. (In this HNN--extension, the base group is $G_M$, and the stable letter conjugates the subgroup $G_{N_0}$ to the subgroup $G_{N_1}$ by the map induced by $\gamma\co N_0\rightarrow N_1$.) \end{theorem}
\begin{proof} Let $t$ denote the stable letter in the HNN--extension, and consider the homomorphism $\phi$ from the HNN--extension to ${\mathbb Z}$ that sends $t$ to $1$ and sends each element of $G_M$ to $0$. The kernel of $\phi$ is an infinite free product with amalgamation: $$ \cdots *G_{-2}*_{H_{-1}}G_{-1}*_{H_0}G_0*_{H_1} G_1*_{H_2}G_2*\cdots,$$ where $G_i$ denotes $t^{i}G_Mt^{-i}$, and $H_i$ denotes $t^{i}G_{N_0}t^{-i}$. If we define $M_i= \gamma^iM$ and $N_i=\gamma^iN_0$, there is an isomorphism $\psi_i\co G_i\rightarrow G_{M_i}$ defined as the composite $$G_i \mapright{c(t^{-i})} G_0 \mapright{1} G_M \mapright{c(\gamma^i)} G_{M_i}$$ of conjugation by $t^{-i}$ followed by the identification of $G_0$ and $G_M$, followed by conjugation by $\gamma^i$. Each of $\psi_i$ and $\psi_{i-1}$ induces an isomorphism from $H_i$ to $H_{N_i}$, and these two are the same isomorphism. The $\psi_i$ therefore fit together to make an isomorphism from $\ker(\phi)$, described as an infinite free product with amalgamation, to the following infinite free product with amalgamation: $$ \cdots *G_{M_{-2}}*_{H_{N_{-1}}}G_{M_{-1}}*_{H_{N_0}} G_{M_0}*_{H_{N_1}}G_{M_1}*_{H_{N_2}}G_{M_2}*\cdots.$$ Furthermore, this isomorphism is equivariant for the ${\mathbb Z}$--actions given by conjugation by powers of $t$ and $\gamma$. By Lemma~\ref{artgps}, the inclusions of the $G_{M_i}$ in $G_L$ induce a $\Gamma$--equivariant isomorphism between the second free product with amalgamation and $G_L$. Hence we obtain an isomorphism $G_M*_{G_{N_0}=G_{N_1}}\rightarrow G_L{:} \Gamma$ as required. \end{proof}
\begin{corollary} \label{cubecclem} Under the hypotheses of Theorem~\ref{fpamalgam}, the group $G_L{:}\Gamma$ is the fundamental group of a finite locally {\rm CAT(0)} cube complex. \end{corollary}
\begin{proof} For any flag complex $K$, let $Y_K$ denote the explicit model for the classifying space $BG_K$ described in Section~\ref{bebr}, so that $Y_K=X_K/G_K$. The naturality properties of this construction are such that $Y_{N_0}$ and $Y_{N_1}$ are subcomplexes of $Y_M$. We construct a model $Z$ for $B(G_L{:}\Gamma)$ from $Y_M$ and $Y_{N_0}\times I$ by identifying $\{0\}\times Y_{N_0}$ with $Y_{N_0}\subseteq Y_M$ via the identity map and identifying $\{1\}\times Y_{N_0}$ with $Y_{N_1}\subseteq Y_M$ via the action of $\gamma$ which gives an isomorphism from $N_0$ to $N_1$.
The space $Z$ as above is a model for $B(G_L{:} \Gamma)$. To see that $Z$ has the structure of a locally CAT(0) cube complex, one may either quote a gluing lemma (such as in \cite{BrHa}, proposition II.11.13), or one may show that the link of the unique vertex in $Z$ is a flag complex, which suffices by Gromov's lemma (\cite{BrHa}, theorem II.5.18). For any flag complex $K$, the link of the unique vertex in $Y_K$ is a flag complex $S(K)$, which is a sort of `double' of $K$: each vertex $v$ of $K$ corresponds to two vertices $v'$, $v''$ of $S(K)$, and a set of vertices of $S(K)$ is the vertex set of an $n$--simplex in $S(K)$ if and only if its image in the vertex set of $K$ is the vertex set of an $n$--simplex. (For example, in the case when $K$ is a 2--simplex, $S(K)$ is the boundary of an octahedron.) The link of the vertex in $Z$ is isomorphic to $S(M)$ with a cone attached to each of the subspaces $S(N_0)$ and $S(N_1)$, and hence it is a flag complex. \end{proof}
\begin{corollary} Each of the groups $G_L{:}({\mathbb Z}\times Q)$ constructed in Section~\ref{examps} acts cocompactly with finite stabilizers on some {\rm CAT(0)} cube complex. In particular, there is a model for the universal space for proper actions of $G_L{:}({\mathbb Z}\times Q)$ which has finitely many orbits of cells. \end{corollary}
\begin{proof} Take a finite `fundamental domain', $M'$, for the action of ${\mathbb Z}$ on $L$ (as in the statement of Theorem~\ref{fpamalgam}). In case $M'$ is not $Q$--invariant, replace $M'$ by $M= \bigcup_{q\in Q} qM'$. For this choice of $M$, there is a base-point preserving cellular $Q$--action on $Z$, the model for $B(G_L{:} {\mathbb Z})$ constructed in Corollary~\ref{cubecclem}. This induces the required action of $G_L{:}({\mathbb Z}\times Q)$ on the universal cover of $Z$. Whenever a group $H$ acts with finite stabilizers on a CAT(0) cube complex, that space is a model for the universal space for proper actions of $H$ \cite{vfg}. \end{proof}
\end{document}
Wagstaff prime
In number theory, a Wagstaff prime is a prime number of the form
${{2^{p}+1} \over 3}$
where p is an odd prime. Wagstaff primes are named after the mathematician Samuel S. Wagstaff Jr.; the Prime Pages credit François Morain for naming them in a lecture at the Eurocrypt 1990 conference. Wagstaff primes appear in the New Mersenne conjecture and have applications in cryptography.

Named after: Samuel S. Wagstaff, Jr.
Publication year: 1989[1]
Authors of publication: Bateman, P. T.; Selfridge, J. L.; Wagstaff Jr., S. S.
Number of known terms: 44
First terms: 3, 11, 43, 683
Largest known term: (2^15135397 + 1)/3
OEIS index: A000979 (Wagstaff primes: primes of the form (2^p + 1)/3)
Examples
The first three Wagstaff primes are 3, 11, and 43 because
${\begin{aligned}3&={2^{3}+1 \over 3},\\[5pt]11&={2^{5}+1 \over 3},\\[5pt]43&={2^{7}+1 \over 3}.\end{aligned}}$
Known Wagstaff primes
The first few Wagstaff primes are:
3, 11, 43, 683, 2731, 43691, 174763, 2796203, 715827883, 2932031007403, 768614336404564651, … (sequence A000979 in the OEIS)
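These first terms can be reproduced directly from the defining formula. The sketch below (illustrative code, not from the article) uses naive trial division, which is adequate at this size:

```python
def is_prime(n):
    # naive trial division, adequate for the small values below
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

# Wagstaff numbers (2^p + 1)/3 for odd primes p < 40, keeping the prime ones
odd_primes = [p for p in range(3, 40, 2) if is_prime(p)]
wagstaff = [(2**p + 1) // 3 for p in odd_primes
            if is_prime((2**p + 1) // 3)]
print(wagstaff)
# → [3, 11, 43, 683, 2731, 43691, 174763, 2796203, 715827883]
```

Note that the exponents 29 and 37 drop out: (2^29 + 1)/3 and (2^37 + 1)/3 are composite, matching the exponent list below.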
As of January 2023, known exponents which produce Wagstaff primes or probable primes are:
3, 5, 7, 11, 13, 17, 19, 23, 31, 43, 61, 79, 101, 127, 167, 191, 199, 313, 347, 701, 1709, 2617, 3539, 5807, 10501, 10691, 11279, 12391, 14479, 42737, 83339, 95369, 117239, 127031[2] (all known Wagstaff primes)
138937, 141079, 267017, 269987, 374321, 986191, 4031399, …, 13347311, 13372531, 15135397 (Wagstaff probable primes) (sequence A000978 in the OEIS)
In February 2010, Tony Reix discovered the Wagstaff probable prime:
${\frac {2^{4031399}+1}{3}}$
which has 1,213,572 digits and was the 3rd biggest probable prime ever found at this date.[3]
In September 2013, Ryan Propper announced the discovery of two additional Wagstaff probable primes:[4]
${\frac {2^{13347311}+1}{3}}$
and
${\frac {2^{13372531}+1}{3}}$
Each is a probable prime with slightly more than 4 million decimal digits. It is not currently known whether there are any exponents between 4031399 and 13347311 that produce Wagstaff probable primes.
In June 2021, Ryan Propper announced the discovery of the Wagstaff probable prime:[5]
${\frac {2^{15135397}+1}{3}}$
which is a probable prime with slightly more than 4.5 million decimal digits.
Primality testing
Primality has been proven or disproven for the values of p up to 127031. Those with p > 127031 are probable primes as of January 2023. The primality proof for p = 42737 was performed by François Morain in 2007 with a distributed ECPP implementation running on several networks of workstations for 743 GHz-days on an Opteron processor.[6] It was the third largest primality proof by ECPP from its discovery until March 2009.[7]
The Lucas–Lehmer–Riesel test can be used to identify Wagstaff PRPs. In particular, if p is an exponent of a Wagstaff prime, then
$25^{2^{p-1}}\equiv 25{\pmod {2^{p}+1}}$.[8]
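This necessary condition is cheap to check with modular exponentiation; a minimal sketch (the function name is ours) using Python's three-argument `pow`:

```python
def wagstaff_congruence(p):
    # necessary condition: 25^(2^(p-1)) ≡ 25 (mod 2^p + 1)
    # when (2^p + 1)/3 is a Wagstaff prime
    n = 2**p + 1
    return pow(25, 2**(p - 1), n) == 25 % n

# the congruence holds for the first few Wagstaff exponents
print([wagstaff_congruence(p) for p in (3, 5, 7, 11, 13)])
# → [True, True, True, True, True]
```

The `25 % n` on the right-hand side only matters for p = 3, where the modulus 9 is smaller than 25.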
Generalizations
It is natural to consider[9] more generally numbers of the form
$Q(b,n)={\frac {b^{n}+1}{b+1}}$
where the base $b\geq 2$. Since for $n$ odd we have
${\frac {b^{n}+1}{b+1}}={\frac {(-b)^{n}-1}{(-b)-1}}=R_{n}(-b)$
these numbers are called "Wagstaff numbers base $b$", and sometimes considered[10] a case of the repunit numbers with negative base $-b$.
For some specific values of $b$, all $Q(b,n)$ (with a possible exception for very small $n$) are composite because of an "algebraic" factorization. Specifically, if $b$ has the form of a perfect power with odd exponent (like 8, 27, 32, 64, 125, 128, 216, 243, 343, 512, 729, 1000, etc. (sequence A070265 in the OEIS)), then the fact that $x^{m}+1$, with $m$ odd, is divisible by $x+1$ shows that $Q(a^{m},n)$ is divisible by $a^{n}+1$ in these special cases. Another case is $b=4k^{4}$, with k a positive integer (like 4, 64, 324, 1024, 2500, 5184, etc. (sequence A141046 in the OEIS)), where we have the aurifeuillean factorization.
However, when $b$ does not admit an algebraic factorization, it is conjectured that infinitely many values of $n$ make $Q(b,n)$ prime; note that all such $n$ must be odd primes.
For $b=10$, the primes themselves have the following appearance: 9091, 909091, 909090909090909091, 909090909090909090909090909091, … (sequence A097209 in the OEIS), and the corresponding exponents $n$ are: 5, 7, 19, 31, 53, 67, 293, 641, 2137, 3011, 268207, ... (sequence A001562 in the OEIS).
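These base-10 values are easy to reproduce. The sketch below (illustrative, not from the article) swaps trial division for a standard Miller–Rabin test, since the composite candidates here can have large smallest factors:

```python
def is_probable_prime(n, bases=(2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37)):
    # Miller–Rabin: deterministic for n < 3.3 * 10**24 with these bases,
    # and a strong probable-prime test beyond that
    if n < 2:
        return False
    for b in bases:
        if n % b == 0:
            return n == b
    d, s = n - 1, 0
    while d % 2 == 0:
        d, s = d // 2, s + 1
    for b in bases:
        x = pow(b, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = x * x % n
            if x == n - 1:
                break
        else:
            return False
    return True

def Q(b, n):
    return (b**n + 1) // (b + 1)

exps = [p for p in range(3, 60, 2)
        if is_probable_prime(p) and is_probable_prime(Q(10, p))]
print(exps)                          # → [5, 7, 19, 31, 53]
print([Q(10, p) for p in exps[:3]])  # → [9091, 909091, 909090909090909091]
```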
See Repunit#Repunit primes for the list of the generalized Wagstaff primes base $b$. (Generalized Wagstaff primes base $b$ are generalized repunit primes base $-b$ with odd $n$)
The least primes p such that $Q(n,p)$ is prime are (starting with n = 2; 0 if no such p exists)
3, 3, 3, 5, 3, 3, 0, 3, 5, 5, 5, 3, 7, 3, 3, 7, 3, 17, 5, 3, 3, 11, 7, 3, 11, 0, 3, 7, 139, 109, 0, 5, 3, 11, 31, 5, 5, 3, 53, 17, 3, 5, 7, 103, 7, 5, 5, 7, 1153, 3, 7, 21943, 7, 3, 37, 53, 3, 17, 3, 7, 11, 3, 0, 19, 7, 3, 757, 11, 3, 5, 3, ... (sequence A084742 in the OEIS)
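The first few entries of this sequence can be reproduced with a bounded search (a sketch with names of our choosing; the cap exists because bases with algebraic factorizations, such as b = 8, never yield a prime):

```python
def is_prime(n):
    # trial division; the Q values reached for these small bases are tiny
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

def least_wagstaff_exponent(b, cap=100):
    # smallest odd prime p < cap with (b^p + 1)/(b + 1) prime, else 0
    for p in range(3, cap, 2):
        if is_prime(p) and is_prime((b**p + 1) // (b + 1)):
            return p
    return 0

print([least_wagstaff_exponent(b) for b in range(2, 8)])  # → [3, 3, 3, 5, 3, 3]
```

For b = 5 the entry is 5 because Q(5,3) = 21 = 3·7 is composite while Q(5,5) = 521 is prime.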
The least bases b such that $Q(b,p_n)$ is prime, where $p_n$ denotes the $n$th prime, are (starting with n = 2)
2, 2, 2, 2, 2, 2, 2, 2, 7, 2, 16, 61, 2, 6, 10, 6, 2, 5, 46, 18, 2, 49, 16, 70, 2, 5, 6, 12, 92, 2, 48, 89, 30, 16, 147, 19, 19, 2, 16, 11, 289, 2, 12, 52, 2, 66, 9, 22, 5, 489, 69, 137, 16, 36, 96, 76, 117, 26, 3, ... (sequence A103795 in the OEIS)
References
1. Bateman, P. T.; Selfridge, J. L.; Wagstaff, Jr., S. S. (1989). "The New Mersenne Conjecture". American Mathematical Monthly. 96: 125–128. doi:10.2307/2323195. JSTOR 2323195.
2. "The Top Twenty: Wagstaff".
3. "Henri & Renaud Lifchitz's PRP Top records". www.primenumbers.net. Retrieved 2021-11-13.
4. New Wagstaff PRP exponents, mersenneforum.org
5. Announcing a new Wagstaff PRP, mersenneforum.org
6. Comment by François Morain, The Prime Database: (2^42737 + 1)/3 at The Prime Pages.
7. Caldwell, Chris, "The Top Twenty: Elliptic Curve Primality Proof", The Prime Pages
8. Lifchitz, Renaud; Lifchitz, Henri (May 18, 2002) [July 2000]. "An efficient probable prime test for numbers of the form (2p + 1)/3" (PDF). Retrieved 2023-04-12.
9. Dubner, H. and Granlund, T.: Primes of the Form (bn + 1)/(b + 1), Journal of Integer Sequences, Vol. 3 (2000)
10. Repunit, Wolfram MathWorld (Eric W. Weisstein)
External links
• John Renze and Eric W. Weisstein. "Wagstaff prime". MathWorld.
• Chris Caldwell, The Top Twenty: Wagstaff at The Prime Pages.
• Renaud Lifchitz, "An efficient probable prime test for numbers of the form (2p + 1)/3".
• Tony Reix, "Three conjectures about primality testing for Mersenne, Wagstaff and Fermat numbers based on cycles of the Digraph under x2 − 2 modulo a prime".
• List of repunits in base -50 to 50
• List of Wagstaff primes base 2 to 160
Why does gravity act at the center of mass?
Sorry if this is a trivial question.
If we have a solid $E$, shouldn't gravity act on all the points $(x,y,z)$ in $E$? Why then, when we do problems, do we only consider the weight force from the center of mass?
homework-and-exercises newtonian-mechanics newtonian-gravity
Ahmed S. AttaallaAhmed S. Attaalla
$\begingroup$ Closely related: physics.stackexchange.com/q/151402 $\endgroup$ – dmckee♦ Apr 26 '17 at 3:43
$\begingroup$ It doesn't. As you've stated gravity acts on all points. However, the problem can be modeled as an equivalent system where the net force acts upon the COM of the object. ubuntu_noob shows the mathematical proof. The key here is modeling--a simplified problem that represents the behavior of reality. $\endgroup$ – James Apr 26 '17 at 12:57
$\begingroup$ It doesn't. That's one of the reasons why we have tides. The Moon pulls water closer to it a bit more strongly than it does the center of Earth and the water on the opposite side a bit more weakly. That's why sea levels rise a bit on both sides in comparison to what it is in-between. $\endgroup$ – Jyrki Lahtonen Apr 26 '17 at 13:46
$\begingroup$ And the reason "using the COM" doesn't work on the seas is (as noted in @anaximander's comment to ubuntu_noob's answer) is that they aren't a rigid body. Modelling gravity through the COM only works when the object itself won't deform. $\endgroup$ – TripeHound Apr 27 '17 at 7:14
$\begingroup$ As stated above, gravity acts on all points of an object. But since the force experienced is proportional to the mass of some element in a body (or a particle of a multi-particle system), and in calculating the centre of mass the contribution from the element is also protportional to its mass, the total force experienced by a multi-particle system is equivalent to one acting at the 'COM' on a mass equal to the total mass of the system. If the force were proportional to mass squared, this wouldn't be the case. $\endgroup$ – 21joanna12 Apr 27 '17 at 13:39
Suppose I have a collection of $n$ position vectors $x_i$, $i\in\{1,\dots,n\}$, where the mass located at $x_i$ is $m_i$. This is your body $E$, and if the total mass of the body is $M$, then $$M=\sum_{i=1}^{n}m_i.$$ Now, if $E$ is subjected to a uniform acceleration field $\vec{g}$, as specified in the answer above, then the net force acting on the body is $$F=\sum_{i=1}^{n}m_i \ddot{x}_i.$$ On the other hand, the force on the entire body is $F=Mg$. Let $X$ be a point of the body such that $\ddot{X}=g$; then I can write $F= M\ddot{X}=\sum_{i=1}^{n}m_i \ddot{x}_i$.
From this you can interpret that $$\ddot{X}=\frac{\sum_{i=1}^{n}m_i \ddot{x}_i}{\sum_{i=1}^{n}m_i}$$ And the centre of mass is defined as $$\begin{equation}\label{com} x_{com}=\frac{\sum_{i=1}^{n}m_ix_i}{\sum_{i=1}^{n}m_i} \end{equation}$$ Since the body $E$ has constant mass, you can get the definition of center of mass above by simple integration.
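A small numerical illustration of this definition (the masses and positions are made up): the weighted average gives the center of mass, and in a uniform field the individual weights sum to exactly the single force $Mg$.

```python
# illustrative numbers: three point masses on a line
masses = [1.0, 2.0, 3.0]
positions = [0.0, 1.0, 4.0]

M = sum(masses)
x_com = sum(m * x for m, x in zip(masses, positions)) / M
print(x_com)  # (0 + 2 + 12) / 6 ≈ 2.333

# in a uniform field g, summing the individual weights m_i * g
# gives exactly the single force M * g
g = -9.81
assert abs(sum(m * g for m in masses) - M * g) < 1e-12
```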
ubuntu_noobubuntu_noob
$\begingroup$ To simplify and summarise: given a rigid body in a uniform gravitational field, considering the entire weight force as being applied at the centre of mass is mathematically equivalent to having that weight spread out and applied all over the body. It's also easier to work with, because it avoids pesky things like infinitesimals and integration over areas. $\endgroup$ – anaximander Apr 26 '17 at 10:40
$\begingroup$ @anaximander Your comment stresses rigid body and that's spot on. Add some elasticity and the object starts to act as if it had multiple centers of mass. Searching for 'rigid body physics' will probably get you some nice background info. $\endgroup$ – Stijn de Witt Apr 26 '17 at 13:47
This is only true in a uniform field, and this is why: the center of mass is the average mass weighted position of an extended object. Meanwhile, the total gravitational force is the sum over all parts of the object, weighted by mass: the mass weighted integrals for the average and the sum are the same. In reality the center of gravity differs from the center of mass, since a variable gravitational field changes the later sum over parts. If you don't believe me, look at NASA's SRTM data, in which a radar antenna on a 200 ft boom in the Space Shuttle made a radar map of Earth--since the Earth's field is not uniform, the center of gravity was lower than the center of mass, and the thing kept rolling (imagine a dumbbell at 45 degrees)--thrust corrections left the boom oscillating, and that can be seen as a ripple in the map elevation.
JEBJEB
Consider two situations as shown below.
One (left diagram) where two masses are in a non-uniform gravitational field and the other (right diagram) where the two masses are in a unifgorm gravitational field.
In both cases the centre of mass is midway between the two masses $M$.
In the non uniform gravitational field the gravitation attraction on the mass closest to the large mass $W$ is bigger than the gravitational attraction on the other mass $w$, so the centre of gravity of the two masses is at position $G$ which is not midway between the two masses.
In the uniform gravitational field the gravitational attraction on the two masses $W$ is the same for the centre of gravity of the two masses is midway between them $G$ which is the same as the position of the centre of mass.
To be sure that the centre of mass and the centre of gravity are the same point the masses must be in a uniform gravitational field.
FarcherFarcher
The only time we need to know where a force acts is when we are calculating a torque. For contact forces, it is clear that the force acts at the point of contact. But for a force like gravity, that acts at a distance, it is less clear.
In reality, a rigid object is made up of many particles, and there is a small gravitational force and torque on each of them. When we only care about acceleration we only need the sum of all these forces, which is $\vec{F}_{tot} = \sum_i m_i \vec{g}= M\vec{g}$. But what about the torques?
We would like to pretend that this total gravitational force acts at a single point for the purpose of calculating torque. Is there a point $\vec{x}_{cg}$ such that $\vec{x}_{cg}\times \vec{F}_{tot}$ gives the same total torque as summing up all the small torques?
If we do sum up all the torques we find $\vec{\tau}_{tot} = \sum_i \vec{x}_i\times (m_i\vec{g}) = \left(\frac{1}{M}\sum_i m_i \vec{x}_i\right) \times (M\vec{g})$. This tells us to call $\vec{x}_{cg} = \frac{1}{M}\sum_i m_i \vec{x}_i$ the center of gravity, and if we pretend that the total force of gravity acts at this point, it will always give us the right answer for the gravitational torque. Finally, we notice that it happens to have the same form as the definition of the center of mass!
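The cancellation in that sum can be checked numerically: in a uniform field, the net gravitational torque about the center of mass vanishes. A toy 2-D sketch (all numbers are made up):

```python
masses = [1.0, 2.0, 3.5]
positions = [(0.0, 0.0), (2.0, 1.0), (-1.0, 3.0)]
g = (0.0, -9.81)  # uniform gravitational field

M = sum(masses)
com = tuple(sum(m * p[k] for m, p in zip(masses, positions)) / M
            for k in range(2))

def cross2(a, b):
    # scalar z-component of a 2-D cross product
    return a[0] * b[1] - a[1] * b[0]

# total torque about the center of mass from gravity acting on each particle
tau = sum(cross2((p[0] - com[0], p[1] - com[1]), (m * g[0], m * g[1]))
          for m, p in zip(masses, positions))
print(abs(tau) < 1e-9)  # → True: the small torques cancel exactly
```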
However! If you do the calculation yourself you might notice that if $\vec{g}$ varies from particle to particle then this derivation does not work. In this case the center of gravity is not actually well defined. There may be no $\vec{x}_{cg}$ that does what we want, and even if there is it is not unique, except in a few special cases.
Luke PritchettLuke Pritchett
In actuality gravity does act on all parts of a mass independently. For a solid or liquid the parts then act on each other. And if the size of the mass is small compared to the non-linearity of the gravitational field, the "mass effect" is an average of all the independent effects, hence you can model it as occurring "at the center of gravity".
On the other hand, if the size of the object is large compared to the non-linearity of the gravitation field, one would need to do an integral of the mass across the field to take into account what would be called "tidal effects". A very long object falling into a black hole would be pulled apart as the stress from the different forces across the object interact. This is easier to understand if one thinks of the long object as being a stream of water (very little tensile strength)
Charles JacksCharles Jacks
Gravity does pull in each and every particle in the object.
If you hold the object in one random point, then gravity pulls in the particles to the left and in the particles to the right of this point. It makes a torque at each of these particles. If there is more total torque in the left side than in the right, then the object rotates counterclockwise.
Choose another point and the torques are spread differently. Only at one specific point do the left and right side torques exactly balance out. We give this point a name: Centre of mass.
A force acting on the Centre of mass does not make the object rotate, since it creates no torque around this already balanced point. Since a free falling object pulled in by gravity only does not rotate, it makes sense to say that the gravitational pull in each particle averages out to be one big gravitational pull in the Centre of mass - especially because we want to/have to simplify the gravitational force into one force at one point to make things easier to work with than many, many tiny forces in each and every particle.
SteevenSteeven
Gravity acts on all parts of the body that have mass not just the COM. However the vector for which you calculate the effect of gravity can be applied between the COM of different bodies because mathematically it reduces to just the calculation between COMs.
Sarit SotangkurSarit Sotangkur
Barcan formula
In quantified modal logic, the Barcan formula and the converse Barcan formula (more accurately, schemata rather than formulas) (i) syntactically state principles of interchange between quantifiers and modalities; (ii) semantically state a relation between domains of possible worlds. The formulas were introduced as axioms by Ruth Barcan Marcus, in the first extensions of modal propositional logic to include quantification.[1]
Related formulas include the Buridan formula.
The Barcan formula
The Barcan formula is:
$\forall x\Box Fx\rightarrow \Box \forall xFx$.
In English, the schema reads: If every x is necessarily F, then it is necessary that every x is F. It is equivalent to
$\Diamond \exists xFx\to \exists x\Diamond Fx$.
The Barcan formula has generated some controversy because—in terms of possible world semantics—it implies that all objects which exist in any possible world (accessible to the actual world) exist in the actual world, i.e. that domains cannot grow when one moves to accessible worlds. This thesis is sometimes known as actualism—i.e. that there are no merely possible individuals. There is some debate as to the informal interpretation of the Barcan formula and its converse.
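The role of growing domains can be made concrete with a toy variable-domain Kripke model (all names below are illustrative): world w0 sees w1, the domain grows from {a} to {a, b}, and F fails of the new individual b. Then the antecedent of the Barcan formula holds at w0 while the consequent fails.

```python
# toy variable-domain Kripke counter-model to the Barcan formula
access = {"w0": ["w1"], "w1": []}
domain = {"w0": {"a"}, "w1": {"a", "b"}}
F = {("w0", "a"), ("w1", "a")}  # b fails to be F at w1

def box(w, phi):
    # □phi holds at w iff phi holds at every world accessible from w
    return all(phi(v) for v in access[w])

def antecedent(w):
    # ∀x □F(x), with x ranging over the domain of w
    return all(box(w, lambda v, x=x: (v, x) in F) for x in domain[w])

def consequent(w):
    # □ ∀x F(x), with x ranging over the domain of each accessed world
    return box(w, lambda v: all((v, x) in F for x in domain[v]))

print(antecedent("w0"), consequent("w0"))  # → True False
```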
An informal argument against the plausibility of the Barcan formula would be the interpretation of the predicate Fx as "x is a machine that can tap all the energy locked in the waves of the Atlantic Ocean in a practical and efficient way". In its equivalent form above, the antecedent $\Diamond \exists xFx$ seems plausible since it is at least theoretically possible that such a machine could exist. However, it is not obvious that this implies that there exists a machine that possibly could tap the energy of the Atlantic.
Converse Barcan formula
The converse Barcan formula is:
$\Box \forall xFx\rightarrow \forall x\Box Fx$.
It is equivalent to
$\exists x\Diamond Fx\to \Diamond \exists xFx$.
If a frame is based on a symmetric accessibility relation, then the Barcan formula will be valid in the frame if, and only if, the converse Barcan formula is valid in the frame. It states that domains cannot shrink as one moves to accessible worlds, i.e. that individuals cannot cease to exist. The converse Barcan formula is taken to be more plausible than the Barcan formula.
See also
• Commutative property
References
1. Journal of Symbolic Logic 11 (1946) and 12 (1947), published under the name Ruth C. Barcan.
External links
• Barcan both ways by Melvin Fitting
• Contingent Objects and the Barcan Formula by Hayaki Reina
Source: Wikipedia
Project acronym 3DBrainStrom
Project Brain metastases: Deciphering tumor-stroma interactions in three dimensions for the rational design of nanomedicines
Researcher (PI) Ronit Satchi Fainaro
Host Institution (HI) TEL AVIV UNIVERSITY
Summary Brain metastases represent a major therapeutic challenge. Despite significant breakthroughs in targeted therapies, survival rates of patients with brain metastases remain poor. Nowadays, discovery, development and evaluation of new therapies are performed on human cancer cells grown in 2D on rigid plastic plates followed by in vivo testing in immunodeficient mice. These experimental settings are lacking and constitute a fundamental hurdle for the translation of preclinical discoveries into clinical practice. We propose to establish 3D-printed models of brain metastases (Aim 1), which include brain extracellular matrix, stroma and serum containing immune cells flowing in functional tumor vessels. Our unique models better capture the clinical physio-mechanical tissue properties, signaling pathways, hemodynamics and drug responsiveness. Using our 3D-printed models, we aim to develop two new fronts for identifying novel clinically-relevant molecular drivers (Aim 2) followed by the development of precision nanomedicines (Aim 3). We will exploit our vast experience in anticancer nanomedicines to design three therapeutic approaches that target various cellular compartments involved in brain metastases: 1) Prevention of brain metastatic colonization using targeted nano-vaccines, which elicit antitumor immune response; 2) Intervention of tumor-brain stroma cells crosstalk when brain micrometastases establish; 3) Regression of macrometastatic disease by selectively targeting tumor cells. These approaches will materialize using our libraries of polymeric nanocarriers that selectively accumulate in tumors. This project will result in a paradigm shift by generating new preclinical cancer models that will bridge the translational gap in cancer therapeutics. 
The insights and tumor-stroma-targeted nanomedicines developed here will pave the way for prediction of patient outcome, revolutionizing our perception of tumor modelling and consequently the way we prevent and treat cancer.
Project acronym ANYONIC
Project Statistics of Exotic Fractional Hall States
Researcher (PI) Mordehai HEIBLUM
Host Institution (HI) WEIZMANN INSTITUTE OF SCIENCE
Summary Since their discovery, Quantum Hall Effects have unfolded intriguing avenues of research, exhibiting a multitude of unexpected exotic states: accurate quantized conductance states; particle-like and hole-conjugate fractional states; counter-propagating charge and neutral edge modes; and fractionally charged quasiparticles - abelian and (predicted) non-abelian. Since the sought-after anyonic statistics of fractional states is yet to be verified, I propose to launch a thorough search for it employing new means. I believe that our studies will serve the expanding field of the emerging family of topological materials. Our on-going attempts to observe quasiparticle (qp) interference, in order to uncover their exchange statistics (under ERC), taught us that spontaneous, non-topological, 'neutral edge modes' are the main culprit responsible for qp dephasing. In an effort to quench the neutral modes, we plan to develop a new class of micro-size interferometers, based on synthetically engineered fractional modes. Flowing away from the fixed physical edge, their local environment can be controlled, making it less hospitable for the neutral modes. Having at hand our synthesized helical-type fractional modes, it is highly tempting to employ them to form localized para-fermions, which will extend the family of exotic states. This can be done by proximitizing them to a superconductor, or gapping them via inter-mode coupling. The less familiar thermal conductance measurements, which we recently developed (under ERC), will be applied throughout our work to identify 'topological orders' of exotic states; namely, distinguishing between abelian and non-abelian fractional states. The proposal is based on an intensive and continuous MBE effort, aimed at developing extremely high purity, GaAs based, structures.
Among them, structures that support our new synthetic modes that are amenable to manipulation, and others that host rare exotic states, such as v=5/2, 12/5, 19/8, and 35/16.
Project acronym ATOP
Project Atomically-engineered nonlinear photonics with two-dimensional layered material superlattices
Researcher (PI) Zhipei SUN
Summary The project aims at introducing a paradigm shift in the development of nonlinear photonics with atomically-engineered two-dimensional (2D) van der Waals superlattices (2DSs). Monolayer 2D materials have large optical nonlinear susceptibilities, a few orders of magnitude larger than typical traditional bulk materials. However, the nonlinear frequency conversion efficiency of monolayer 2D materials is typically weak, mainly due to their extremely short interaction length (~atomic scale) and relatively large absorption coefficient (e.g., >5×10^7 m^-1 in the visible range for graphene and MoS2 after thickness normalization). In this context, I will construct atomically-engineered heterojunction-based 2DSs to significantly enhance the nonlinear optical responses of 2D materials by coherently increasing the light-matter interaction length and efficiently creating fundamentally new physical properties (e.g., reducing optical loss and increasing nonlinear susceptibilities). The concrete project objectives are to theoretically calculate, experimentally fabricate and study optical nonlinearities of 2DSs for next-generation nonlinear photonics at the nanoscale. More specifically, I will use 2DSs as new building blocks to develop three of the most disruptive nonlinear photonic devices: (1) on-chip optical parametric generation sources; (2) broadband Terahertz sources; (3) high-purity photon-pair emitters. These devices will lead to a breakthrough technology enabling highly-integrated, highly efficient and wideband lab-on-chip photonic systems with unprecedented performance in system size, power consumption, flexibility and reliability, ideally fitting numerous growing and emerging applications, e.g. metrology, portable sensing/imaging, and quantum communications. Based on my proven track record and my pioneering work on 2D-material-based photonics and optoelectronics, I believe I will accomplish this ambitious frontier research program with a strong interdisciplinary nature.
Project acronym EMERGE
Project Reconstructing the emergence of the Milky Way's stellar population with Gaia, SDSS-V and JWST
Researcher (PI) Dan Maoz
Summary Understanding how the Milky Way arrived at its present state requires a large volume of precision measurements of our Galaxy's current makeup, as well as an empirically based understanding of the main processes involved in the Galaxy's evolution. Such data are now about to arrive in the flood of quality information from Gaia and SDSS-V. The demography of the stars and of the compact stellar remnants in our Galaxy, in terms of phase-space location, mass, age, metallicity, and multiplicity are data products that will come directly from these surveys. I propose to integrate this information into a comprehensive picture of the Milky Way's present state. In parallel, I will build a Galactic chemical evolution model, with input parameters that are as empirically based as possible, that will reproduce and explain the observations. To get those input parameters, I will measure the rates of supernovae (SNe) in nearby galaxies (using data from past and ongoing surveys) and in high-redshift proto-clusters (by conducting a SN search with JWST), to bring into sharp focus the element yields of SNe and the distribution of delay times (the DTD) between star formation and SN explosion. These empirically determined SN metal-production parameters will be used to find the observationally based reconstruction of the Galaxy's stellar formation history and chemical evolution that reproduces the observed present-day Milky Way stellar population. The population census of stellar multiplicity with Gaia+SDSS-V, and particularly of short-orbit compact-object binaries, will hark back to the rates and the element yields of the various types of SNe, revealing the connections between various progenitor systems, their explosions, and their rates. The plan, while ambitious, is feasible, thanks to the data from these truly game-changing observational projects. 
My team will perform all steps of the analysis and will combine the results to obtain the clearest picture of how our Galaxy came to be.
Project acronym HomDyn
Project Homogenous dynamics, arithmetic and equidistribution
Researcher (PI) Elon Lindenstrauss
Summary We consider the dynamics of actions on homogeneous spaces of algebraic groups, and propose to tackle a wide range of problems in the area, including the central open problems. One main focus in our proposal is the study of the intriguing and somewhat subtle rigidity properties of higher rank diagonal actions. We plan to develop new tools to study invariant measures for such actions, including the zero entropy case, and in particular Furstenberg's Conjecture about $\times 2,\times 3$-invariant measures on $\mathbb{R} / \mathbb{Z}$. A second main focus is on obtaining quantitative and effective equidistribution and density results for unipotent flows, with emphasis on obtaining results with a polynomial error term. One important ingredient in our study of both diagonalizable and unipotent actions is arithmetic combinatorics. Interconnections between these subjects and arithmetic equidistribution properties, Diophantine approximations and automorphic forms will be pursued.
Project acronym NanoProt-ID
Project Proteome profiling using plasmonic nanopore sensors
Researcher (PI) Amit MELLER
Host Institution (HI) TECHNION - ISRAEL INSTITUTE OF TECHNOLOGY
Summary To date, antibody-free protein identification methods have not reached single-molecule precision. Instead, they rely on averaging from many cells, obscuring the details of important biological processes. The ability to identify each individual protein from within a single cell would transform proteomics research and biomedicine. However, single protein identification (ID) presents a major challenge, necessitating a breakthrough in single-molecule sensing technologies. We propose to develop a method for proteome-level analysis, with single protein resolution. Bioinformatics studies show that >99% of human proteins can be uniquely identified by the order in which only three amino-acids, Lysine, Cysteine, and Methionine (K, C and M, respectively), appear along the proteins' chain. By specifically labelling K, C and M residues with three distinct fluorophores, and threading them, one by one, through solid-state nanopores equipped with custom plasmonic amplifiers, we hypothesize that we can obtain multi-color fluorescence time-trace fingerprints uniquely representing most proteins in the human proteome. The feasibility of our method will be established by attaining 4 main aims: i) in vitro K,C,M protein labelling, ii) development of a machine learning classifier to uniquely ID proteins based on their optical fingerprints, iii) fabrication of state-of-the-art plasmonic nanopores for high-resolution optical sensing of proteins, and iv) devising methods for regulating the translocation speed to enhance the signal to noise ratio. Next, we will scale up our platform to enable the analysis of thousands of different proteins in minutes, and apply it to sense blood-secreted proteins, as well as whole proteomes in pre- and post-metastatic cancer cells. NanoProt-ID constitutes the first and most challenging step towards the proteomic analysis of individual cells, opening vast research directions and applications in biomedicine and systems biology.
Project acronym NeuroCompSkill
Project A neuro-computational account of success and failure in acquiring communication skills
Researcher (PI) Merav Ahissar
Summary Why do most people acquire expertise with practice whereas others fail to master the same tasks? NeuroCompSkill offers a neuro-computational framework that explains failure in acquiring verbal and non-verbal communication skills. It focuses on individuals' ability to use task-relevant regularities, postulating that efficient use of such regularities is crucial for acquiring expertise. Specifically, it proposes that using stable temporal regularities, acquired across long time windows (> 3 sec to days), is crucial for the formation of linguistic (phonological, morphological and orthographic) skills. In contrast, fast updating of recent events (within ~0.3 to 3 sec) is crucial for the formation of predictions in interactive, social communication. Based on this, I propose that individuals with difficulties in retaining regularities will have difficulties in verbal communication, whereas individuals with difficulties in fast updating will have difficulties in social non-verbal communication. Five inter-related work packages (WP) will test the predictions that: (WP1) behaviourally – individuals with language and reading difficulties will have impoverished categorical representations, whereas individuals with non-verbal difficulties will be slow in adapting to changed statistics. (WP2) developmentally – poor detection of relevant regularities will be an early marker of related difficulties. (WP3) computationally – profiles of impaired inference will match the predicted time window. (WP4) neuronally – dynamics of neural adaptation will match the dynamics of behavioural inference. (WP5) structurally – different brain structures will be associated with the different time windows of inference. NeuroCompSkill is ground-breaking in proposing a unifying, theory-based, testable principle, which explains core difficulties in two prevalent developmental communication disorders. Its 5 WPs will lay the foundations of a comprehensive approach to failure in skill acquisition.
Project acronym PCPABF
Project Challenging Computational Infeasibility: PCP and Boolean functions
Researcher (PI) Shmuel Avraham Safra
Summary Computer Science, in particular Analysis of Algorithms and Computational-Complexity theory, classifies algorithmic problems into feasible ones and those that cannot be efficiently solved. Many fundamental problems have been shown NP-hard; therefore, unless P=NP, they are infeasible. Consequently, research efforts shifted towards approximation algorithms, which find close-to-optimal solutions for NP-hard optimization problems. The PCP Theorem and its application to infeasibility of approximation establish that, unless P=NP, there are no efficient approximation algorithms for numerous classical problems; research that won the authors --the PI included-- the 2001 Gödel prize. To show infeasibility of approximation of some fundamental problems, however, a stronger PCP was postulated in 2002, namely, Khot's Unique-Games Conjecture. It has transformed our understanding of optimization problems, provoked new tools in order to refute it, and motivated new sophisticated techniques aimed at proving it. Recently Khot, Minzer (a student of the PI) and the PI proved a related conjecture: the 2-to-2-Games conjecture (our paper just won the Best Paper award at FOCS'18). In light of that progress, recognized by the community as half the distance towards the Unique-Games conjecture, resolving the Unique-Games conjecture seems much more likely. A field that plays a crucial role in this progress is Analysis of Boolean functions. For the recent breakthrough we had to dive deep into expansion properties of the Grassmann graph. The insight was subsequently applied to achieve much-awaited progress on fundamental properties of the Johnson graph. With the emergence of cloud-computing, cryptocurrency, public-ledger and Blockchain technologies, the PCP methodology has found new and exciting applications. This framework governs SNARKs, which is a new, emerging technology, and the ZCASH technology on top of Blockchain.
This is a thriving research area, but also an extremely vibrant High-Tech sector.
Project acronym QUAMAP
Project Quasiconformal Methods in Analysis and Applications
Researcher (PI) Kari ASTALA
Summary The use of delicate quasiconformal methods, in conjunction with convex integration and/or nonlinear Fourier analysis, will be the common theme of the proposal. A number of important outstanding problems are susceptible to attack via these methods. First and foremost, Morrey's fundamental question in two dimensional vectorial calculus of variations will be considered as well as the related conjecture of Iwaniec regarding the sharp $L^p$ bounds for the Beurling transform. Understanding the geometry of conformally invariant random structures will be one of the central goals of the proposal. Uhlmann's conjecture regarding the optimal regularity for uniqueness in Calder\'on's inverse conductivity problem will also be considered, as well as the applications to imaging. Further goals are to be found in fluid mechanics and scattering, as well as the fundamental properties of quasiconformal mappings, interesting in their own right, such as the outstanding deformation problem for chord-arc curves.
The use of delicate quasiconformal methods, in conjunction with convex integration and/or nonlinear Fourier analysis, will be the common theme of the proposal. A number of important outstanding problems are susceptible to attack via these methods. First and foremost, Morrey's fundamental question in two dimensional vectorial calculus of variations will be considered as well as the related conjecture of Iwaniec regarding the sharp $L^p$ bounds for the Beurling transform. Understanding the geometry of conformally invariant random structures will be one of the central goals of the proposal. Uhlmann's conjecture regarding the optimal regularity for uniqueness in Calder\'on's inverse conductivity problem will also be considered, as well as the applications to imaging. Further goals are to be found in fluid mechanics and scattering, as well as the fundamental properties of quasiconformal mappings, interesting in their own right, such as the outstanding deformation problem for chord-arc curves.
Project acronym RegRNA
Project Mechanistic principles of regulation by small RNAs
Researcher (PI) Hanah Margalit
Summary Small RNAs (sRNAs) are major regulators of gene expression in bacteria, exerting their regulation in trans by base pairing with target RNAs. Traditionally, sRNAs were considered post-transcriptional regulators, mainly regulating translation by blocking or exposing the ribosome binding site. However, accumulating evidence suggest that sRNAs can exploit the base pairing to manipulate their targets in different ways, assisting or interfering with various molecular processes involving the target RNA. Currently there are a few examples of these alternative regulation modes, but their extent and implications in the cellular circuitry have not been assessed. Here we propose to take advantage of the power of RNA-seq-based technologies to develop innovative approaches to address these challenges transcriptome-wide. These approaches will enable us to map the regulatory mechanism a sRNA employs per target through its effect on a certain molecular process. For feasibility we propose studying three processes: RNA cleavage by RNase E, pre-mature Rho-dependent transcription termination, and transcription elongation pausing. Finding targets regulated by sRNA manipulation of the two latter processes would be especially intriguing, as it would suggest that sRNAs can function as gene-specific transcription regulators (alluded to by our preliminary results). As a basis of our research we will use the network of ~2400 sRNA-target pairs in Escherichia coli, deciphered by RIL-seq (a method we recently developed for global in vivo detection of sRNA targets). Revealing the regulatory mechanism(s) employed per target will shed light on the principles underlying the integration of distinct sRNA regulation modes in specific regulatory circuits and cellular contexts, with direct implications to synthetic biology and pathogenic bacteria. 
Our study may change the way sRNAs are perceived, from post-transcriptional to versatile regulators that apply different regulation modes to different targets.
Small RNAs (sRNAs) are major regulators of gene expression in bacteria, exerting their regulation in trans by base pairing with target RNAs. Traditionally, sRNAs were considered post-transcriptional regulators, mainly regulating translation by blocking or exposing the ribosome binding site. However, accumulating evidence suggest that sRNAs can exploit the base pairing to manipulate their targets in different ways, assisting or interfering with various molecular processes involving the target RNA. Currently there are a few examples of these alternative regulation modes, but their extent and implications in the cellular circuitry have not been assessed. Here we propose to take advantage of the power of RNA-seq-based technologies to develop innovative approaches to address these challenges transcriptome-wide. These approaches will enable us to map the regulatory mechanism a sRNA employs per target through its effect on a certain molecular process. For feasibility we propose studying three processes: RNA cleavage by RNase E, pre-mature Rho-dependent transcription termination, and transcription elongation pausing. Finding targets regulated by sRNA manipulation of the two latter processes would be especially intriguing, as it would suggest that sRNAs can function as gene-specific transcription regulators (alluded to by our preliminary results). As a basis of our research we will use the network of ~2400 sRNA-target pairs in Escherichia coli, deciphered by RIL-seq (a method we recently developed for global in vivo detection of sRNA targets). Revealing the regulatory mechanism(s) employed per target will shed light on the principles underlying the integration of distinct sRNA regulation modes in specific regulatory circuits and cellular contexts, with direct implications to synthetic biology and pathogenic bacteria. Our study may change the way sRNAs are perceived, from post-transcriptional to versatile regulators that apply different regulation modes to different targets.
Project acronym SensStabComp
Project Sensitivity, Stability, and Computation
Researcher (PI) Gil KALAI
Host Institution (HI) INTERDISCIPLINARY CENTER (IDC) HERZLIYA
Summary Noise sensitivity and noise stability of Boolean functions, percolation, and other models were introduced in a paper by Benjamini, Kalai, and Schramm (1999) and were extensively studied in the last two decades. We propose to extend this study to various stochastic and combinatorial models, and to explore connections with computer science, quantum information, voting methods and other areas. The first goal of our proposed project is to push the mathematical theory of noise stability and noise sensitivity forward for various models in probabilistic combinatorics and statistical physics. A main mathematical tool, going back to Kahn, Kalai, and Linial (1988), is applications of (high-dimensional) Fourier methods, and our second goal is to extend and develop these discrete Fourier methods. Our third goal is to find applications toward central old-standing problems in combinatorics, probability and the theory of computing. The fourth goal of our project is to further develop the ``argument against quantum computers'' which is based on the insight that noisy intermediate scale quantum computing is noise stable. This follows the work of Kalai and Kindler (2014) for the case of noisy non-interacting bosons. The fifth goal of our proposal is to enrich our mathematical understanding and to apply it, by studying connections of the theory with various areas of theoretical computer science, and with the theory of social choice.
Noise sensitivity and noise stability of Boolean functions, percolation, and other models were introduced in a paper by Benjamini, Kalai, and Schramm (1999) and were extensively studied in the last two decades. We propose to extend this study to various stochastic and combinatorial models, and to explore connections with computer science, quantum information, voting methods and other areas. The first goal of our proposed project is to push the mathematical theory of noise stability and noise sensitivity forward for various models in probabilistic combinatorics and statistical physics. A main mathematical tool, going back to Kahn, Kalai, and Linial (1988), is applications of (high-dimensional) Fourier methods, and our second goal is to extend and develop these discrete Fourier methods. Our third goal is to find applications toward central old-standing problems in combinatorics, probability and the theory of computing. The fourth goal of our project is to further develop the ``argument against quantum computers'' which is based on the insight that noisy intermediate scale quantum computing is noise stable. This follows the work of Kalai and Kindler (2014) for the case of noisy non-interacting bosons. The fifth goal of our proposal is to enrich our mathematical understanding and to apply it, by studying connections of the theory with various areas of theoretical computer science, and with the theory of social choice.
Project acronym SynProAtCell
Project Delivery and On-Demand Activation of Chemically Synthesized and Uniquely Modified Proteins in Living Cells
Researcher (PI) Ashraf BRIK
Summary While advanced molecular biology approaches provide insight on the role of proteins in cellular processes, their ability to freely modify proteins and control their functions when desired is limited, hindering the achievement of a detailed understanding of the cellular functions of numerous proteins. At the same time, chemical synthesis of proteins allows for unlimited protein design, enabling the preparation of unique protein analogues that are otherwise difficult or impossible to obtain. However, effective methods to introduce these designed proteins into cells are for the most part limited to simple systems. To monitor proteins cellular functions and fates in real time, and in order to answer currently unanswerable fundamental questions about the cellular roles of proteins, the fields of protein synthesis and cellular protein manipulation must be bridged by significant advances in methods for protein delivery and real-time activation. Here, we propose to develop a general approach for enabling considerably more detailed in-cell study of uniquely modified proteins by preparing proteins having the following features: 1) traceless cell delivery unit(s), 2) an activation unit for on-demand activation of protein function in the cell, and 3) a fluorescence probe for monitoring the state and the fate of the protein. We will adopt this approach to shed light on the processes of ubiquitination and deubiquitination, which are critical cellular signals for many biological processes. We will employ our approach to study 1) the effect of inhibition of deubiquitinases in cancer. 2) Examining effect of phosphorylation on proteasomal degradation and on ubiquitin chain elongation. 3) Examining effect of covalent attachment of a known ligase ligand to a target protein on its degradation Moreover, which could trigger the development of new methods to modify the desired protein in cell by selective chemistries and so rationally promote their degradation.
While advanced molecular biology approaches provide insight on the role of proteins in cellular processes, their ability to freely modify proteins and control their functions when desired is limited, hindering the achievement of a detailed understanding of the cellular functions of numerous proteins. At the same time, chemical synthesis of proteins allows for unlimited protein design, enabling the preparation of unique protein analogues that are otherwise difficult or impossible to obtain. However, effective methods to introduce these designed proteins into cells are for the most part limited to simple systems. To monitor proteins cellular functions and fates in real time, and in order to answer currently unanswerable fundamental questions about the cellular roles of proteins, the fields of protein synthesis and cellular protein manipulation must be bridged by significant advances in methods for protein delivery and real-time activation. Here, we propose to develop a general approach for enabling considerably more detailed in-cell study of uniquely modified proteins by preparing proteins having the following features: 1) traceless cell delivery unit(s), 2) an activation unit for on-demand activation of protein function in the cell, and 3) a fluorescence probe for monitoring the state and the fate of the protein. We will adopt this approach to shed light on the processes of ubiquitination and deubiquitination, which are critical cellular signals for many biological processes. We will employ our approach to study 1) the effect of inhibition of deubiquitinases in cancer. 2) Examining effect of phosphorylation on proteasomal degradation and on ubiquitin chain elongation. 3) Examining effect of covalent attachment of a known ligase ligand to a target protein on its degradation Moreover, which could trigger the development of new methods to modify the desired protein in cell by selective chemistries and so rationally promote their degradation.
Project acronym ZARAH
Project Women's labour activism in Eastern Europe and transnationally, from the age of empires to the late 20th century
Researcher (PI) Susan Carin Zimmermann
Host Institution (HI) KOZEP-EUROPAI EGYETEM
Summary ZARAH explores the history of women's labour activism and organizing to improve labour conditions and life circumstances of lower and working class women and their communities—moving these women from the margins of labour, gender, and European history to the centre of historical study. ZARAH's research rationale is rooted in the interest in the interaction of gender, class, and other dimensions of difference (e.g. ethnicity and religion) as forces that shaped women's activism. It addresses the gender bias in labour history, the class bias in gender history, and the regional bias in European history. ZARAH conceives of women's labour activism as emerging from the confluence of local, nation-wide, border-crossing and international initiatives, interactions and networking. It studies this activism in the Austro-Hungarian and Ottoman Empires, the post-imperial nation states, and during the Cold War and the years thereafter. Employing a long-term and trans-regional perspective, ZARAH highlights how a history of numerous social upheavals, and changing borders and political systems shaped the agency of the women studied, and examines their contribution to the struggle for socio-economic inclusion and the making of gender-, labour-, and social policies. ZARAH comprises, in addition to the PI, an international group of nine post-doctoral and doctoral researchers at CEU, distinguished by their excellent command of the history and languages of the region. Research rationale, research questions, and methodological framework were developed through an intensive exploratory research phase (2016–2017). ZARAH is a pioneering project that consists of a web of component and collaborative studies, which include all relevant groups of activists and activisms, span the whole region, and cover the period between the 1880s and the 1990s. 
It will generate key research resources that are available to all students and scholars, and will set the stage for research for a long time to come.
ZARAH explores the history of women's labour activism and organizing to improve labour conditions and life circumstances of lower and working class women and their communities—moving these women from the margins of labour, gender, and European history to the centre of historical study. ZARAH's research rationale is rooted in the interest in the interaction of gender, class, and other dimensions of difference (e.g. ethnicity and religion) as forces that shaped women's activism. It addresses the gender bias in labour history, the class bias in gender history, and the regional bias in European history. ZARAH conceives of women's labour activism as emerging from the confluence of local, nation-wide, border-crossing and international initiatives, interactions and networking. It studies this activism in the Austro-Hungarian and Ottoman Empires, the post-imperial nation states, and during the Cold War and the years thereafter. Employing a long-term and trans-regional perspective, ZARAH highlights how a history of numerous social upheavals, and changing borders and political systems shaped the agency of the women studied, and examines their contribution to the struggle for socio-economic inclusion and the making of gender-, labour-, and social policies. ZARAH comprises, in addition to the PI, an international group of nine post-doctoral and doctoral researchers at CEU, distinguished by their excellent command of the history and languages of the region. Research rationale, research questions, and methodological framework were developed through an intensive exploratory research phase (2016–2017). ZARAH is a pioneering project that consists of a web of component and collaborative studies, which include all relevant groups of activists and activisms, span the whole region, and cover the period between the 1880s and the 1990s. It will generate key research resources that are available to all students and scholars, and will set the stage for research for a long time to come. 
| CommonCrawl |
\begin{document}
\begin{frontmatter}
\title{Density of Schr\"odinger Weyl-Titchmarsh m functions on Herglotz functions}
\author{Injo Hur}
\address{Mathematics Department, University of Oklahoma, Norman, OK, USA, 73019\\
and Mathematics Department, Sogang University, Seoul, Republic of Korea, 04107}
\begin{abstract} We show that the Herglotz functions that arise as Weyl-Titchmarsh $m$ functions of one-dimensional Schr\"odinger operators are dense in the space of all Herglotz functions with respect to uniform convergence on compact subsets of the upper half plane. This result is obtained as an application of de Branges theory of canonical systems. \end{abstract}
\begin{keyword}
Canonical system \sep Herglotz function \sep Schr\"odinger operator \sep Weyl-Titchmarsh $m$ function
\MSC[2010] Primary 34L40 34A55 \sep Secondary 81Q10
\end{keyword}
\end{frontmatter}
\section{Introduction}\label{secintro}
We discuss the very natural question of whether any arbitrary Herglotz function can be approximated by Weyl-Titchmarsh $m$ functions \cite{Titch,Weyl} of Schr\"odinger operators $S=-d^2/dx^2+V(x)$. These $m$ functions, called \textit{Schr\"odinger $m$ functions}, have several descriptions which are essentially obtained by the inverse spectral theories of these operators \cite{Borg,GL, GS2,Hor,Lev,Mar1,Mar2,RSim,RemAf,RemdB,SimIST}. Informally, these results say that a Herglotz function is a Schr\"odinger $m$ function precisely when it has the right asymptotics for large $z$.
It is not clear, however, how to express precise conditions along these lines. The descriptions in these theories involve Gelfand-Levitan type conditions, which are usually stated in terms of a Fourier-Laplace type transform of the spectral measure. The problem above therefore seems to lead to difficult issues about how to enforce certain asymptotic behavior on approximating functions, especially since it is not even clear exactly which behavior has to be enforced.
Our goal in this paper is to present a transparent way to avoid these difficulties and thereby answer the question positively. The crucial machinery is de Branges theory of canonical systems \cite{Ach,deB1,deB2,deB3,deB4,deB,Dym,HSW,KL,RemdB,Sakh,Win,Win2}, which enables us to rephrase the problem as one about approximations of canonical systems.
More precisely, two well known facts from the theory are used: the first is the one-to-one correspondence between Herglotz functions and (trace-normed) canonical systems \cite{deB,Win}, and the second is the fact that trace-normed canonical systems are always in a limit point case at $\infty$ \cite{Ach,deB2}. The latter, in particular, is used to construct a topology on canonical systems which interacts very well with Herglotz functions.
The last piece of information for the route is the characterization of certain special canonical systems. They are called \textit{Schr\"odinger canonical systems}, since they are Schr\"odinger equations in disguise: each such system shares its $m$ function with the corresponding Schr\"odinger equation. This means that it is unnecessary to check whether the $m$ functions associated with Schr\"odinger canonical systems are Schr\"odinger $m$ functions; they already are. The characterization, moreover, shows how to construct these systems. Besides de Branges theory, it is the other key that allows us to work with canonical systems, not with their $m$ functions.
Based on the three ingredients above, let us present our method for the problem in terms of canonical systems; see also the box below. Given a Herglotz function, by de Branges theory there exists a unique trace-normed canonical system whose $m$ function is the given Herglotz function. This system is then approximated by tailor-made Schr\"odinger canonical systems with respect to the topology on canonical systems that interacts well with Herglotz functions. This implies that the associated Schr\"odinger $m$ functions converge to the given Herglotz function uniformly on compact subsets of the upper half plane.
\begin{framed} \begin{center} \small{ $\begin{CD} \textbf{Herglotz functions} @<\textrm{\large{?}}<< \textbf{Schr\"odinger $m$ functions}\\ @V\textrm{\underline{de Branges}}V\textrm{\underline{theory}}V @AAA\\ \textbf{Canonical systems} @<\textrm{Convergence in}<\textrm{canonical systems}< \textbf{Schr\"odinger canonical systems} \end{CD}$ } \end{center} \end{framed}
We would like to highlight two further points. First, the main result is in fact stronger than stated above, since it will be shown that all Weyl-Titchmarsh $m$ functions corresponding to Schr\"odinger operators with both \textit{smooth} potentials and some \textit{fixed} boundary condition at 0 are dense in the space of all Herglotz functions. This will be discussed in Sections \ref{secScs} and, in particular, \ref{secpf}. Second, the procedure above is so general that it has a lot of potential to give ideas about open questions on Schr\"odinger operators from the viewpoint of (inverse) spectral theory.\\
This paper is organized as follows. The following section provides basic material about Schr\"odinger operators, canonical systems and their $m$ functions. In Section \ref{secDensity} the main result is stated, together with several comments. We then, in Section \ref{secScs}, characterize all Schr\"odinger canonical systems, that is, all canonical systems which can be written as Schr\"odinger equations. As the last preparation, a topology on canonical systems is constructed in Section \ref{sectop}; in order not to interrupt the main theme, several continuity properties are verified in Appendix A. Finally, in Section \ref{secpf} we prove the stronger result that all Schr\"odinger $m$ functions with a fixed boundary condition at 0 and smooth potentials are dense in the space of all Herglotz functions. \\
\section{Preliminaries}\label{secpre} \subsection{Schr\"odinger operators and their $m$ functions} Let us start with one-dimensional Schr\"odinger operators \begin{equation}\label{Schop} S=-\frac{d^2}{dx^2}+V(x) \end{equation} on $L^2(0,b)$, where $0<b<\infty$ or $b=\infty$, and $V$ is a real-valued locally integrable function, called the potential. The Schr\"odinger eigenvalue equation associated with (\ref{Schop}) is \begin{equation} \label{Se} -y''(x,z)+V(x)y(x,z)=zy(x,z), \quad x\in(0,b) \end{equation} where $z$ is a spectral parameter. It is well known that each operator (\ref{Schop}), or equivalently each equation (\ref{Se}), with boundary condition(s) at 0 and possibly at $b$ has a unique Weyl-Titchmarsh $m$ function and vice versa \cite{Borg,Mar1,Titch,Weyl}.
More precisely, put a boundary condition at 0, \begin{equation} \label{bcat0} y(0) \cos\alpha-y'(0) \sin\alpha=0 \end{equation} where $\alpha \in [0,\pi)$. For $0<b<\infty$, we place another boundary condition at $b$, \begin{equation} \label{bcatb} y(b) \cos\beta+y'(b) \sin\beta=0 \end{equation} with another real number $\beta$ in $[0,\pi)$. Note that $\beta$ is used as a parameter for (\ref{bcatb}). When $b=\infty$, Weyl theory says that, if (\ref{Schop}) is in a limit point case at $\infty$, no boundary condition other than (\ref{bcat0}) is needed. However, if (\ref{Schop}) is in a limit circle case at $\infty$, that is, every solution of (\ref{Se}) is in $L^2(0,\infty)$, then it is necessary to impose a limit type boundary condition at $\infty$, as follows: Put $f(x,z):=u_{\alpha}(x,z)+m(z)v_{\alpha}(x,z)$, where $u_{\alpha}$ and $v_{\alpha}$ are the solutions to (\ref{Se}) satisfying the initial conditions $u_{\alpha}(0,z)=v_{\alpha}'(0,z)=\cos\alpha$ and $-u_{\alpha}'(0,z)=v_{\alpha}(0,z)=\sin\alpha$. Then $m(z)$ is on the limit circle if and only if \begin{equation} \label{bcatblcc} \lim_{N\to\infty} W_N(\bar{f}, f)=0 \end{equation} where $W_N$ is the Wronskian at $N$, that is, $W_N(f,g)=f(N)g'(N)-f'(N)g(N)$, and $\bar{f}$ is the complex conjugate of $f$. As in the case $0<b<\infty$, $\beta$ is used as a parameter for these boundary conditions at $\infty$. See \cite{CodLev,Weid} for more details.
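As a purely numerical aside (our own sketch, not part of the theory above, and with all function names hypothetical): for $0<b<\infty$ the eigenvalues of (\ref{Se}) with the Dirichlet conditions $\alpha=\beta=0$ in (\ref{bcat0}) and (\ref{bcatb}) can be located by a standard shooting method. For $V\equiv 0$ and $b=\pi$ the lowest eigenvalue is $1$.

```python
import math

def shoot(V, lam, b, n=2000):
    """Integrate y'' = (V(x) - lam) * y with y(0) = 0, y'(0) = 1
    by classical RK4 and return y(b)."""
    h = b / n
    x, y, yp = 0.0, 0.0, 1.0
    def f(x, y, yp):
        return yp, (V(x) - lam) * y
    for _ in range(n):
        k1y, k1p = f(x, y, yp)
        k2y, k2p = f(x + h/2, y + h/2 * k1y, yp + h/2 * k1p)
        k3y, k3p = f(x + h/2, y + h/2 * k2y, yp + h/2 * k2p)
        k4y, k4p = f(x + h, y + h * k3y, yp + h * k3p)
        y  += h/6 * (k1y + 2*k2y + 2*k3y + k4y)
        yp += h/6 * (k1p + 2*k2p + 2*k3p + k4p)
        x  += h
    return y

def eigenvalue(V, b, lo, hi, iters=60):
    """Bisect lam on the sign change of y(b): an eigenvalue of
    -y'' + V y = lam y with y(0) = y(b) = 0 lies between lo and hi."""
    flo = shoot(V, lo, b)
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        fmid = shoot(V, mid, b)
        if flo * fmid <= 0:
            hi = mid
        else:
            lo, flo = mid, fmid
    return 0.5 * (lo + hi)

# free case V = 0 on (0, pi): the lowest Dirichlet eigenvalue is 1
lam = eigenvalue(lambda x: 0.0, math.pi, 0.5, 1.5)
```

The same routine works for any continuous $V$, provided the bracketing interval $[lo, hi]$ contains exactly one sign change of $y(b;\lambda)$.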
Then (\ref{Schop}) with (\ref{bcat0}) and possibly either (\ref{bcatb}) or (\ref{bcatblcc}) has a unique $m$ function $m^S_{\alpha, \beta}$, and it can be expressed by \begin{equation} \label{mfnforSe} m^S_{0, \beta}(z)=\frac{\tilde{y}'(0,z)}{\tilde{y}(0,z)}\quad \textrm{or} \quad m^S_{\alpha, \beta}(z)=\begin{pmatrix} \cos\alpha & \sin\alpha \\ -\sin\alpha & \cos\alpha \end{pmatrix} \cdot m^S_{0, \beta}(z) \end{equation} where $\tilde{y}$ is a solution to (\ref{Se}) which is square-integrable near $\infty$ when (\ref{Schop}) is in a limit point case at $b=\infty$, or which satisfies either (\ref{bcatb}) when $0<b<\infty$ or (\ref{bcatblcc}) when (\ref{Se}) is in a limit circle case at $b=\infty$. Here $\cdot$ denotes the action of a 2$\times$2 matrix as a linear fractional transformation (which will be reviewed shortly). For convenience $m_{\alpha,\beta}^S$ are called Schr\"odinger $m$ functions, as mentioned above. They are \textit{Herglotz functions}, that is, they map the upper half plane ${\mathbb C}^+$ holomorphically to itself. See e.g. \cite{LS} for all these properties of $m_{\alpha,\beta}^S$.
Before going further, let us recall the action of linear fractional transformations, based on \cite{Remweyl}. A \textit{linear fractional transformation} is a map of the form \begin{equation*} z\mapsto \frac{az+b}{cz+d} \end{equation*} with $a,b,c,d\in{\mathbb C}$, $ad-bc\neq 0$. This can be expressed very conveniently via matrix notation by \[ A\cdot z=\frac{az+b}{cz+d}, \quad A=\begin{pmatrix} a&b \\ c&d \end{pmatrix}. \] This notation has a natural interpretation: Identify $z\in{\mathbb C}\subset \mathbb{CP}^1$ with its homogeneous coordinates $z=[z:1]$ and apply the matrix $A$ to the vector $(\begin{smallmatrix} z\\1 \end{smallmatrix})$ whose components are these homogeneous coordinates. The image vector $A(\begin{smallmatrix} z\\1 \end{smallmatrix})$ then gives the homogeneous coordinates of the image of $z$ under the linear fractional transformation.
These remarks also show that the mapping \[ A\mapsto \textrm{linear fractional transformation} \] is a group homomorphism between the general linear group $GL(2,{\mathbb C})$ and the non-constant linear fractional transformations, which implies that $\cdot$ can be thought of as the action of linear fractional transformations. The homomorphism property will be used in Section \ref{secScs}. Let us also mention that in (\ref{mfnforSe}) only matrices from the special orthogonal group $SO(2,{\mathbb R})\subset GL(2,{\mathbb C})$ occur.\\
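The homomorphism property is easy to check numerically. The following sketch (plain Python, all names our own) applies a matrix as a linear fractional transformation and verifies that composing two actions agrees with acting by the matrix product:

```python
def moebius(A, z):
    """Act by the 2x2 matrix A = ((a, b), (c, d)) on z as the linear
    fractional transformation z -> (a*z + b)/(c*z + d)."""
    (a, b), (c, d) = A
    return (a * z + b) / (c * z + d)

def matmul(A, B):
    """2x2 matrix product."""
    (a, b), (c, d) = A
    (e, f), (g, h) = B
    return ((a*e + b*g, a*f + b*h), (c*e + d*g, c*f + d*h))

A = ((2, 1j), (1, 3))     # det = 6 - 1j != 0, so A is in GL(2, C)
B = ((0, -1), (1, 0))     # the matrix J; as a transformation: z -> -1/z
z = 0.5 + 2j
lhs = moebius(A, moebius(B, z))   # A . (B . z)
rhs = moebius(matmul(A, B), z)    # (AB) . z
```

Up to floating-point rounding, `lhs` and `rhs` coincide, reflecting that $A\cdot(B\cdot z)=(AB)\cdot z$.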
Even though Schr\"odinger $m$ functions are Herglotz functions, the converse is not true. To see this, note that, by the Herglotz representation, not all Herglotz functions can have the asymptotic behavior which Schr\"odinger $m$ functions necessarily exhibit. Indeed, Everitt \cite{Eve} showed that, when $z\in{\mathbb C}^+$ is large enough, $m_{\alpha,\beta}^S$ satisfies the asymptotics \begin{equation} \label{asymm1} m_{0,\beta}^S(z)=i\sqrt{z}+o(1) \end{equation} for $\alpha=0$, or \begin{equation} \label{asymm2} m_{\alpha,\beta}^S(z)= \frac{\cos\alpha}{\sin\alpha}+\frac1{\sin^2 \alpha}\frac{i}{\sqrt{z}}
+O \big( |z|^{-1} \big) \end{equation} for $\alpha\in (0,\pi)$. See also \cite{Atk,Har} for more developed versions of the asymptotic behavior of $m_{\alpha,\beta}^S$.
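These asymptotics can also be observed numerically. The sketch below (our own construction, not taken from the literature cited here) approximates $m^S_{0,\beta}$ by integrating the Riccati equation $m'(x)=V(x)-z-m(x)^2$, satisfied by $m(x)=\tilde y'(x)/\tilde y(x)$ for solutions of (\ref{Se}), backwards from a large $x=L$. In the limit point case the backward flow is attracted to the correct solution, so even a crude seed at $L$ suffices when $\operatorname{Im} z$ is not small. For $V\equiv 0$ one has $m^S_{0}(z)=i\sqrt z$ exactly, in accordance with (\ref{asymm1}).

```python
import cmath

def weyl_m(V, z, L=10.0, n=4000):
    """Approximate the m function (alpha = 0) of -y'' + V y = z y on
    (0, infinity): integrate m'(x) = V(x) - z - m(x)^2 backwards from
    x = L to x = 0 with classical RK4 (step -h).  The seed at L is
    deliberately wrong; the backward flow corrects it exponentially."""
    h = L / n
    m = 0j
    x = L
    def f(x, m):
        return V(x) - z - m * m
    for _ in range(n):
        k1 = f(x, m)
        k2 = f(x - h/2, m - h/2 * k1)
        k3 = f(x - h/2, m - h/2 * k2)
        k4 = f(x - h, m - h * k3)
        m -= h/6 * (k1 + 2*k2 + 2*k3 + k4)
        x -= h
    return m

z = 1 + 4j
m_free = weyl_m(lambda x: 0.0, z)   # should match i*sqrt(z)
```

With nonzero smooth $V$ the same routine produces values approaching $i\sqrt z$ as $|z|\to\infty$ in ${\mathbb C}^+$, illustrating (\ref{asymm1}).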
Given a Herglotz function $F$, it can be expressed as \begin{equation} \label{Herglotz} F(z)=A+\int_{{\mathbb R}_{\infty}} \frac{1+tz}{t-z} d\rho(t) \end{equation} where $A$ is a real number and $d\rho$ is a finite positive Borel measure on ${\mathbb R}_{\infty}$, the one-point compactification of the set of all real numbers ${\mathbb R}$. (See e.g. (2.1) in \cite{Remac}.) Then (\ref{Herglotz}) shows that a Herglotz function whose measure $d\rho$ has a positive point mass at $\infty$ can satisfy neither (\ref{asymm1}) nor (\ref{asymm2}), and therefore it is not a Schr\"odinger $m$ function. However, for $d\rho$ to be a measure associated with (\ref{Schop}) (a so-called spectral measure), a further issue is the asymptotic behavior of $d\rho$ near $\infty$. See Sections 17 and 19 of \cite{RemdB} for details.\\
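For a concrete toy instance of (\ref{Herglotz}) (with data chosen by us purely for illustration): take $A=1$ and $d\rho$ a sum of two point masses. Each kernel satisfies $\operatorname{Im}\frac{1+tz}{t-z}=(1+t^2)\operatorname{Im}z/|t-z|^2>0$ for $\operatorname{Im}z>0$, so the resulting $F$ maps ${\mathbb C}^+$ to itself, which the snippet checks on a few sample points:

```python
# toy Herglotz function: A = 1, d(rho) = 0.5*delta_0 + 0.5*delta_2
ATOMS = [(0.0, 0.5), (2.0, 0.5)]   # (location t, weight)

def F(z):
    return 1.0 + sum(w * (1 + t * z) / (t - z) for t, w in ATOMS)

# F should map the upper half plane into itself
samples = [0.3 + 0.1j, -5 + 2j, 1 + 10j]
ok = all(F(z).imag > 0 for z in samples)
```

The atom at $t=0$ contributes $-1/(2z)$, a familiar Herglotz building block.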
\subsection{Canonical systems and de Branges theory} To see a general connection between Herglotz functions and differential equations let us consider a half-line canonical system, \begin{equation} \label{cs} Ju'(x,z)=zH(x)u(x,z), \quad x\in(0,\infty) \end{equation}
where $H$ is a positive semidefinite $2\times2$ matrix whose entries are real-valued, locally integrable functions and $J=\big( \begin{smallmatrix} 0 & -1 \\ 1 & 0 \end{smallmatrix} \big)$. A canonical system (\ref{cs}) is called \textit{trace-normed} if $\textrm{Tr }H(x)=1$ for almost all $x$ in $(0,\infty)$. For (\ref{cs}) we always place a boundary condition at 0, \begin{equation} \label{bcat0forcs} u_1(0,z)=0 \end{equation} where $u_1$ is the first component of $u=\big( \begin{smallmatrix} u_1 \\ u_2 \end{smallmatrix} \big)$. Similar to (\ref{mfnforSe}), its $m$ function, $m_H$, can be expressed by \begin{equation} \label{mfnforcs} m_H(z)=\frac{\tilde{u}_2(0)}{\tilde{u}_1(0)} \end{equation} where $\tilde{u}=\big( \begin{smallmatrix} \tilde{u}_1 \\ \tilde{u}_2 \end{smallmatrix} \big)$ is a solution to (\ref{cs}) satisfying \begin{equation} \label{H-int} \int_0^{\infty} \tilde{u}^*(x)H(x)\tilde{u}(x)dx<\infty. \end{equation} Here $^*$ denotes the Hermitian adjoint. Such a solution satisfying (\ref{H-int}) is called \textit{$H$-integrable}. See \cite{Win,Win2} for all these properties of (\ref{cs}).
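For orientation, here is a sketch of the standard (folklore) way a Schr\"odinger equation is rewritten in the form (\ref{cs}); this is only meant as an illustration, not as the precise version used in this paper.

```latex
% Let p, q solve -y'' + Vy = 0 with Wronskian pq' - p'q = 1, and set
T(x)=\begin{pmatrix} p(x) & q(x)\\ p'(x) & q'(x)\end{pmatrix},
\qquad
u(x,z)=T(x)^{-1}\begin{pmatrix} y(x,z)\\ y'(x,z)\end{pmatrix}.
% A short computation shows that y solves -y'' + Vy = zy exactly when
% Ju' = zHu with
H(x)=\begin{pmatrix} p(x)^2 & p(x)q(x)\\ p(x)q(x) & q(x)^2\end{pmatrix}.
% For V \equiv 0 one may take p(x) = 1 and q(x) = x, so that
H(x)=\begin{pmatrix} 1 & x\\ x & x^2 \end{pmatrix};
% this H has rank one; dividing by \operatorname{tr} H(x) = 1 + x^2 and
% reparametrizing accordingly should yield a trace-normed system.
```

Note that $H$ is positive semidefinite with $\det H\equiv 0$, as in the rank-one situation typical of such rewritings.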
Recall that there were three cases when defining Schr\"odinger $m$ functions, and in each case we needed a special solution to formulate the corresponding $m$ function. For (\ref{cs}), however, only an $H$-integrable solution is needed, since (\ref{cs}) is posed on a half-line and a half-line trace-normed canonical system is always in a limit point case at $\infty$. In other words, there is only one $H$-integrable solution up to a multiplicative constant. See the original argument by \cite{deB2} or an alternative proof in \cite{Ach} for more details.\\
De Branges \cite{deB} and Winkler \cite{Win} then showed that, for a given Herglotz function, there exists a unique half-line trace-normed canonical system with (\ref{bcat0forcs}) such that its $m$ function $m_H$ is the given Herglotz function. This one-to-one correspondence is essential later, as it allows us to work with canonical systems rather than with Herglotz functions or their $m$ functions.\\
\section{Main result}\label{secDensity} In this paper we show that the Schr\"odinger $m$ functions are dense in the space of all Herglotz functions or, equivalently, in the space of all $m$ functions $m_H$ of (\ref{cs}), with respect to their natural topology as analytic functions on ${\mathbb C}^+$. This is the content of the following theorem.
\begin{Theorem}\label{Density} The space of Schr\"odinger $m$ functions with some fixed boundary condition at 0 is dense in the space of all Herglotz functions. \end{Theorem}
The above theorem is stronger than what we just said since, as the statement itself shows, the Schr\"odinger $m$ functions \textit{with any fixed boundary condition at 0} are dense in the space of all Herglotz functions. The result can, moreover, be strengthened to Schr\"odinger $m$ functions corresponding to only \textit{smooth} potentials, which will become clear in Section \ref{secpf} after the proof. As an application, the Schr\"odinger $m$ functions with the Dirichlet boundary condition at 0, $m^S_{0,\beta}$, corresponding to smooth potentials are dense in the space of all Herglotz functions.
Due to Theorem \ref{Density} it cannot be expected that Schr\"odinger operators with some fixed boundary condition at 0 converge to some Schr\"odinger operator in the sense that this convergence is equivalent to the uniform convergence of their $m$ functions on compact subsets of ${\mathbb C}^+$. This is one of the reasons why many applications consider only subclasses of Schr\"odinger operators in order to obtain compactness.\\
Remark. It seems very difficult to show Theorem \ref{Density} through $m$ functions directly, even though the measures corresponding to (\ref{Schop}), called Schr\"odinger spectral measures, are dense in the space of all measures in the Herglotz representation with respect to weak-$\ast$ convergence. In (\ref{Herglotz}) we can see that the uniform convergence of Herglotz functions on compact subsets of ${\mathbb C}^+$ is equivalent to both the weak-$*$ convergence of the measures $d\rho$ and the pointwise convergence of the constants $A$. In particular, the weak-$*$ convergence of measures (without the convergence of constants) is not sufficient for the convergence of Herglotz functions.
It turns out that any finite positive Borel measure on ${\mathbb R}_{\infty}$ can be approximated by Schr\"odinger spectral measures in the weak-$*$ sense. Indeed, for any finite positive Borel measure $d\rho$ on ${\mathbb R}_{\infty}$, construct a sequence of measures $d\rho_n$ by \begin{equation*} d\rho_n(t)=\chi_{(-n,n)}(t) d\rho(t)+\chi_{{\mathbb R} \setminus (-n,n)}(t) d\rho_{free}(t)+\rho\{\infty\}\delta_n(t) \end{equation*} where $\delta_n$ is a Dirac measure at $n$ and $d\rho_{free}$ is the spectral measure for (\ref{Schop}) with $V\equiv 0$. In other words, this is a sequence of truncated measures having the tail of $d\rho_{free}$ and a point mass at $n$ with weight $\rho\{\infty\}$, which implies that the $d\rho_n$ are Schr\"odinger spectral measures. Then $d\rho_n\to d\rho$ in the weak-$*$ sense, as $n\to\infty$. The weak-$*$ convergence of the spectral measures $d\rho_n$, however, does not imply the convergence of the $m$ functions associated with $d\rho_n$ in (\ref{Herglotz}). This is because any Schr\"odinger spectral measure determines its $m$ function: the error term in (\ref{asymm1}) or (\ref{asymm2}) is at least $o(1)$, which means that the spectral measures determine the corresponding constants in (\ref{Herglotz}). Therefore it is unclear whether these constants converge to some constant and, even worse, whether they converge to the constant corresponding to a given Herglotz function.\\
\section{Schr\"odinger canonical systems}\label{secScs} It is well known that Schr\"odinger equations can be expressed by some canonical systems (which will be shown later) but the converse is not true. This is the reason why canonical systems are thought of as generalizations of Schr\"odinger equations. For us it is, however, necessary to learn which canonical systems admit Schr\"odinger $m$ functions as their $m$ functions. In this section, let us figure out all the conditions for such canonical systems, called Schr\"odinger canonical systems as before.
\begin{Proposition} \label{Cor} A Schr\"odinger equation (\ref{Se}) with boundary conditions (\ref{bcat0}) and, if necessary, either (\ref{bcatb}) or (\ref{bcatblcc}) can be expressed as the following canonical system such that both have the same Weyl-Titchmarsh $m$ functions: \begin{equation}\label{Sc} J \frac{d}{dt}u(t,z)=z P_{\varphi}(t) u(t,z) , \quad t\in(0,\infty) \end{equation} with \begin{equation} \label{HforS} P_{\varphi}(t):=\begin{pmatrix} \cos^2\varphi(t) & \cos \varphi(t) \sin \varphi(t) \\ \cos \varphi(t) \sin \varphi(t) & \sin^2\varphi(t) \end{pmatrix}. \end{equation} Here a new variable $t$ is defined by \begin{equation} \label{t} t(x)=\int_0^x \big( u_{0}^2(s)+v_{0}^2(s) \big) \textrm{ }ds \end{equation} where $u_{0}$ and $v_{0}$ are the solutions to the given Schr\"odinger equation for $z=0$ with $u_0(0)=v'_0(0)=\cos\alpha$ and $-u'_0(0)=v_0(0)=\sin\alpha$. Put $t_b:= \lim_{x\uparrow b}t(x)$ in $(0,\infty]$. Then $\varphi$ is a strictly increasing function on $(0,t_b)$, which has a locally integrable third derivative on $(0,t_b)$, satisfying three initial conditions \begin{equation}\label{initialcondition}
\varphi(0)=\alpha,\quad \frac{d\varphi}{dt}(0)=1, \textit{ and }\quad \frac{d^2\varphi}{dt^2}(0)=0. \end{equation} If $t_b<\infty$, then $\varphi(t)=\tilde{\beta}$ on $(t_b, \infty)$ for some real number $\tilde{\beta}\in [0,\pi)$.
Conversely, any canonical system (\ref{Sc}) with all the properties of $\varphi$ above can be written as (\ref{Se}) with some locally integrable potential $V$ such that they have the same $m$ function.\\ \end{Proposition}
In short, the proposition reveals that Schr\"odinger equations are exactly the canonical systems whose $H$ in (\ref{cs}) is a projection matrix $P_{\varphi}$, where $\varphi$ is a strictly increasing function on $(0,t_b)$ whose third derivative has the same regularity as the potential $V$, and $\varphi$ behaves like a linear function with slope 1 near 0. If $t_b<\infty$, then $\varphi$ is constant on $(t_b,\infty)$. Moreover, the value $\varphi(0)$ agrees with $\alpha$ in (\ref{bcat0}) up to multiples of $\pi$.
Note that, to have the condition $\varphi(0)=\alpha$ precisely, we assume that $\varphi(0)$ lies in $[0,\pi)$. This is fine because all entries of $P_{\varphi}$ are periodic with period $\pi$, which means that $\varphi$ may be shifted by multiples of $\pi$ freely.
The reason why we obtain a one-to-one correspondence between (\ref{Se}) and (\ref{Sc}) is the requirement that their $m$ functions be the same. Without this restriction, it is possible to connect (\ref{Se}) to infinitely many different trace-normed canonical systems which, of course, have $m$ functions different from each other.
It is well known how to convert (\ref{Se}) to (\ref{cs}) (see e.g. \cite{Achtw} or \cite{RemdB}), and the converse of Proposition \ref{Cor} is a reformulation of proposition 8.1 in \cite{RemdB} in terms of trace-normed canonical systems. A form very similar to (\ref{Sc}) was also discussed in \cite{L&W} and \cite{W&W} to deal with semibounded canonical systems. However, (\ref{Sc}) with (\ref{HforS}) is a specific form which fits Schr\"odinger canonical systems exactly. It is also useful, since $\varphi$ can be thought of as spectral data. For example, since the third derivative of $\varphi$ and $V$ have the same regularity and (\ref{Sc}) is well defined for merely \textit{measurable} $\varphi$, singular potentials may be treated by dropping the regularity assumptions on $\varphi$.
The reason to have projection matrices $P_{\varphi}$ is the asymptotic behavior of solutions to (\ref{Se}). Indeed, de Branges, Krein and Langer \cite{deB2,KL} showed that solutions to (\ref{cs}) belong to the Cartwright class of exponential type $h$ with \begin{equation*} h=\int_0^x \sqrt{\textrm{det }H(t)}dt \end{equation*} for fixed $x$. An entire function $F$ belongs to the \textit{Cartwright class of exponential type $h$} if \begin{equation*}
h:=\limsup_{|z|\to\infty}\frac{\textrm{ln }|F(z)|}{|z|} \textrm{ is finite}, \end{equation*} and \begin{equation*}
\int_{-\infty}^{\infty}\frac{ | \textrm{ln } |F(x)| |}{1+x^2}<\infty. \end{equation*} P\"oschel and Trubowitz \cite{P&T} then showed that solutions to (\ref{Se}) are of order 1/2 as entire functions with respect to $z$ for fixed $x$. See also (4.3) in \cite{RemdB}. In particular, they are of exponential type 0 and so are the solutions to (\ref{cs}) associated with them (by (\ref{connection}) below), which implies that $\textrm{det }H=0$. Since $H$ are symmetric, the two conditions, $\textrm{Tr }H(x)=1$ and $\textrm{det }H(x)=0$ for almost all $x$, indicate that $H$ should be projection matrices $P_{\varphi}$ after some change of variables.\\
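The final step of this argument can be made explicit by a short linear-algebra sketch (parametrizing the unit vector by an angle $\varphi$ is a choice of normalization):

```latex
% A real symmetric positive semidefinite 2x2 matrix with trace 1 and
% determinant 0 has eigenvalues 1 and 0, hence is the orthogonal projection
% onto a one-dimensional subspace. Writing a unit vector spanning this
% subspace as (cos(phi), sin(phi))^t yields H = P_phi:
\[
H(x)
= \begin{pmatrix} \cos\varphi(x) \\ \sin\varphi(x) \end{pmatrix}
  \begin{pmatrix} \cos\varphi(x) & \sin\varphi(x) \end{pmatrix}
= \begin{pmatrix}
    \cos^2\varphi(x) & \cos\varphi(x)\sin\varphi(x) \\
    \cos\varphi(x)\sin\varphi(x) & \sin^2\varphi(x)
  \end{pmatrix}
= P_{\varphi}(x).
\]
```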
Let us now verify Proposition \ref{Cor}. \begin{proof}[Proof of Proposition \ref{Cor}] Let $y$ be a solution to a given Schr\"odinger equation (\ref{Se}). Define $u=u(x,z)=(u_1(x,z), u_2(x,z))^t$ by \begin{equation} \label{connection}
\begin{pmatrix} u_1 \\ u_2 \end{pmatrix}:= \begin{pmatrix} u_0(x) & v_0(x) \\ u_0'(x) & v_0'(x) \end{pmatrix}^{-1}\begin{pmatrix} y \\ y' \end{pmatrix}. \end{equation} Note that this is well defined, since the determinant of the $2\times 2$ matrix in (\ref{connection}) is the Wronskian of $u_0$ and $v_0$ at $x$, $W_x(u_0,v_0)$, which is 1 for all $x$; in particular, this matrix is invertible. Then $u$ solves (\ref{cs}) with \begin{equation*} H_0(x):= \begin{pmatrix} u_0^2(x) & u_0(x)v_0(x) \\ u_0(x)v_0(x) & v_0^2(x) \end{pmatrix}. \end{equation*} This is shown by direct computation, which is left to the reader.
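The direct computation left to the reader can be sketched as follows; it uses $y''=(V-z)y$, the equations $u_0''=Vu_0$, $v_0''=Vv_0$, and $W_x(u_0,v_0)=1$:

```latex
% Differentiating (y, y')^t = T u with T = (u_0, v_0; u_0', v_0') gives
% T u' = (y', y'')^t - T' u = (0, -z y)^t, since T' u = (y', V y)^t.
% Inverting T (det T = W(u_0, v_0) = 1) yields
\[
\begin{pmatrix} u_1' \\ u_2' \end{pmatrix}
= \begin{pmatrix} v_0' & -v_0 \\ -u_0' & u_0 \end{pmatrix}
  \begin{pmatrix} 0 \\ -zy \end{pmatrix}
= zy \begin{pmatrix} v_0 \\ -u_0 \end{pmatrix}
= -zJ \begin{pmatrix} u_0\,y \\ v_0\,y \end{pmatrix}
= -zJH_0(x) \begin{pmatrix} u_1 \\ u_2 \end{pmatrix},
\]
% where y = u_0 u_1 + v_0 u_2 was used in the last equality.
% Multiplying by J and using J(-J) = I gives Ju' = z H_0 u, i.e. (cs) with H_0.
```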
The matrix $H_0(x)$ may not be trace-normed, and it should be changed to a trace-normed matrix in order to apply de Branges theory. For this we perform a change of variable as follows: define $R$ and $\varphi$ through \begin{equation} \label{defofR}
u_0(x)+iv_0(x):=R(x) (\cos\varphi(x)+i\sin\varphi(x)) \end{equation}
and a new variable $t$ by (\ref{t}). Then $u(t,z)$ solves (\ref{Sc}) with (\ref{HforS}), but only on $(0,t_b)$.\\
Let us investigate all the conditions of $\varphi$ in the proposition. Direct computation with (\ref{defofR}) shows the key relation \begin{equation} \label{Wron}
(1=) \textrm{ } W(u_0,v_0)|_{x}=R^2(x)\varphi'(x), \quad x\in(0,b), \end{equation} which tells us that $\varphi$ is strictly increasing on $(0,b)$ with respect to $x$. Due to the three equalities $u_0''=Vu_0$, $v_0''=Vv_0$ and $u^2_0+v^2_0=R^2$, the functions $V$, $u_0''$, $v_0''$, $R''$ and $\varphi'''$ are locally integrable. See also (\ref{formulaforV}) below. This follows from two facts: the derivatives of solutions to (\ref{Se}) are absolutely continuous when $V$ is locally integrable, and two linearly independent solutions cannot vanish at the same point. The initial values of $u_0$ and $v_0$ can also be translated into the conditions $\varphi(0)=\alpha$, $R(0)=1$, and $R'(0)=0$, or equivalently, $\varphi(0)=\alpha$, $\varphi'(0)=1$, and $\varphi''(0)=0$ by direct computation. Note that $\varphi$ is normalized by the condition $\varphi(0)\in[0,\pi)$, as mentioned.
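For completeness, here is the computation behind (\ref{Wron}); it follows directly from $u_0=R\cos\varphi$ and $v_0=R\sin\varphi$:

```latex
\begin{align*}
W_x(u_0,v_0) &= u_0 v_0' - u_0' v_0 \\
&= R\cos\varphi\,(R'\sin\varphi + R\varphi'\cos\varphi)
 - (R'\cos\varphi - R\varphi'\sin\varphi)\,R\sin\varphi \\
&= R^2\varphi'\,(\cos^2\varphi + \sin^2\varphi)
 = R^2(x)\,\varphi'(x).
\end{align*}
```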
So far all the conditions on $\varphi$ have been verified with respect to $x$. Since $dt/dx=u^2_0+v^2_0=R^2$ and $R$ never vanishes, these conditions can be converted to conditions in the new variable $t$. In other words, it can be shown that, as a function of $t$, $\varphi$ is a strictly increasing function on $(0,t_b)$ satisfying (\ref{initialcondition}) whose third derivative is locally integrable. The details are left to the reader.\\
Note that $t_b<\infty$ precisely if $b<\infty$ or (\ref{Se}) is in a limit circle case at $b=\infty$, since all solutions to (\ref{Se}) are in $L^2(0,b)$ in these two cases. When $t_b<\infty$, it is not difficult to see that the boundary condition (\ref{bcatb}) or (\ref{bcatblcc}) can be converted to a similar one \begin{equation}\label{bcforcs} u_1(t_b) \cos(\tilde{\beta})+u_2(t_b) \sin(\tilde{\beta})=0 \end{equation} for $u$ with another number $\tilde{\beta}\in [0,\pi)$. To obtain a \textit{half-line} canonical system we change (\ref{bcforcs}) to a singular interval $(t_b, \infty)$ of type $\tilde{\beta}$, in other words, $H=P_{\tilde{\beta}}$ on this interval, where $P_{\tilde{\beta}}$ is (\ref{HforS}) with $\tilde{\beta}$ instead of $\varphi(t)$.
Observe that, for all $t\in[t_b,\infty)$, \begin{equation}\label{trivialext} u(t)=u(t_b) \end{equation} and \begin{equation}\label{te} u^*(t)P_{\tilde{\beta}}\textrm{ }u(t)=0. \end{equation} Indeed, if $u$ satisfies (\ref{bcforcs}), $P_{\tilde{\beta}}\textrm{ }u(t_b)$ is the $2\times 1$ zero matrix. With this, since $I_{2}-zLJP_{\tilde{\beta}}$ is the transfer matrix on $(t_b,\infty)$ for a nonnegative number L (see e.g. section 10 in \cite{RemdB}), where $I_2$ is the $2\times2$ identity matrix, we have that \begin{equation*} u(t_b+L)=(I_2-zLJP_{\tilde{\beta}})u(t_b)=u(t_b), \end{equation*} which implies (\ref{trivialext}) and (\ref{te}). All this means that, if a solution $u$ satisfies (\ref{bcforcs}), then it can be trivially extended to $[t_b,\infty)$ such that \begin{equation}\label{extofu} \int_{t_b}^{\infty} u^*(t)P_{\tilde{\beta}}u(t) dt=0. \end{equation}\\
So far we have constructed (\ref{Sc}) with (\ref{HforS}) as its $H$. It remains to show that (\ref{Se}) and (\ref{Sc}) have the same $m$ function. To see this let us compare the solutions which were used to define their $m$ functions. Recall $\tilde{y}$ in (\ref{mfnforSe}), that is, $\tilde{y}$ is a solution to (\ref{Se}) which is square-integrable near $b=\infty$ or satisfies either (\ref{bcatb}) or (\ref{bcatblcc}), and let $\tilde{u}$ be the solution to (\ref{Sc}) corresponding to $\tilde{y}$ through (\ref{connection}). Then either $\tilde{u}$ or its trivial extension (again denoted by $\tilde{u}$) through (\ref{trivialext}) is $H$-integrable with $H=P_{\varphi}$. More precisely, since $\tilde{y} \in L^2[0,b)$, that is, \begin{equation*} \int_0^{b} \tilde{y}(x)^{*}\tilde{y}(x) dx<\infty, \end{equation*} by (\ref{connection}), the above $L^2$-condition is equivalent to the $H_0$-integrability on $(0,b)$ only, i.e., \begin{equation*} \int_0^{b} \tilde{u}(x)^*H_0(x) \tilde{u}(x) dx<\infty. \end{equation*}
The change of variable to the new variable $t$ then gives us the condition with respect to $t$ \begin{equation*} \int_0^{t_b} \tilde{u}(t)^*P_{\varphi}(t) \tilde{u}(t) dt<\infty. \end{equation*} If $t_b=\infty$, that is, (\ref{Se}) is in a limit point case at $\infty$, then $\tilde{u}$ is trivially $P_{\varphi}$-integrable. When $t_b<\infty$, the trivial extension of $\tilde{u}$ is $P_{\varphi}$-integrable due to (\ref{extofu}). In other words, since $\tilde{u}$ satisfies (\ref{bcforcs}), its trivial extension $\tilde{u}$ by (\ref{trivialext}) to $(0,\infty)$ is $P_{\varphi}$-integrable.
Let us now compare their $m$ functions. Recall that $m^S_{\alpha, \beta}$ are the $m$ functions of (\ref{Se}) with (\ref{bcat0}) and, if necessary, either (\ref{bcatb}) or (\ref{bcatblcc}), and $m_H$ are the $m$ functions of (\ref{Sc}) with $H=P_{\varphi}$. By (\ref{mfnforSe}), (\ref{mfnforcs}) and (\ref{connection}), we then see that \begin{eqnarray*} m_H(z)=m_{P_{\varphi}}(z) &=& \frac{\tilde{u}_2(0)}{\tilde{u}_1(0)}\\ &=& \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix} \cdot \frac{\tilde{u}_1(0)}{\tilde{u}_2(0)}\\ &=& \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix} \begin{pmatrix} u_0(0) & v_0(0) \\ u_0'(0) & v_0'(0) \end{pmatrix}^{-1} \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix} \cdot \frac{\tilde{y}'(0,z)}{\tilde{y}(0,z)}\\ &=& \begin{pmatrix} \cos\alpha & \sin\alpha \\ -\sin\alpha & \cos\alpha \end{pmatrix} \cdot \frac{\tilde{y}'(0,z)}{\tilde{y}(0,z)}\\ &=& m^S_{\alpha, \beta}(z) \end{eqnarray*} where $\cdot$ is the action of a 2$\times$2 matrix as a linear fractional transformation, which was reviewed in Section \ref{secpre}. Therefore (\ref{Sc}) has been constructed from (\ref{Se}), as desired.\\
For the converse, let us go through the previous process in reverse. Assume that (\ref{Sc}) with (\ref{HforS}) is given such that $\varphi$ has all the properties in Proposition \ref{Cor}. If $\varphi$ is constant on an unbounded interval $(c,\infty)$ for some number $c$, it is possible to place a suitable boundary condition at $c$, similar to (\ref{bcforcs}). When $\varphi$ is strictly increasing on $(0,\infty)$, put $c=\infty$, which implies that the corresponding Schr\"odinger operator (\ref{Schop}) will be in a limit point case at $\infty$.
Since $\frac{d\varphi}{dt}>0$ on $(0,c)$, let us introduce a variable $x$ by \begin{equation*} x(t)=\int_0^t \left[ \frac{d}{dt}\varphi(s) \right]^{1/2} ds \end{equation*} on $(0,x_c)$, where $x_c:=\lim_{t\uparrow c} x(t)$. By putting $R(x):= \left[ \frac{d}{dt} \varphi(t(x)) \right] ^{-1/4}$ we can see that $t'(x)=R^2(x)$ and $R^2(x)\varphi'(x)=1$ (here $'$ means $\frac{d}{dx}$). Define $u_0$ and $v_0$ by $u_0(x)=R(x) \cos\varphi(x)$ and $v_{0}(x)= R(x)\sin \varphi(x)$. Direct computation then shows that $u$ satisfies (\ref{cs}) with $H_0(x)$ and that $R^2(x)\varphi'(x)=W_x(u_0,v_0)$ for all $x$. The nonzero constant Wronskian, moreover, allows us to define $y$ through (\ref{connection}). Then $y$ satisfies (\ref{Se}) with the potential $V$ \begin{equation} \label{formulaforV} V=\frac7{16}\frac{(\varphi''(x))^2}{(\varphi'(x))^3}-\frac14\frac{\varphi'''(x)}{(\varphi'(x))^2}-\varphi'(x)\quad \Big(=\frac{R''}{R}-\frac1{R^4} \Big) \end{equation} by direct computation, which is left to the reader.
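The computation left to the reader can be sketched, for instance, by verifying the second expression $V=\frac{R''}{R}-\frac1{R^4}$ directly; primes denote $\frac{d}{dx}$, and $\varphi'(x)=\frac1{R^2(x)}$ is used:

```latex
% With u_0 = R cos(phi) and phi' = 1/R^2:
\begin{align*}
u_0' &= R'\cos\varphi - R\varphi'\sin\varphi
      = R'\cos\varphi - \frac{\sin\varphi}{R},\\
u_0'' &= R''\cos\varphi - \frac{R'\sin\varphi}{R^2}
       + \frac{R'\sin\varphi}{R^2} - \frac{\cos\varphi}{R^3}
      = \Big(\frac{R''}{R} - \frac1{R^4}\Big) u_0 ,
\end{align*}
% and the same computation with v_0 = R sin(phi) gives
% v_0'' = (R''/R - 1/R^4) v_0, so both solve -y'' + V y = 0 (the case z = 0)
% with V = R''/R - 1/R^4.
```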
As in the previous argument, it is possible to compare the solutions and then to show that the $m$ functions are the same. This completes the proof of Proposition \ref{Cor}.\\ \end{proof}
\section{Topology on canonical systems}\label{sectop} The last preparation is to construct a topology on the set of trace-normed canonical systems (\ref{cs}) which interacts well with the convergence of their $m$ functions. Let ${\mathbb V}_+$ be the set of the matrices $H$ of trace-normed canonical systems, that is, \begin{equation*} {\mathbb V}_+=\{ H \textrm{ in } (\ref{cs}): \textrm{ Tr }H(x)= 1 \textrm{ for almost all } x\in (0,\infty) \}. \end{equation*} Recall that $H$ is a positive semidefinite $2\times2$ matrix whose entries are real-valued, locally integrable functions. Let us say that \textit{$H_n$ converges to $H$ weak-$*$} if \begin{equation} \label{weak*conv} \int_0^{\infty}f^*H_nf\to\int_0^{\infty}f^*Hf \end{equation} for all continuous functions $f=(f_1,f_2)^t$ with compact support in $[0,\infty)$, as $n\to\infty$. Observe that, for such a given function $f$, the two convergences \begin{equation*} \int_0^{\infty}f^*H_nf\to\int_0^{\infty}f^*Hf \quad \textrm{and } \int_0^{\infty}H_nf\to\int_0^{\infty}Hf \end{equation*} are equivalent.\\
By an argument similar to that in section 2 of \cite{Remcont}, we briefly show that ${\mathbb V}_+$ is a compact metric space. First we define a metric on ${\mathbb V}_+$: pick a countable subset $\{ f_n: n\in{\mathbb N} \}$ of continuous functions of compact support which is dense with respect to $|| \cdot ||_{\infty}$, and put \begin{equation*}
d_n(H_1, H_2):= \Big| \int_{(0,\infty)} f_n^*(x) (H_1-H_2)(x) f_n(x)\textrm{ } dx \textrm{ } \Big|. \end{equation*} Then define a metric $d$ as \begin{equation*} d(H_1, H_2):=\sum_{n=1}^{\infty} 2^{-n} \frac{d_n(H_1, H_2)}{1+d_n(H_1, H_2)}. \end{equation*} Clearly, $d(H_n,H)\to 0$ if and only if $H_n$ converges to $H$ weak-$*$, as $n\to\infty$. To show that $({\mathbb V}_+,d)$ is compact, let us choose a sequence $H_n$ in ${\mathbb V}_+$. By the Banach-Alaoglu Theorem (on finite intervals $[0,L]$ for some positive numbers $L$) and a diagonal process (for the half line $[0,\infty)$) it is possible to find a subsequence $H_{n_j}$ with the property that the measures $H_{n_j}(t)dt$ converge to some matrix-valued measure $d\mu$ in the weak-$*$ sense. The proof can now be completed by noting that the trace-normed condition, $\textrm{ Tr }H(x)= 1$, is preserved in the limiting process, which implies that the limit measure $d\mu$ is absolutely continuous with respect to the Lebesgue measure and it can be expressed by $H(t)dt$ for some $H$ in ${\mathbb V}_+$.\\
The topology on ${\mathbb V}_+$ above is compatible with the convergence of $m$ functions by the following proposition. \begin{Proposition}\label{CovofH} The map from ${\mathbb V}_+$ to $\overline{\mathbb H}$, defined by $H\mapsto m_H$, is a homeomorphism, where $\mathbb H$ is the set of all (genuine) Herglotz functions and $\overline{\mathbb H}=\mathbb H \cup{\mathbb R}\cup \{ \infty \}$. \end{Proposition} It is well known that $\overline{\mathbb H}$ is compact with respect to uniform convergence on compact subsets of ${\mathbb C}^+$, which is the natural topology for Herglotz functions as analytic functions on ${\mathbb C}^+$. As discussed in Section \ref{secpre}, this map is a bijection by de Branges \cite{deB} and Winkler \cite{Win}. Since ${\mathbb V}_+$ is compact, it suffices to show that this map is continuous. Roughly speaking, this map should be continuous because of Weyl theory and the fact that $\textrm{Tr }H=1$ implies that (\ref{cs}) is in a limit point case at $\infty$. In other words, $H$ on $(0,L)$ for a sufficiently large number $L>0$ almost determines its $m$ function $m_H$.
So as not to interrupt the main line of the argument, the proof of Proposition \ref{CovofH}, which is quite long, is postponed to Appendix A; the heuristic above should make the statement plausible.\\
Let us also mention that a convergence for $H$ equivalent to (\ref{weak*conv}) was discussed in \cite{deB2}. More precisely, de Branges showed that, as $n\to\infty$, the convergence $m_{H_n}(z)\to m_{H}(z)$ holds locally uniformly on ${\mathbb C}^+$ if and only if \begin{equation}\label{lucforH} \int_0^x H_n(t) dt \to \int_0^x H(t)dt \quad \textrm{locally uniformly for } x\in [0,\infty) \end{equation} (see also proposition 3.2 in \cite{L&W}). In de Branges' version, the $H_n$ do not need to be trace-normed, but the local uniform convergence is required in exchange. Note that, due to the trace-normed condition, the weak-$\ast$ convergence in (\ref{weak*conv}) implies the local uniform convergence in (\ref{lucforH}), which shows that the two convergences are equivalent. In this paper only (\ref{weak*conv}) will be used, namely when proving Proposition \ref{CovofH}.\\
\section{Proof of Theorem \ref{Density}}\label{secpf} In this section we prove Theorem \ref{Density}. Based on the discussion (or the box) in the introduction, almost all the pieces are already in our hands from the previous sections. What remains is to construct Schr\"odinger canonical systems which converge, in the topology on canonical systems from the previous section, to the trace-normed canonical system whose $m$ function is a given Herglotz function.
To do this, observe that, since any symmetric matrix can be expressed as a sum of projections by the spectral theorem, $H$ is, at least locally and on average, a sum of projection matrices. In other words, $H$ is $P_{\varphi}$ (which is (\ref{HforS})) with a nondecreasing step function $\varphi$ in a locally averaged sense (which will become clear). Then we approximate such $\varphi$ by strictly increasing smooth functions in the $L^1$-sense. Thanks to this $L^1$-approximation, $\alpha$ in (\ref{bcat0}) can be chosen freely.
\begin{proof}[Proof of Theorem \ref{Density}] Choose any function from $\overline{\mathbb H}$. By de Branges \cite{deB2} and Winkler \cite{Win}, there is a unique matrix $H$ in ${\mathbb V}_+$ such that the corresponding $m$ function $m_H$ is the given Herglotz function.\\
Let us first approximate $H$ by $H_n$ such that they are projection matrices $P_{\varphi_n}$ whose $\varphi_n$ are nondecreasing step functions. Given $n\in{\mathbb N}$ put $I_{j,n}:=[\frac{j}{2^n},\frac{j+1}{2^n})$ and $H_{j,n}:=2^n\int_{I_{j,n}}H(x)\textrm{ }dx$, where $j=0,1,2,\cdots$. Then $H_{j,n}$ are constant, positive semidefinite $2\times2$ matrices with $\textrm{Tr }H_{j,n}=1$. Since $H_{j,n}$ are symmetric, by the spectral theorem, there are some real numbers $\varphi_{j,n}$ such that \begin{equation*} H_{j,n}=\lambda_{j,n}P_{\varphi_{j,n}}+(1-\lambda_{j,n})P_{\varphi_{j,n}+\frac{\pi}2} \end{equation*} where $\lambda_{j,n}$ are eigenvalues of $H_{j,n}$ and $P_{\varphi_{j,n}}$ are orthogonal projections onto the eigenspaces for $\lambda_{j,n}$. The projections for the other eigenvalues are $P_{\varphi_{j,n}+\frac{\pi}2}$ because of the orthogonality of eigenspaces of two eigenvalues. If there is only one eigenvalue $\lambda_{j,n}$, it is possible to choose two orthogonal vectors in its eigenspace, since its multiplicity is two.\\
Construct $\varphi_n$ by \begin{equation*} \varphi_n(x) := \begin{cases} \varphi_{j,n} & \quad x\in [\frac{j}{2^n}, \frac{j+\lambda_{j,n}}{2^n}) \\ \varphi_{j,n}+\frac{\pi}2 & \quad x\in [\frac{j+\lambda_{j,n}}{2^n},\frac{j+1}{2^n}) \end{cases} \end{equation*} in such a way that $\varphi_{j+1,n}\ge \varphi_{j,n}+\frac{\pi}2$ for all $j$. Indeed, if $\varphi_{j+1,n}< \varphi_{j,n}+\frac{\pi}2$ for some $j$, then add the smallest multiple of $\pi$ to $\varphi_{j+1,n}$ in order to make $\varphi_n$ nondecreasing. Do this process from $j=0$ inductively, and denote new values by $\varphi_{j+1,n}$ again for convenience. This is fine because we later deal with only three quantities $\cos^2\varphi_n$, $\sin^2\varphi_n$ and $\cos\varphi_n\sin\varphi_n$ in (\ref{HforS}) which are periodic with the period $\pi$. Then $\varphi_n$ are nondecreasing step functions. See Figure 1 below.\\
\begin{tikzpicture}[scale=0.65] \draw[->] (0,0) -- (8.5,0) node[anchor=north] { \large{$x$}}; \draw (0,0) node[anchor=north] {0}
(1.9,0) node[anchor=north] {$\frac{\lambda_{0,n}}{2^n}$}
(3.5,0) node[anchor=north] {$\frac1{2^n}$}
(5.5,0) node[anchor=north] {$\frac{1+\lambda_{1,n}}{2^n}$}
(7,0) node[anchor=north] {$\frac2{2^n}$}
(0,1.3) node[anchor=east] {$\varphi_{0,n}$}
(0,2.3) node[anchor=east] {$\varphi_{0,n}$+$\frac{\pi}2$}
(0,3.5) node[anchor=east] {$\varphi_{1,n}$}
(0,4.5) node[anchor=east] {$\varphi_{1,n}$+$\frac{\pi}2$}; \draw[->] (0,0) -- (0,6) node[anchor=east] {\large{$\varphi_n$}}; \draw[dotted] (1.9,0) -- (1.9,6)
(3.5,0) -- (3.5,6)
(5.5,0) -- (5.5,6)
(7,0) -- (7,6); \draw[very thick] (0,1.3) -- (1.9,1.3)
(1.9,2.3) -- (3.5,2.3)
(3.5,3.5) -- (5.5,3.5)
(5.5,4.5) -- (7,4.5)
(7,5.7) -- (7.9,5.7);
\draw
(-0.03,2.3) -- (0.04,2.3)
(-0.03,3.5) -- (0.04,3.5)
(-0.03,4.5) -- (0.04,4.5); \draw (4.2,-.9) node[anchor=north] {\textbf{Figure 1. Step functions $\varphi_n$}}; \end{tikzpicture}
\noindent Put $H_n:=P_{\varphi_n}$. The definition of $\varphi_n$ then indicates that, for given $n_0\in{\mathbb N}$, \begin{equation} \label{sameaverage} \int_{I_{j,n_0}}H_{n}=\int_{I_{j,n_0}}H \end{equation} for all $n\ge n_0$.
We next prove that $H_n$ converges weak-$\ast$ to $H$ in the sense of (\ref{weak*conv}). Let $f$ be a continuous (vector-valued) function with support contained in $[0,L]$ for some positive number $L$. By the Lebesgue lemma, for given $\epsilon>0$, there are numbers $M$ and $n_0$ and vectors $f_{j,n_0}$ such that \begin{equation*}
\| f(x) \| \leq M \quad \textrm{ for all } x\in[0,L] \end{equation*}
and \begin{equation*}
\| f(x)-f_{j,n_0} \| \leq \frac{\epsilon}{2ML} \quad \textrm{ for all } x\in I_{j,n_0}. \end{equation*} Let us estimate the contribution of $H_n-H$ on each small interval. First decompose it by \begin{eqnarray*} \int_{I_{j,n_0}}f^*(H_n-H) f &=& \int_{I_{j,n_0}}(f-f_{j,n_0})^* (H_n-H) f \\
&+ & \int_{I_{j,n_0}}f_{j,n_0}^* (H_n-H) (f-f_{j,n_0}) \\
&+& \int_{I_{j,n_0}}f_{j,n_0}^*(H_n-H) f_{j,n_0}. \end{eqnarray*}
By (\ref{sameaverage}) the third integral is zero for all $n\ge n_0$. Since the operator norm of $H_n-H$ is bounded by 2 (each $H$ is bounded by 1 due to $\textrm{Tr }H=1$ and the positive semidefiniteness of $H$), the absolute values of the first and second integrals are bounded by $\frac{\epsilon}{L2^{n_0}}$. Hence the quantity $\big| \int_0^\infty f^*(H_n-H)f \big|$ is less than $2\epsilon$. Since $\epsilon$ is arbitrary, $H_n$ converges to $H$ weak-$\ast$.\\
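The arithmetic behind these bounds can be sketched as follows (with $|I_{j,n_0}|=2^{-n_0}$ and roughly $L2^{n_0}$ intervals $I_{j,n_0}$ meeting $[0,L]$):

```latex
\[
\Big| \int_{I_{j,n_0}} (f-f_{j,n_0})^*\,(H_n-H)\,f \Big|
\;\le\; \frac{\epsilon}{2ML}\cdot 2\cdot M\cdot 2^{-n_0}
\;=\; \frac{\epsilon}{L2^{n_0}},
\]
and similarly for the second integral, so summing over the intervals meeting
the support of $f$ gives
\[
\Big| \int_0^{\infty} f^*(H_n-H)f \Big|
\;\le\; L2^{n_0}\cdot \frac{2\epsilon}{L2^{n_0}} \;=\; 2\epsilon .
\]
```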
We have so far constructed nondecreasing step functions $\varphi_n$ such that $H_n$ ($=P_{\varphi_n}$) converges to $H$ weak-$*$. These $\varphi_n$ are, however, not the ones corresponding to some Schr\"odinger equations, since $\varphi_n$ are not differentiable, not linear with slope 1 near $0$, and not strictly increasing.
To overcome this, for each $n$ let us construct new functions $\tilde{\varphi}_{m,n}$ in the following way. For convenience the subscript $n$ is dropped, so $\varphi_n$ and $\tilde{\varphi}_{m,n}$ are denoted by $\varphi$ and $\tilde{\varphi}_m$, respectively. Assume that all the steps of the graph of $\varphi$ are bounded. This is justified because, if an unbounded step exists, it must be the last step and it can be considered as a singular interval. This singular interval can then be converted to a boundary condition at the starting point of the unbounded interval, as discussed in the proof of Proposition \ref{Cor}.
Except for the first step, all steps of $\varphi$ are approximated by piecewise linear, strictly increasing, continuous functions $\tilde{\varphi}_m$, so that $\tilde{\varphi}_m$ converges to $\varphi$ in the $L^1$-sense. See the thick piecewise linear function in Figure 2 below. For the first (bounded) step, assume that $\varphi(0) > \alpha$, where $\alpha$ is chosen freely in (\ref{bcat0}). This is possible since $\varphi$ may be shifted up by $\pi$ at no cost. Then let $\tilde{\varphi}_m$ start at $(0, \alpha)$ and move up linearly with slope 1 for a short while. Treat the remaining part of the first step in the same way as the other steps. See Figure 2 below. Then the $\tilde{\varphi}_m$ are strictly increasing, piecewise linear, continuous functions with $\tilde{\varphi}_m(0)=\alpha$ which are linear with slope 1 near 0. The problem is, however, that they may not be differentiable.\\
\begin{tikzpicture}[scale=0.65] \draw[->] (0,0) -- (8.5,0) node[anchor=north] { \large{$x$}}; \draw (0,0) node[anchor=north] {0}
(1.9,0) node[anchor=north] {$\frac{\lambda_{0,n}}{2^n}$}
(3.5,0) node[anchor=north] {$\frac1{2^n}$}
(5.5,0) node[anchor=north] {$\frac{1+\lambda_{1,n}}{2^n}$}
(7,0) node[anchor=north] {$\frac2{2^n}$}
(0,0.5) node[anchor=east] {$\alpha$}
(0,1.3) node[anchor=east] {$\varphi_{0,n}$}
(0,2.3) node[anchor=east] {$\varphi_{0,n}$+$\frac{\pi}2$}
(0,3.5) node[anchor=east] {$\varphi_{1,n}$}
(0,4.5) node[anchor=east] {$\varphi_{1,n}$+$\frac{\pi}2$}; \draw[->] (0,0) -- (0,6) node[anchor=east] {\large{$\tilde\varphi_m$}}; \draw[dotted] (0.15,0) -- (0.15,6)
(1.9,0) -- (1.9,6)
(3.5,0) -- (3.5,6)
(5.5,0) -- (5.5,6)
(7,0) -- (7,6); \draw[thin] (0,1.3) -- (1.9,1.3)
(1.9,2.3) -- (3.5,2.3)
(3.5,3.5) -- (5.5,3.5)
(5.5,4.5) -- (7,4.5)
(7,5.7) -- (7.9,5.7); \draw (-0.03,0.5) -- (0.04,0.5)
(-0.03,2.3) -- (0.04,2.3)
(-0.03,3.5) -- (0.04,3.5)
(-0.03,4.5) -- (0.04,4.5);
\draw[very thick] (0,0.5) -- (0.15,0.65) -- (0.25,1.2) -- (1.8,1.4) -- (2.0,2.2)
-- (3.4,2.4) -- (3.6,3.4) -- (5.4,3.6) -- (5.6,4.4) -- (6.9,4.6) -- (7.1,5.6) -- (7.9,5.72); \draw (4.2,-0.9) node[anchor=north] {\textbf{Figure 2. Piecewise linear functions}}; \end{tikzpicture}
Mollifiers then enable us to make $\tilde{\varphi}_m$ smooth functions (and denote them by $\tilde\varphi_m$ again). Finally, the constructed functions $\tilde{\varphi}_m$ are smooth, strictly increasing functions which are linear with slope 1 near 0. This means that $\tilde{\varphi}_m$ correspond to some Schr\"odinger equations by Proposition \ref{Cor}. See Figure 3 below.
\begin{tikzpicture}[scale=0.65] \draw[->] (0,0) -- (8.5,0) node[anchor=north] { \large{$x$}}; \draw (0,0) node[anchor=north] {0}
(1.9,0) node[anchor=north] {$\frac{\lambda_{0,n}}{2^n}$}
(3.5,0) node[anchor=north] {$\frac1{2^n}$}
(5.5,0) node[anchor=north] {$\frac{1+\lambda_{1,n}}{2^n}$}
(7,0) node[anchor=north] {$\frac2{2^n}$}
(0,0.5) node[anchor=east] {$\alpha$}
(0,1.3) node[anchor=east] {$\varphi_{0,n}$}
(0,2.3) node[anchor=east] {$\varphi_{0,n}$+$\frac{\pi}2$}
(0,3.5) node[anchor=east] {$\varphi_{1,n}$}
(0,4.5) node[anchor=east] {$\varphi_{1,n}$+$\frac{\pi}2$}; \draw[->] (0,0) -- (0,6) node[anchor=east] {\large{$\tilde{\varphi}_m$}}; \draw[dotted]
(1.9,0) -- (1.9,6)
(3.5,0) -- (3.5,6)
(5.5,0) -- (5.5,6)
(7,0) -- (7,6); \draw[thin] (0,1.3) -- (1.9,1.3)
(1.9,2.3) -- (3.5,2.3)
(3.5,3.5) -- (5.5,3.5)
(5.5,4.5) -- (7,4.5)
(7,5.7) -- (7.9,5.7); \draw (-0.03,0.5) -- (0.04,0.5)
(-0.03,2.3) -- (0.04,2.3)
(-0.03,3.5) -- (0.04,3.5)
(-0.03,4.5) -- (0.04,4.5);
\draw[very thick, rounded corners] (0,0.5) -- (0.15,0.65) -- (0.25,1.2) -- (1.8,1.4) -- (2.0,2.2)
-- (3.4,2.4) -- (3.6,3.4) -- (5.4,3.6) -- (5.6,4.4) -- (6.9,4.6) -- (7.1,5.6) -- (7.9,5.72); \draw (4.2,-0.9) node[anchor=north] {\textbf{Figure 3. Smooth functions}}; \end{tikzpicture}\\
Let us now restore the subscript $n$ which was dropped. In the above construction, observe that \begin{equation} \label{L^1conv}
\| \tilde{\varphi}_{m,n}-\varphi_n \|_{L^1}\to 0 \end{equation} as $m\to\infty$. Since sine and cosine are uniformly continuous, (\ref{L^1conv}) implies the weak-$*$ convergence of $\tilde{H}_{m,n}$ to $H_n$, where $\tilde{H}_{m,n}=P_{\tilde{\varphi}_{m,n}}$. Since $H_n$ converges to $H$ weak-$*$, Proposition \ref{CovofH} shows that $m_{\tilde{H}_{m,n}}$ converges to $m_H$ uniformly on compact subsets of ${\mathbb C}^+$ for suitably chosen $n$ and $m$. This proves Theorem \ref{Density}, since the $m_{\tilde{H}_{m,n}}$ are Schr\"odinger $m$ functions which converge to the given Herglotz function, as desired. \end{proof}
Remark. As discussed in Sections \ref{secintro} and \ref{secDensity}, what has been shown is stronger than the main theorem: the proof above shows that the Schr\"odinger $m$ functions with \textit{fixed $\alpha$} and \textit{smooth} potentials are dense in the set of all Herglotz functions, since $\tilde{\varphi}_{m,n}(0)=\alpha$ in all cases and the $\tilde{\varphi}_{m,n}$ are all smooth, so that the corresponding potentials are smooth by Proposition \ref{Cor}.\\
\appendix \section{Proof of Proposition \ref{CovofH}} To be self-contained, we prove Proposition \ref{CovofH} by following the argument in \cite{Achrt}. However, it is necessary to extend that argument from $\mathbb H$ to $\overline{\mathbb H}$, especially in order to deal with $\infty$. The cost of this extension is that the proof becomes much longer than the one in \cite{Achrt}.\\
Let us start with the following lemma. \begin{Lemma} \label{convergenceforH} Assume that $H_n$ converges to $H$ weak-$*$ in the sense of (\ref{weak*conv}), as $n\to\infty$, and let $u_n$ be solutions to (\ref{cs}) with $H_n$ such that the initial values $u_n(0)$ are the same for all $n$. Then the sequence $u_n$ has a subsequence which converges uniformly on compact subsets of the half line $[0,\infty)$. Moreover, if $u$ is such a limit, then $u$ satisfies (\ref{cs}) with $H$. \end{Lemma} \begin{proof}
Let us first show that the $u_n$ are uniformly bounded on bounded intervals in $[0,\infty)$; it then follows that the sequence $u_n$ converges along a subsequence uniformly on compact subsets. Given a positive number $R$, assume that $|z|<R$ and put $\eta=\frac1{2R}$. Define operators $T_n$ from $C[0,\eta]$, the set of continuous (vector-valued) functions on $[0,\eta]$, to itself by \begin{equation*} (T_nu)(x)=-z\int_0^xJH_n(t)u(t)dt. \end{equation*}
Then $||T_n||\leq 1/2$. Indeed, since $J$ is unitary, the trace-normed condition $\textrm{Tr }H=1$ and positive semidefiniteness imply that $\| JH_n \| \leq 1$ and \begin{equation} \label{estT}
||T_n||
= \textrm{ sup}_{||u||_{\infty}=1} \| T_nu(x) \| \leq R\textrm{ }|x| \leq 1/2 \end{equation} for $x\in[0,\eta]$. In other words, the $T_n$ are uniformly bounded in $n$. Since $||T_n||\leq 1/2$, the Neumann series converges, \begin{equation} \label{Neumann} (I-T_n)^{-1}=\sum_{k=0}^{\infty}T_n^k, \end{equation} and, since the $u_n$ are solutions to (\ref{cs}) with $H_n$, \begin{equation} \label{slnu_n} u_n(x)-u_n(0)=(T_nu_n)(x), \quad\textrm{or}\quad u_n(x)=(I-T_n)^{-1}u_n(0). \end{equation}
Then (\ref{estT}) and (\ref{Neumann}) say that $|| (I-T_n)^{-1} ||\leq 2$, which implies that the $u_n$ are uniformly bounded in $n$ on bounded subsets of $[0,\infty)$. By (\ref{slnu_n}), the $u_n$ are equicontinuous. By the Arzel\`a--Ascoli theorem, the sequence $u_n$ has a subsequence which converges uniformly on compact subsets of $[0,\infty)$, say, $u_n\to u$, as $n\to\infty$. For convenience, we keep the same notation $u_n$ for the subsequence.
Let us see that $u$ satisfies (\ref{cs}) with $H$. Similar to (\ref{slnu_n}), observe that \begin{equation*} u_n(x)-u_n(0)=-z\int_0^xJH_n(t)u_n(t)dt. \end{equation*} The left-hand side goes to $u(x)-u(0)$, as $n\to\infty$, by continuity. Splitting the integral on the right-hand side as \begin{eqnarray*} & & \int_0^xJH_n(t)u_n(t)dt \\ &=& \underbrace{ \int_0^xJH_n(t)\big(u_n(t)-u(t)\big)dt }_{=: I} +\underbrace{\int_0^xJ\big( H_n(t)-H(t)\big)u(t)dt}_{=: II}\\ & & +\int_0^xJH(t)u(t)dt \end{eqnarray*} we see that it suffices to show that the first two integrals $I$ and $II$ go to zero as $n\to\infty$. First, $I\to 0$ because $JH_n(t)dt$ are finite measures on $[0,x]$, and the sequence $u_n$ converges to $u$ uniformly on $[0,x]$. To see that $II\to 0$, rewrite it in terms of a test function: \begin{equation*} II=\int_0^{\infty}J\big( H_n(t)-H(t)\big)\chi_{[0,x]}(t)u(t)dt. \end{equation*} Since $H(t)dt$ is absolutely continuous with respect to the Lebesgue measure and, in particular, $H(t)dt$ does not have point masses, the weak-$*$ convergence applies to the characteristic functions $\chi_{I}$ of bounded intervals $I$. Therefore $II$ goes to zero, as $n\to \infty$. By the uniform convergence of $u_n$ on compact subsets of $[0,\infty)$ and the fact that any solution to (\ref{cs}) is absolutely continuous \cite{KL,Win}, this tells us that $u$ satisfies the equation \begin{equation*} u(x)-u(0)=-z\int_0^xJH(t)u(t)dt, \end{equation*} that is, $u$ is a solution to (\ref{cs}) with $H$.\\ \end{proof} We now prove Proposition \ref{CovofH}. \begin{proof}[Proof of Proposition \ref{CovofH}] Choose a sequence $ H_n $ which converges to $H$ weak-$*$, as $n\to\infty$. Assume first that $H$ is not the constant matrix $\big( \begin{smallmatrix} 1 & 0 \\ 0 & 0 \end{smallmatrix}\big)$ on $(0,\infty)$ and that $m_{H_n}$ does not converge to $\infty$, even along a subsequence. We discuss this case first because it illustrates the basic idea of the proof of Proposition \ref{CovofH}.
It will be shown later that $H_n$ converges weak-$*$ to the constant matrix $\big( \begin{smallmatrix} 1 & 0 \\ 0 & 0 \end{smallmatrix}\big)$ on $(0,\infty)$ if and only if $m_{H_n}$ converges to $\infty$. Hence it suffices here to assume only that $H$ is not the constant matrix $\big( \begin{smallmatrix} 1 & 0 \\ 0 & 0 \end{smallmatrix}\big)$.
It is well known that Weyl theory can be applied to a canonical system in a similar way to that for Schr\"odinger operators (see e.g. \cite{Achrt}). More precisely, it is possible to choose $f_n=( f_{n,1} ,f_{n,2})^t$ which are $H_n$-integrable solutions to (\ref{cs}) (i.e., $f_n$ satisfy $\int_0^{\infty}f_n^*H_nf_n<\infty$) such that, for $z\in{\mathbb C}^+$, \begin{equation} \label{L2slnform} f_n(x,z)=u_n(x,z)+m_{H_n}(z)v_n(x,z) \end{equation} and \begin{equation} \label{ImwithH} \frac{\textrm{Im }m_{H_n}(z)}{\textrm{ Im }z}=\int_0^{\infty}f^*_n(x,z)H_n(x)f_n(x,z)dx \end{equation} where $m_{H_n}$ are the $m$ functions of (\ref{cs}) with $H_n$, and $u_n$ and $v_n$ are the solutions for $H_n$ satisfying the initial conditions $u_n(0)=\big( \begin{smallmatrix} 1 \\ 0 \end{smallmatrix}\big)$ and $v_n(0)=\big( \begin{smallmatrix} 0 \\ 1 \end{smallmatrix}\big)$. Due to (\ref{L2slnform}) and the initial values of $u_n$ and $v_n$, $m_{H_n}$ are expressed by \begin{equation} \label{expform} m_{H_n}(z)=\frac{f_{n,2}(0,z)}{f_{n,1}(0,z)} \end{equation} which is (\ref{mfnforcs}).
The sequence $f_n$ then has a convergent subsequence. Indeed, the compactness of $\overline{\mathbb H}$ implies that $m_{H_n}$ has a convergent subsequence, say, $m_{H_n}(z)\to m(z)(\neq\infty)\in\overline{\mathbb{H}}$, uniformly on compact subsets of ${\mathbb C}^+$ (for convenience we use the same notation for a subsequence). Since $H_n$ converges to $H$ weak-$*$, Lemma \ref{convergenceforH} tells us that \begin{equation*} u_n(x,z)\to u(x,z), \textrm{ and } v_n(x,z)\to v(x,z) \end{equation*} in subsequences uniformly on compact subsets in $x$ and $z$, and $u$ and $v$ are the solutions to (\ref{cs}) with $H$ satisfying $u(0)=\big( \begin{smallmatrix} 1 \\ 0 \end{smallmatrix}\big)$ and $v(0)=\big( \begin{smallmatrix} 0 \\ 1 \end{smallmatrix}\big)$. By (\ref{L2slnform}), $f_n$ converges in a subsequence, say to $f$, which is given by \begin{equation} \label{expform2} f(x,z):=u(x,z)+m(z)v(x,z). \end{equation}
It is therefore sufficient to show that $f$ is $H$-integrable. Indeed, if $f$ is $H$-integrable, then the analogue of (\ref{expform}) for $f$ and $m_H$, together with (\ref{expform2}), indicates that \begin{equation*} m(z)=\frac{f_{2}(0,z)}{f_{1}(0,z)}=m_{H}(z), \end{equation*} which says that $m_H$ is the only possible limit. Here we also use the fact that a trace-normed canonical system is always in the limit point case at $\infty$. Therefore $m_{H_n}$ converges to $m_H$ uniformly on compact subsets of ${\mathbb C}^+$.
Let us verify the $H$-integrability of $f$. Since $m(z)\neq\infty$, by (\ref{ImwithH}), the sequence $\int_0^{\infty}f_n^*H_nf_n$ converges to some nonnegative number, as $n\to\infty$. In particular, the quantities $\int_0^{\infty}f_n^*H_nf_n$ are uniformly bounded in $n$, say, by $M$. Observe that, for all $n$, \begin{eqnarray*} M &\ge& \int_0^{\infty}f_n^*H_nf_n \\
&\ge& \int_0^Lf_n^*H_nf_n \quad \textrm{ for every positive number } L\\
&=& \int_0^L(f_n-f)^*H_nf_n+\int_0^Lf_n^*H_n(f_n-f)+\int_0^Lf^*H_nf. \end{eqnarray*} Since $f_n$ converges to $f$ locally uniformly and $H(x)dx$ is absolutely continuous with respect to the Lebesgue measure, the weak-$*$ convergence of $H_n$ says that the first and second integrals go to zero and the third converges to $\int_0^Lf^*Hf$, as $n\to\infty$. Letting $L\to\infty$, we conclude that $f$ is $H$-integrable, since these inequalities are uniform in both $n$ and $L$.\\
So far we have shown that, if $H_n$ converges to $H$ weak-$\ast$, then $m_{H_n}$ converges to $m_H$, provided that the limit matrix $H$ is not the constant matrix $\big(\begin{smallmatrix} 1 & 0 \\ 0 & 0 \end{smallmatrix}\big)$ and $m_{H_n}$ does not converge to $\infty$, even along a subsequence.
It is time to deal with the special case when the limit matrix $H$ is $\big( \begin{smallmatrix} 1 & 0 \\ 0 & 0 \end{smallmatrix}\big)$ on $(0,\infty)$, denoted by $H_{\infty}$, since its corresponding $m$ function is $\infty$. Let us show that $H_n$ converges to $H_{\infty}$ weak-$*$ if and only if $m_{H_n}$ converges to $\infty$. Before giving the proof, note that, because of this equivalence, it was enough in the previous case to assume that the limit $H$ was not $H_{\infty}$, as remarked above.
Observe that \begin{equation} \label{zeroofv} \int_0^L v^*Hv \textrm{ } dx = 0 \quad\textrm{if and only if}\quad H=H_{\infty} \textrm{ on } (0,L) \end{equation} where $v$ is the solution to (\ref{cs}) with $H$ satisfying $v(0)=\big( \begin{smallmatrix} 0 \\ 1 \end{smallmatrix}\big)$. Indeed, for any open interval $I$ in $(0,\infty)$ it is well known that either \begin{equation*} \int_I e^*H e\textrm{ } dx=0,\textrm{ } e\in{\mathbb C}^2 \textrm{ implies that } e=\big( \begin{smallmatrix} 0 \\ 0 \end{smallmatrix}\big) \end{equation*} (i.e., $I$ is \textit{of positive type} in \cite{HSW}), or $I$ is a singular interval of type $\theta$, in other words, $H$ is the constant matrix $P_{\theta}$ \begin{equation} \label{singularinterval} P_{\theta}:= \begin{pmatrix} \cos^2\theta& \cos\theta \sin \theta \\ \cos \theta \sin \theta & \sin^2\theta \end{pmatrix} \end{equation} almost everywhere on $I$ for some $\theta$ in $[0,\pi)$ (i.e., $I$ is \textit{$H$-invisible of type $\theta$} in \cite{HSW}). See Lemma 3.1 in \cite{HSW} for more details. Because of the continuity of $v$ and the initial condition $v(0)=\big( \begin{smallmatrix} 0 \\ 1 \end{smallmatrix}\big)$, if $\int_0^L v^*Hv \textrm{ } dx = 0$, then $\theta=0$ on $(0,L)$, which shows the sufficiency of (\ref{zeroofv}). Since $v_{\infty}(x)= \big( \begin{smallmatrix} 0 \\ 1 \end{smallmatrix}\big)$ on $(0,L)$, where $v_{\infty}$ is the solution to (\ref{cs}) with $H_{\infty}$ and the same initial condition as $v$, we have the necessity.
Let us verify that, if $H_n$ converges to $H_{\infty}$ weak-$*$, then $m_{H_n}$ converges to $\infty$. Suppose that $m_{H_n}$ does not converge to $\infty$. By compactness we can choose $m\in\overline{\mathbb{H}}\setminus \{ \infty \} $ such that $m_{H_n}$ converges to $m$ at least in a subsequence. (For convenience we keep the same notation for the subsequence.) By (\ref{L2slnform}) and (\ref{ImwithH}), we see that, for any finite number $L>0$, \begin{eqnarray*} \frac{\textrm{Im } m_{H_n}}{\textrm{ Im }z} &=&\int_0^{\infty}f_n^*H_nf_n\\ &\ge& \int_0^{L}f_n^*H_nf_n\\ &=& \int_0^{L} u_n^* H_n u_n+m_{H_n}\int_0^{L} u_n^* H_n v_n\\
&& +\bar{m}_{H_n}\int_0^{L} v_n^* H_n u_n+|m_{H_n}|^2\int_0^{L} v_n^* H_n v_n. \end{eqnarray*} By taking $n\to\infty$, Lemma \ref{convergenceforH} tells that, at least in a subsequence, \begin{eqnarray*} \frac{\textrm{Im } m}{\textrm{ Im }z} &\ge& \int_0^{L} u_{\infty}^* H_{\infty} u_{\infty}+m \int_0^{L} u_{\infty}^* H_{\infty} v_{\infty}\\
&& +\bar{m}\int_0^{L} v_{\infty}^* H_{\infty} u_{\infty}+|m|^2\int_0^{L} v_{\infty}^* H_{\infty} v_{\infty}\\ &=&\int_0^L 1 \textrm{ }dx \end{eqnarray*} for any finite number $L>0$, where $u_{\infty}$ and $v_{\infty}$ are the solutions for $H_{\infty}$ satisfying $u_{\infty}(0)=\big( \begin{smallmatrix} 1 \\ 0 \end{smallmatrix}\big)$ and $v_{\infty}(0)=\big( \begin{smallmatrix} 0 \\ 1 \end{smallmatrix}\big)$. Since the transfer matrix is $I_2-zxJP_0=\big(\begin{smallmatrix} 1&0\\-zx&1 \end{smallmatrix}\big)$ for $x>0$, direct computation says that $u_{\infty}(x)=\big( \begin{smallmatrix} 1 \\ -zx \end{smallmatrix}\big)$ and $v_{\infty}(x)=\big( \begin{smallmatrix} 0 \\ 1 \end{smallmatrix}\big)$, which implies the last equality. Since $L$ can be chosen freely, the last integral, $\int_0^L 1\textrm{ } dx=L$, can be made arbitrarily large, which contradicts the fact that the left-hand side, $\frac{\textrm{Im } m}{\textrm{ Im }z}$, is a finite number independent of $L$. Therefore, if $H_n$ converges to $H_{\infty}$ weak-$*$, then $m_{H_n}$ converges to $\infty$.
It remains to show that, if $m_{H_n}$ converges to $\infty$, then $H_n$ converges to $H_{\infty}$ weak-$*$. Since $m_{H_n}\to\infty$, (\ref{L2slnform}) cannot be used directly; instead, we consider the negative reciprocals $\tilde{m}_{H_n}(z)=-\frac1{m_{H_n}(z)}$ and put $\tilde{f}_n=-\tilde{m}_{H_n}u_n+v_n$. Indeed, similar to (\ref{ImwithH}), we have the equality \begin{equation} \label{ImwithH2} \frac{\textrm{Im }\tilde{m}_{H_n}(z)}{\textrm{ Im }z}=\int_0^{\infty}\tilde{f}^*_n(x,z)H_n(x)\tilde{f}_n(x,z)dx. \end{equation} For any finite number $L>0$, observe that \begin{eqnarray*} \frac{\textrm{Im }\tilde{m}_{H_n}}{\textrm{ Im }z} &=& \int_0^{\infty}\tilde{f}_n^* H_n\tilde{f}_n\\ &\ge& \int_0^{L}\tilde{f}_n^* H_n\tilde{f}_n\\
&=& |\tilde{m}_{H_n}|^2\int_0^{L} u_n^* H_n u_n-\overline{\tilde{m}}_{H_n}\int_0^{L} u_n^* H_n v_n\\ && -\tilde{m}_{H_n}\int_0^{L} v_n^* H_n u_n+\int_0^{L} v_n^* H_n v_n. \end{eqnarray*} Since $\tilde{m}_{H_n} \to 0$, the left-hand side of (\ref{ImwithH2}) converges to $0$. The compactness of ${\mathbb V}_+$ and Lemma \ref{convergenceforH} indicate that, for $L>0$, \begin{equation*} \int_0^{L}\tilde{f}^*_n H_n\tilde{f}_n \rightarrow \int_0^L v^*Hv \end{equation*} as $n\to \infty$, at least in a subsequence. Hence, whenever $H_n$ converges to $H$ weak-$*$ in a subsequence, we have shown that \begin{equation*} \int_0^L v^*Hv=0 \end{equation*} for any $L>0$, which, with (\ref{zeroofv}), implies that $H=H_{\infty}$ on $(0,\infty)$. This completes the proof, since $H_{\infty}$ is the only possible limit.\\ \end{proof}
\textit{Acknowledgement.} I am grateful to Christian Remling for useful discussions of my work and for all the help and support he has given me. It is also a pleasure to thank Matt McBride for important suggestions and friendly encouragement. This research was partly supported by the National Research Foundation of Korea (NRF) grant funded by the Korea government (MOE) (No. 2014R1A1A2058848).\\
\end{document}
# Understanding array indexing and its applications
In Python, arrays are zero-indexed, which means that the first element of an array has an index of 0. To access an element of an array, we use square brackets [] and specify the index of the element we want to access. For example, to access the first element of an array called `my_array`, we would write `my_array[0]`.
Consider the following array:
```python
my_array = [1, 2, 3, 4, 5]
```
To access the second element of `my_array`, we would write `my_array[1]`. This would return the value `2`.
Array indexing can also be used to modify elements of an array. We can assign a new value to an element by using the assignment operator (=). For example, to change the third element of `my_array` to `10`, we would write `my_array[2] = 10`.
Continuing with the previous example, if we write `my_array[2] = 10`, the array `my_array` would now be `[1, 2, 10, 4, 5]`.
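Two more indexing features that come up constantly are negative indices and slices. Here is a small illustration (the array below is `my_array` after the assignment above):

```python
my_array = [1, 2, 10, 4, 5]

# Negative indices count from the end: -1 is the last element.
last = my_array[-1]     # 5

# Slices extract a sub-list: the start index is inclusive,
# the stop index is exclusive.
middle = my_array[1:4]  # [2, 10, 4]

print(last, middle)
```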
## Exercise
Create an array called `grades` with the following values: `90, 85, 95, 80, 75`. Then, change the fourth element of `grades` to `90`.
### Solution
```python
grades = [90, 85, 95, 80, 75]
grades[3] = 90
```
# Eigenvalues and their significance in linear algebra
In linear algebra, an eigenvalue of a square matrix is a scalar value such that multiplying the matrix by a corresponding eigenvector gives the same result as scaling that eigenvector by the eigenvalue. Mathematically, if A is a square matrix, λ is an eigenvalue of A, and v is the corresponding eigenvector, then the following equation holds:
$$Av = \lambda v$$
Consider the following matrix:
$$A = \begin{bmatrix} 2 & 1 \\ 1 & 3 \end{bmatrix}$$
To find the eigenvalues of A, we need to solve the equation:
$$Av = \lambda v$$
Let's assume that v is a column vector [x, y]. Substituting this into the equation, we get:
$$\begin{bmatrix} 2 & 1 \\ 1 & 3 \end{bmatrix} \begin{bmatrix} x \\ y \end{bmatrix} = \lambda \begin{bmatrix} x \\ y \end{bmatrix}$$
Simplifying the equation, we get:
$$\begin{align*} 2x + y &= \lambda x \\ x + 3y &= \lambda y \end{align*}$$
This can be written as a system of linear equations:
$$\begin{align*} (2 - \lambda)x + y &= 0 \\ x + (3 - \lambda)y &= 0 \end{align*}$$
To find the eigenvalues, we need to find the values of λ that make the determinant of the coefficient matrix equal to 0. In this case, the determinant is:
$$\begin{vmatrix} 2 - \lambda & 1 \\ 1 & 3 - \lambda \end{vmatrix} = (2 - \lambda)(3 - \lambda) - 1 = \lambda^2 - 5\lambda + 5$$
Setting the determinant equal to 0 and solving for λ, we find that the eigenvalues of A are:
$$\lambda_1 = \frac{5 + \sqrt{5}}{2} \quad \text{and} \quad \lambda_2 = \frac{5 - \sqrt{5}}{2}$$
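Hand computations like this can be checked numerically. Here is a quick sketch using the `numpy` library (assuming it is installed):

```python
import numpy as np

A = np.array([[2, 1], [1, 3]])

# Numerically compute the eigenvalues of A.
eigenvalues = np.sort(np.linalg.eigvals(A))

# Compare with the exact values (5 ± sqrt(5)) / 2 found above.
exact = np.sort([(5 - np.sqrt(5)) / 2, (5 + np.sqrt(5)) / 2])
print(np.allclose(eigenvalues, exact))  # True
```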
## Exercise
Find the eigenvalues of the matrix A:
$$A = \begin{bmatrix} 4 & 1 \\ 2 & 3 \end{bmatrix}$$
### Solution
To find the eigenvalues, we need to find the values of λ that make the determinant of the coefficient matrix equal to 0. In this case, the determinant is:
$$\begin{vmatrix} 4 - \lambda & 1 \\ 2 & 3 - \lambda \end{vmatrix} = (4 - \lambda)(3 - \lambda) - 2 = \lambda^2 - 7\lambda + 10$$
Setting the determinant equal to 0 and solving for λ, we find that the eigenvalues of A are:
$$\lambda_1 = 5 \quad \text{and} \quad \lambda_2 = 2$$
# Solving systems of equations using matrices and arrays
A system of linear equations can be represented using matrices and arrays. Let's consider the following system of equations:
$$\begin{align*} 2x + y &= 5 \\ x - y &= 1 \end{align*}$$
We can represent this system using matrices and arrays as follows:
$$\begin{bmatrix} 2 & 1 \\ 1 & -1 \end{bmatrix} \begin{bmatrix} x \\ y \end{bmatrix} = \begin{bmatrix} 5 \\ 1 \end{bmatrix}$$
In this representation, the matrix on the left-hand side represents the coefficients of the variables, the vector on the right-hand side represents the constants, and the vector in the middle represents the unknown variables.
To solve the system of equations:
$$\begin{align*} 2x + y &= 5 \\ x - y &= 1 \end{align*}$$
We can write the system in matrix form:
$$\begin{bmatrix} 2 & 1 \\ 1 & -1 \end{bmatrix} \begin{bmatrix} x \\ y \end{bmatrix} = \begin{bmatrix} 5 \\ 1 \end{bmatrix}$$
To solve for x and y, we can use matrix inversion. The solution is given by:
$$\begin{bmatrix} x \\ y \end{bmatrix} = \begin{bmatrix} 2 & 1 \\ 1 & -1 \end{bmatrix}^{-1} \begin{bmatrix} 5 \\ 1 \end{bmatrix}$$
Calculating the inverse of the coefficient matrix and multiplying it by the constant vector, we find that the solution is:
$$\begin{bmatrix} x \\ y \end{bmatrix} = \begin{bmatrix} 2 \\ 1 \end{bmatrix}$$
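In practice we rarely form the inverse explicitly; `numpy.linalg.solve` is the standard (and numerically more stable) way to solve $Ax = b$. A minimal check of the system above, assuming `numpy` is available:

```python
import numpy as np

A = np.array([[2, 1], [1, -1]])
b = np.array([5, 1])

# Solve A x = b directly, without computing the inverse.
x = np.linalg.solve(A, b)
print(x)  # [2. 1.]
```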
## Exercise
Solve the following system of equations using matrices and arrays:
$$\begin{align*} 3x + 2y &= 8 \\ 2x - y &= 1 \end{align*}$$
### Solution
To solve the system of equations, we can write it in matrix form:
$$\begin{bmatrix} 3 & 2 \\ 2 & -1 \end{bmatrix} \begin{bmatrix} x \\ y \end{bmatrix} = \begin{bmatrix} 8 \\ 1 \end{bmatrix}$$
To solve for x and y, we can use matrix inversion. The solution is given by:
$$\begin{bmatrix} x \\ y \end{bmatrix} = \begin{bmatrix} 3 & 2 \\ 2 & -1 \end{bmatrix}^{-1} \begin{bmatrix} 8 \\ 1 \end{bmatrix}$$
Calculating the inverse of the coefficient matrix and multiplying it by the constant vector, we find that the solution is:
$$\begin{bmatrix} x \\ y \end{bmatrix} = \begin{bmatrix} \frac{10}{7} \\ \frac{13}{7} \end{bmatrix}$$
# Basic matrix operations: addition, subtraction, multiplication
Matrix addition and subtraction are straightforward operations. To add or subtract two matrices, we simply add or subtract the corresponding elements. The matrices must have the same dimensions.
Consider the following matrices:
$$A = \begin{bmatrix} 1 & 2 \\ 3 & 4 \end{bmatrix} \quad \text{and} \quad B = \begin{bmatrix} 5 & 6 \\ 7 & 8 \end{bmatrix}$$
To add A and B, we add the corresponding elements:
$$A + B = \begin{bmatrix} 1 + 5 & 2 + 6 \\ 3 + 7 & 4 + 8 \end{bmatrix} = \begin{bmatrix} 6 & 8 \\ 10 & 12 \end{bmatrix}$$
To subtract B from A, we subtract the corresponding elements:
$$A - B = \begin{bmatrix} 1 - 5 & 2 - 6 \\ 3 - 7 & 4 - 8 \end{bmatrix} = \begin{bmatrix} -4 & -4 \\ -4 & -4 \end{bmatrix}$$
Matrix multiplication is a more complex operation. Each entry of the product is obtained by taking the dot product of a row of the first matrix with a column of the second: we multiply the paired elements and sum the products. The number of columns in the first matrix must be equal to the number of rows in the second matrix.
Consider the following matrices:
$$A = \begin{bmatrix} 1 & 2 \\ 3 & 4 \end{bmatrix} \quad \text{and} \quad B = \begin{bmatrix} 5 & 6 \\ 7 & 8 \end{bmatrix}$$
To multiply A and B, we take the dot product of each row of A with each column of B:
$$AB = \begin{bmatrix} 1 \cdot 5 + 2 \cdot 7 & 1 \cdot 6 + 2 \cdot 8 \\ 3 \cdot 5 + 4 \cdot 7 & 3 \cdot 6 + 4 \cdot 8 \end{bmatrix} = \begin{bmatrix} 19 & 22 \\ 43 & 50 \end{bmatrix}$$
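These three operations map directly onto NumPy (an assumption here: the `numpy` package is installed). Note that `*` would multiply elementwise; matrix multiplication uses the `@` operator:

```python
import numpy as np

A = np.array([[1, 2], [3, 4]])
B = np.array([[5, 6], [7, 8]])

print(A + B)  # [[ 6  8] [10 12]]
print(A - B)  # [[-4 -4] [-4 -4]]
print(A @ B)  # [[19 22] [43 50]]
```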
## Exercise
Perform the following matrix operations:
1. Add the matrices A and B:
$$A = \begin{bmatrix} 2 & 3 \\ 4 & 5 \end{bmatrix} \quad \text{and} \quad B = \begin{bmatrix} 1 & 2 \\ 3 & 4 \end{bmatrix}$$
2. Subtract the matrix B from A.
3. Multiply the matrices A and B:
$$A = \begin{bmatrix} 2 & 3 \\ 4 & 5 \end{bmatrix} \quad \text{and} \quad B = \begin{bmatrix} 1 & 2 \\ 3 & 4 \end{bmatrix}$$
### Solution
1. To add the matrices A and B, we add the corresponding elements:
$$A + B = \begin{bmatrix} 2 + 1 & 3 + 2 \\ 4 + 3 & 5 + 4 \end{bmatrix} = \begin{bmatrix} 3 & 5 \\ 7 & 9 \end{bmatrix}$$
2. To subtract the matrix B from A, we subtract the corresponding elements:
$$A - B = \begin{bmatrix} 2 - 1 & 3 - 2 \\ 4 - 3 & 5 - 4 \end{bmatrix} = \begin{bmatrix} 1 & 1 \\ 1 & 1 \end{bmatrix}$$
3. To multiply the matrices A and B, we take the dot product of each row of A with each column of B:
$$AB = \begin{bmatrix} 2 \cdot 1 + 3 \cdot 3 & 2 \cdot 2 + 3 \cdot 4 \\ 4 \cdot 1 + 5 \cdot 3 & 4 \cdot 2 + 5 \cdot 4 \end{bmatrix} = \begin{bmatrix} 11 & 16 \\ 19 & 26 \end{bmatrix}$$
# Linear algebra and its role in numerical computing
Linear algebra provides a powerful framework for solving systems of linear equations, finding eigenvalues and eigenvectors, performing matrix operations, and analyzing the properties of linear transformations. These techniques are widely used in various fields, including physics, engineering, computer science, and data analysis.
One application of linear algebra is in solving systems of linear equations. Linear equations are equations in which the variables appear only with a power of 1 and are not multiplied or divided by each other. For example, the equation 2x + 3y = 7 is a linear equation.
To solve a system of linear equations, we can represent the equations using matrices and arrays. We can then use matrix inversion or other techniques to find the solution.
Linear algebra also allows us to analyze the behavior of linear transformations. A linear transformation is a function that maps vectors to vectors and preserves the properties of linearity, such as scaling and addition. Linear transformations can be represented using matrices, and their properties can be studied using linear algebra techniques.
## Exercise
Think of a real-world problem that can be formulated and solved using linear algebra techniques. Describe the problem and explain how linear algebra can be used to solve it.
### Solution
One example of a real-world problem that can be solved using linear algebra is image compression. Image compression is the process of reducing the size of an image file without significantly degrading its quality. This is important for applications such as image storage, transmission, and display.
Linear algebra can be used to compress images by representing them as matrices and performing matrix operations. Techniques such as singular value decomposition (SVD) can be used to decompose an image matrix into a set of basis vectors, which capture the essential features of the image. By retaining only a subset of these basis vectors, we can achieve compression.
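The idea can be sketched in a few lines of NumPy. This is only an illustration, not a real compressor: the random 8×8 matrix stands in for a grayscale image, and `k` is the number of singular values we keep:

```python
import numpy as np

rng = np.random.default_rng(0)
image = rng.random((8, 8))  # stand-in for a small grayscale image

# Full SVD: image = U @ diag(s) @ Vt, with s sorted in decreasing order.
U, s, Vt = np.linalg.svd(image, full_matrices=False)

# Keep only the k largest singular values (the "essential features").
k = 3
approx = U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]

# The rank-k approximation stores fewer numbers, yet stays close to
# the original; the relative error shrinks as k grows.
error = np.linalg.norm(image - approx) / np.linalg.norm(image)
print(approx.shape, error < 1.0)
```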
# Advanced matrix operations: transpose, inverse, determinant
The transpose of a matrix is obtained by interchanging its rows and columns. It is denoted by adding a superscript T to the matrix. The transpose of a matrix A is denoted as A^T.
Consider the following matrix:
$$A = \begin{bmatrix} 1 & 2 & 3 \\ 4 & 5 & 6 \end{bmatrix}$$
The transpose of A is:
$$A^T = \begin{bmatrix} 1 & 4 \\ 2 & 5 \\ 3 & 6 \end{bmatrix}$$
The inverse of a square matrix A is a matrix that, when multiplied by A, gives the identity matrix. The inverse of A is denoted as A^{-1}. Not all matrices have an inverse. A matrix is invertible if and only if its determinant is non-zero.
Consider the following matrix:
$$A = \begin{bmatrix} 2 & 1 \\ 1 & 3 \end{bmatrix}$$
To find the inverse of A, we can use the formula:
$$A^{-1} = \frac{1}{\text{det}(A)} \begin{bmatrix} d & -b \\ -c & a \end{bmatrix}$$
where $A = \begin{bmatrix} a & b \\ c & d \end{bmatrix}$ and det(A) is the determinant of A.
In this case, the determinant of A is:
$$\text{det}(A) = 2 \cdot 3 - 1 \cdot 1 = 5$$
Therefore, the inverse of A is:
$$A^{-1} = \frac{1}{5} \begin{bmatrix} 3 & -1 \\ -1 & 2 \end{bmatrix}$$
The determinant of a square matrix is a scalar value that is computed from the elements of the matrix. The determinant is denoted as det(A) or |A|. It has several important properties; for example, it is zero if and only if the matrix is singular (i.e., not invertible).
Consider the following matrix:
$$A = \begin{bmatrix} 2 & 1 \\ 1 & 3 \end{bmatrix}$$
The determinant of A is:
$$\text{det}(A) = 2 \cdot 3 - 1 \cdot 1 = 5$$
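All three operations have direct NumPy counterparts (assuming `numpy` is installed). Here we reuse the matrices from the examples above:

```python
import numpy as np

M = np.array([[1, 2, 3], [4, 5, 6]])
print(M.T)  # the 3x2 transpose from the transpose example

A = np.array([[2, 1], [1, 3]])
print(np.linalg.det(A))  # approximately 5 (floating point)
print(np.linalg.inv(A))  # 1/5 * [[3, -1], [-1, 2]]

# Sanity check: A times its inverse is the identity.
print(np.allclose(A @ np.linalg.inv(A), np.eye(2)))  # True
```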
## Exercise
Find the transpose, inverse, and determinant of the matrix A:
$$A = \begin{bmatrix} 4 & -1 \\ 2 & 3 \end{bmatrix}$$
### Solution
The transpose of A is:
$$A^T = \begin{bmatrix} 4 & 2 \\ -1 & 3 \end{bmatrix}$$
To find the inverse of A, we can use the formula:
$$A^{-1} = \frac{1}{\text{det}(A)} \begin{bmatrix} d & -b \\ -c & a \end{bmatrix}$$
where $A = \begin{bmatrix} a & b \\ c & d \end{bmatrix}$ and det(A) is the determinant of A.
In this case, the determinant of A is:
$$\text{det}(A) = 4 \cdot 3 - (-1) \cdot 2 = 14$$
Therefore, the inverse of A is:
$$A^{-1} = \frac{1}{14} \begin{bmatrix} 3 & 1 \\ -2 & 4 \end{bmatrix}$$
The determinant of A is:
$$\text{det}(A) = 4 \cdot 3 - (-1) \cdot 2 = 14$$
# Applications of matrix operations in real-world problems
One application of matrix operations is in solving systems of linear equations. Systems of linear equations arise in many real-world problems, such as circuit analysis, optimization, and data fitting. By representing the equations using matrices and arrays, we can use matrix operations to find the solution.
Consider the following system of linear equations:
$$\begin{align*} 2x + 3y &= 7 \\ 4x - 2y &= 2 \end{align*}$$
We can represent this system using matrices and arrays as follows:
$$\begin{bmatrix} 2 & 3 \\ 4 & -2 \end{bmatrix} \begin{bmatrix} x \\ y \end{bmatrix} = \begin{bmatrix} 7 \\ 2 \end{bmatrix}$$
To solve for x and y, we can use matrix inversion. The solution is given by:
$$\begin{bmatrix} x \\ y \end{bmatrix} = \begin{bmatrix} 2 & 3 \\ 4 & -2 \end{bmatrix}^{-1} \begin{bmatrix} 7 \\ 2 \end{bmatrix}$$
Another application of matrix operations is in data analysis. Matrices can be used to represent data sets, and matrix operations can be used to analyze and manipulate the data. Techniques such as matrix factorization, singular value decomposition (SVD), and principal component analysis (PCA) are widely used in data analysis.
## Exercise
Think of a real-world problem that can be solved using matrix operations. Describe the problem and explain how matrix operations can be used to solve it.
### Solution
One example of a real-world problem that can be solved using matrix operations is image processing. Image processing is the analysis and manipulation of digital images. It has applications in various fields, such as medicine, surveillance, and entertainment.
Matrix operations can be used to perform various image processing tasks, such as image enhancement, image restoration, and image compression. For example, matrix operations can be used to apply filters to an image, remove noise, and reduce the size of an image file without significantly degrading its quality.
# Solving linear equations using matrix methods
To solve a system of linear equations, we can represent the equations using matrices and arrays. The system can be written in the form:
$$Ax = b$$
where A is a matrix of coefficients, x is a vector of variables, and b is a vector of constants.
To find the solution, we can use matrix inversion. If the matrix A is invertible, we can multiply both sides of the equation by the inverse of A to isolate x:
$$x = A^{-1}b$$
The solution x will be a vector that satisfies the system of equations.
Let's consider the following system of linear equations:
$$\begin{align*} 2x + 3y &= 7 \\ 4x - 2y &= 2 \end{align*}$$
We can represent this system using matrices and arrays as follows:
$$\begin{bmatrix} 2 & 3 \\ 4 & -2 \end{bmatrix} \begin{bmatrix} x \\ y \end{bmatrix} = \begin{bmatrix} 7 \\ 2 \end{bmatrix}$$
To find the solution, we need to calculate the inverse of the coefficient matrix A:
$$A^{-1} = \begin{bmatrix} 2 & 3 \\ 4 & -2 \end{bmatrix}^{-1}$$
Then, we can multiply the inverse by the constant vector b to find the solution x:
$$x = \begin{bmatrix} 2 & 3 \\ 4 & -2 \end{bmatrix}^{-1} \begin{bmatrix} 7 \\ 2 \end{bmatrix}$$
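The formula $x = A^{-1}b$ can be mirrored directly in NumPy (an assumption: `numpy` is available). For this system the computation gives $x = 5/4$, $y = 3/2$:

```python
import numpy as np

A = np.array([[2, 3], [4, -2]])
b = np.array([7, 2])

# Mirror the formula x = A^{-1} b from the text.
x = np.linalg.inv(A) @ b
print(x)  # x = 1.25, y = 1.5
```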
## Exercise
Solve the following system of linear equations using matrix methods:
$$\begin{align*} 3x + 2y &= 8 \\ 5x - 4y &= 1 \end{align*}$$
Write the solution as a vector.
### Solution
The solution to the system of linear equations is:
$$\begin{bmatrix} x \\ y \end{bmatrix} = \begin{bmatrix} \frac{17}{11} \\ \frac{37}{22} \end{bmatrix}$$
# Eigenvalue decomposition and its applications
Eigenvalue decomposition is a powerful technique in linear algebra that allows us to decompose a matrix into its eigenvalues and eigenvectors. This decomposition is useful in many applications, such as solving systems of linear equations, analyzing dynamical systems, and performing dimensionality reduction.
To understand eigenvalue decomposition, we first need to understand what eigenvalues and eigenvectors are.
An eigenvector of a matrix A is a non-zero vector x that satisfies the equation:
$$Ax = \lambda x$$
where $\lambda$ is a scalar known as the eigenvalue. In other words, when we multiply the matrix A by its eigenvector x, the result is a scaled version of the eigenvector.
To find the eigenvalues and eigenvectors of a matrix, we can solve the characteristic equation:
$$|A - \lambda I| = 0$$
where A is the matrix, $\lambda$ is the eigenvalue, and I is the identity matrix. The solutions to this equation are the eigenvalues of the matrix. Once we have the eigenvalues, we can find the corresponding eigenvectors by solving the equation:
$$(A - \lambda I)x = 0$$
Let's consider the following matrix:
$$A = \begin{bmatrix} 3 & 1 \\ 1 & 3 \end{bmatrix}$$
To find the eigenvalues, we solve the characteristic equation:
$$\begin{vmatrix} 3 - \lambda & 1 \\ 1 & 3 - \lambda \end{vmatrix} = 0$$
Expanding the determinant, we get:
$$(3 - \lambda)(3 - \lambda) - 1 = 0$$
Simplifying, we find:
$$\lambda^2 - 6\lambda + 8 = 0$$
Solving this quadratic equation, we find two eigenvalues: $\lambda_1 = 4$ and $\lambda_2 = 2$.
To find the eigenvectors, we solve the equation:
$$(A - \lambda I)x = 0$$
For $\lambda_1 = 4$, we have:
$$\begin{bmatrix} -1 & 1 \\ 1 & -1 \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \end{bmatrix} = 0$$
Solving this system of equations, we find the eigenvector corresponding to $\lambda_1 = 4$ as:
$$x_1 = 1, x_2 = 1$$
Similarly, for $\lambda_2 = 2$, we have:
$$\begin{bmatrix} 1 & 1 \\ 1 & 1 \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \end{bmatrix} = 0$$
Solving this system of equations, we find the eigenvector corresponding to $\lambda_2 = 2$ as:
$$x_1 = -1, x_2 = 1$$
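As a cross-check, numpy can compute eigenvalues and eigenvectors directly. Here is a minimal sketch using a small symmetric matrix with rows (3, 1) and (1, 3):

```python
import numpy as np

# eigh is specialized for symmetric matrices and returns the
# eigenvalues in ascending order, with orthonormal eigenvectors
A = np.array([[3.0, 1.0], [1.0, 3.0]])
eigenvalues, eigenvectors = np.linalg.eigh(A)
print(eigenvalues)  # ascending order: 2.0 and 4.0

# Each column of `eigenvectors` is a normalized eigenvector;
# verify the defining relation A x = lambda x for each pair
for lam, x in zip(eigenvalues, eigenvectors.T):
    assert np.allclose(A @ x, lam * x)
```

Note that a numerical routine returns *normalized* eigenvectors (and may flip their signs), so they can differ from hand-computed ones by a scalar factor.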
## Exercise
Find the eigenvalues and eigenvectors of the following matrix:
$$B = \begin{bmatrix} 2 & 0 \\ 0 & 3 \end{bmatrix}$$
Write the eigenvalues as a vector and the eigenvectors as a matrix, where each column represents an eigenvector.
### Solution
The eigenvalues of matrix B are:
$$\begin{bmatrix} 2 \\ 3 \end{bmatrix}$$
The eigenvectors of matrix B are:
$$\begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix}$$
# Using arrays to store data and perform calculations
To create an array in Python, we can use the `numpy` library. Numpy provides a powerful array object called `ndarray` that allows us to perform mathematical operations on arrays efficiently.
To create a one-dimensional array, we can use the `array` function from numpy:
```python
import numpy as np
a = np.array([1, 2, 3, 4, 5])
```
This creates an array `a` with the elements `[1, 2, 3, 4, 5]`. We can access individual elements of the array using indexing:
```python
print(a[0]) # prints 1
print(a[2]) # prints 3
```
We can also perform operations on arrays, such as addition, subtraction, multiplication, and division:
```python
b = np.array([6, 7, 8, 9, 10])
c = a + b
print(c) # prints [7, 9, 11, 13, 15]
d = a * b
print(d) # prints [6, 14, 24, 36, 50]
```
Let's say we have two arrays `x` and `y` that represent the x and y coordinates of points in a 2D plane. We want to calculate the Euclidean distance between each pair of points.
```python
x = np.array([1, 2, 3])
y = np.array([4, 5, 6])
# Calculate the Euclidean distance between each pair of points
distances = np.sqrt((x - x[:, np.newaxis])**2 + (y - y[:, np.newaxis])**2)
print(distances)
```
This will output:
```
[[0. 1.41421356 2.82842712]
[1.41421356 0. 1.41421356]
[2.82842712 1.41421356 0. ]]
```
The `distances` array represents the distances between each pair of points. For example, `distances[0, 1]` is the distance between the first and second points, which is `1.41421356`.
## Exercise
Create a 2D array `a` with the following elements:
```
1 2 3
4 5 6
7 8 9
```
Access the element in the second row and third column.
### Solution
```python
a = np.array([[1, 2, 3], [4, 5, 6], [7, 8, 9]])
element = a[1, 2]
print(element) # prints 6
```
# Optimization and efficiency in numerical computing
One common approach to optimization is algorithmic optimization. This involves finding more efficient algorithms or data structures to solve a problem. For example, if we need to perform a matrix multiplication, we can use the Strassen algorithm instead of the standard algorithm to reduce the number of multiplications required. Similarly, we can use sparse matrices to represent large matrices with many zero elements, which can significantly reduce memory usage and computation time.
Another approach to optimization is code optimization. This involves writing code in a way that takes advantage of hardware features and optimizations. For example, we can use vectorization to perform operations on arrays using SIMD (Single Instruction, Multiple Data) instructions, which can greatly improve performance. We can also use parallel computing techniques, such as multi-threading or GPU computing, to distribute computations across multiple processors or cores.
In addition to algorithmic and code optimization, there are other techniques that can improve efficiency in numerical computing. These include:
- Memory management: Efficiently managing memory usage can reduce the overhead of memory allocation and deallocation, improving performance.
- Caching: Utilizing cache memory effectively can minimize data transfer between the CPU and memory, speeding up computations.
- Loop optimization: Optimizing loops by reducing unnecessary computations or loop unrolling can improve performance.
- Profiling and benchmarking: Profiling tools can help identify performance bottlenecks and optimize critical sections of code. Benchmarking can be used to compare the performance of different implementations or algorithms.
Let's consider an example to illustrate the importance of optimization in numerical computing. Suppose we have a large dataset consisting of millions of records, and we need to calculate the sum of a specific column. We can use a simple loop to iterate over each record and accumulate the sum. However, this approach can be slow and inefficient.
```python
import numpy as np
# Generate a large dataset
data = np.random.rand(1000000, 10)
# Calculate the sum of the second column using a loop
sum_column = 0
for row in data:
    sum_column += row[1]
print(sum_column)
```
This code will work correctly, but it may take a long time to execute due to the large size of the dataset. We can optimize this code by using numpy's vectorized operations to perform the sum directly on the column:
```python
sum_column = np.sum(data[:, 1])
print(sum_column)
```
This code will produce the same result, but it will be much faster and more efficient, especially for large datasets.
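To make the comparison concrete, here is a small, self-contained benchmark sketch (the array size and the `timeit` repetition count are illustrative choices, not fixed recommendations):

```python
import timeit
import numpy as np

data = np.random.rand(100000, 10)

def loop_sum():
    # Accumulate the second column one row at a time
    total = 0.0
    for row in data:
        total += row[1]
    return total

def vector_sum():
    # Let numpy sum the whole column in one vectorized call
    return np.sum(data[:, 1])

# The two methods agree up to floating-point rounding
print(abs(loop_sum() - vector_sum()) < 1e-4)  # True

# Timing both typically shows the vectorized version is far faster
t_loop = timeit.timeit(loop_sum, number=3)
t_vec = timeit.timeit(vector_sum, number=3)
print(t_vec < t_loop)
```

The exact speedup depends on hardware and array size, but for large arrays the vectorized call usually wins by orders of magnitude.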
## Exercise
Consider the following code that calculates the dot product of two vectors using a loop:
```python
import numpy as np
a = np.array([1, 2, 3])
b = np.array([4, 5, 6])
dot_product = 0
for i in range(len(a)):
    dot_product += a[i] * b[i]
print(dot_product)
```
Rewrite the code using numpy's vectorized operations to calculate the dot product.
### Solution
```python
dot_product = np.dot(a, b)
print(dot_product)
```
Judge:Writing
From PEGWiki
Revision as of 06:15, 1 June 2012 by Brian (Talk | contribs) (→LaTeX)
This page documents current best practices for writing problems and analyses on the PEG Judge.
Problem statements
As a rule, problem statements, written in HTML, should adhere to the PEG Judge Standard Format, in order to give them a consistent look and feel that can also be consistently modified by CSS should the PEG Judge later introduce multiple skins for its interface. This is specified in the sections to follow.
General guidelines
The document should be valid XHTML 1.0 Transitional. This means, among other things:
No non-ASCII characters should appear directly in the source; they must appear as HTML entities. Watch out for fancy quotation marks, in particular; these are unwelcome. Use the standard apostrophes (') and ASCII double quotes (") instead.
Tags should not overlap.
All tags should be closed. <p> tags, in particular, must be closed. It is true that browsers can render pages correctly even when they are not closed, but it's bad practice not to close them.
All images should have alt-text, even if it is blank.
Use scientific notation rather than long strings of zeroes; for example, 2×10⁹ instead of 2000000000. Do not use commas (as in 2,000,000,000) or metric spacing (as in 2 000 000 000).
Do not link to external images. Link to images that are stored on the PEG Judge server in the images directory. (Don't worry about the absolute path; this is taken care of using some server-side magic.)
Use LaTeX, if you think it will be helpful. See here.
The sections listed below should occur in the order listed.
Heading
The title of the problem should be a second-level header (<h2>) found at the top of the document.
Unless the PEG Judge is the place where this particular problem originally appeared, the source of the problem should be a third-level header (<h3>) that immediately precedes the title (such as "2003 Canadian Computing Competition, Stage 1").
The problem statement body should immediately follow the title, with no heading.
Lines should be kept to a maximum of 80 characters when it is reasonably practical, as is standard practice in coding.
Variables should be enclosed in <var> tags, rather than the less semantically meaningful <i> tag, or the semantically misleading <em> tag.
No bounds should appear in the body.
<p> tags should always be used for paragraphing, instead of manually inserted <br/>. However, this does not mean that using line breaks is disallowed.
Input
Immediately following the problem statement body should be the input section, headed by <h3>Input</h3>. This section should describe the input format clearly and unambiguously. It should refer to variables given in input by the same names that are used in the problem description body, and it should mention them in the order in which they appear in the input file. It should state the type and bounds of each variable as it is given, in a format that conforms closely to this example:
An integer <var>T</var> (1 ≤ <var>T</var> ≤ 100), indicating the number of lines to follow.
The features of this example that should be imitated in problem statements are as follows:
The variable is enclosed in <var> tags.
The less-than-or-equal-to operator character appears as its HTML entity (not as the character itself) in the source.
The qualifier "integer" appears, to let the reader know that T is not a floating-point number or a string.
The bounds are given in parentheses, immediately after the mention of the variable itself. The non-breaking space character is used to prevent the expression in the parentheses from being broken across lines.
If any input data are strings, the input section should specify exactly which characters are allowed to appear in said strings.
Output
Immediately following the input section should be the output section, headed by <h3>Output</h3>. This section should describe the output format clearly and unambiguously. In particular:
If there can be multiple possible solutions to input test cases, it should either specify that any solution should be accepted, or that all solutions should be produced. In the latter case it should specify whether they are to be output in a specific order, or whether any order will be accepted.
If there can, in general, be multiple possible solutions to possible test inputs, but the actual test inputs are chosen in such a way that there will be a unique solution to each one, then this fact should be indicated.
If any output is a floating-point number, the output section should specify the acceptable absolute and/or relative margins of error; or it should specify that the output is to be given to a certain number of decimal places.
It is implied that the rounding convention is round-to-even. If output is expected to adhere to a different rounding convention (such as ↓4/5↑ or round-to-zero a.k.a. truncate), this should be clearly indicated.
Sample data
The output section should be immediately followed by sample data. Sample data should always be given in a fixed-width font; it is strongly recommended that you use the <pre> tag. There is, however, some flexibility in organization:
If there is only one sample test case, it's usually best to have a Sample Input section (headed by <h3>Sample Input</h3>), containing the sample input, followed by a Sample Output section (headed by <h3>Sample Output</h3>), containing the sample output.
If there are multiple test cases, one option is to have multiple pairs of sections (hence, Sample Input 1, Sample Output 1, Sample Input 2, Sample Output 2, and so on); another is to have a single section "Sample Cases" with a table that organizes the input-output pairs in a nice way.
Grading
If the possibility of earning partial credit exists for your problem, you should include a section about grading, headed by <h3>Grading</h3>, to give extra information about how partial credit may be earned on a test case (typically by outputting valid but suboptimal output) along with a precise specification of the formula used to calculate partial credit.
At your liberty, you may also explain how partial credit may be earned on the problem as a whole; typically, by stating some guarantees about the score that can be obtained by solving small test cases with suboptimal algorithms (e.g., "Test cases that contain no more than 300 lines of input each will make up at least 30% of the points for this problem.") If you are copying the problem from a source such as the IOI that gives such partial credit information, you should also copy such partial credit information into this section.
This is also the section where additional information about, e.g., interactivity should go. For example, a reminder to the reader to flush the standard output stream after each newline.
Analyses
Analyses should adhere to all the general guidelines for problem statements. You are also encouraged to include a byline that includes either your real name or your PEG Judge handle (or both). The format is otherwise flexible.
Unless the problem is very open-ended, the analysis should discuss the algorithmic and/or implementational techniques required to write a parsimonious solution that will achieve a full score on the problem ("model solution"), preferably with a source link to an implementation of said solution, preferably written in an IOI language (C, C++, or Pascal). It should similarly discuss any sub-optimal solutions hinted to in the "Grading" section of the problem statement (that is, simpler solutions that still achieve reasonable scores). Alternative solutions that can also earn full marks may also be discussed.
The algorithmic aspect of the solution explanation should give a complete high-level overview. Because the analysis will only be viewed by users who have solved the problem, it should be aimed at such an audience and should not burden the reader with details that will seem obvious. It should outline exactly enough details so that the reader will be able to implement the model solution after reading and understanding the analysis. For example, it should identify key data structures and basic well-known algorithms, but should avoid delving into their details unless they are novel. (The reader should be expected to be familiar with, e.g., binary indexed trees and Dijkstra's algorithm.) It should analyze the asymptotic time and space complexity of the solutions presented.
The implementational aspect can often be omitted, but should be included if simple implementations of the model algorithm may fail to earn full marks from want of constant optimization.
If you wish, you can also discuss the nature of the actual test data, heuristics that would work well or have been found to work well for those particular inputs, and tricky cases.
LaTeX
You can include mathematical formulae directly on analyses, just as you can include them on this wiki. For example,
<latex>$x = \frac{-b \pm \sqrt{b^2-4ac}}{2a}$</latex>
gives something similar to the quadratic formula rendered as an inline image.
Specifically, the string enclosed in the tags is inserted into the template:
\documentclass[11pt]{article}
\usepackage[paperwidth=8.5in, paperheight=100in]{geometry}
\pagestyle{empty}
\begin{document}
% your code goes here
\end{document}
which is then compiled and converted. The final result is an <img> link to the rendered LaTeX, stored on the server.
This works on problem descriptions, too (but not in comments).
Test data
Input data should be rigorously formatted. The admins will be very angry if the test data causes inconvenience to users by breaking any of the following rules:
It should consist only of printable ASCII characters and newlines. It should not contain other control characters (not even tabs), nor should it contain any non-ASCII characters, such as letters with accents, or fancy quotation marks, or dashes, or box-drawing characters, or...
In particular, because the PEG Judge runs Linux, text files should be in UNIX format: line-ending sequence must be the line feed character (0x0A, a.k.a., "\n") alone, without any accompanying carriage return character (0x0D, a.k.a. "\r") as in Windows. This is important because the reader should be entitled to the assumption that newlines will be in UNIX format, which may cause their programs to misbehave if certain input methods are used that can actually pick up carriage returns separately from line feeds (such as getchar()), as opposed to, e.g., scanf(), to which the difference is transparent.
If any strings containing spaces are to be given in input, each such string must occur on a separate line; no other input data may be given on the same line. For example, an input line that consists of an integer followed by a space followed by a string that may contain spaces is unacceptable.
If parts of the input data consist of individual characters (e.g., "The next line of input will contain a single character..."), do not allow those characters to be spaces unless there is a really good reason why you need to be able to give space characters in input. If you absolutely must include characters that may be spaces, each such character must occur on a separate line, with no other input data given on that line.
In all other cases, a line must not start with a space, end with a space, or have two or more spaces in a row. It should consist of zero or more items, each of which should be an integer, a floating-point number, a character (that is not a space), or a string (that contains no spaces); items should be separated by single spaces, unless there is some good reason to use a different separator (such as giving the score of a sporting event in a format like "2-0").
A number may start with at most one sign symbol, either "+" or "-".
An integer must not start with zero, unless the integer is 0 itself. It must be given as an unbroken sequence of digits. It may not contain any spaces, periods, or commas, and it may not be expressed in scientific notation.
The integer part of a floating-point number must not start with zero, unless it is 0 itself. It must be given as an unbroken sequence of digits, followed by an optional decimal point (which must be a period, rather than a comma), followed by another unbroken sequence of digits. Commas and spaces must not appear. Scientific notation may not be used. The decimal point is allowed to appear at the end (e.g., "2.") but not at the beginning (e.g., ".2"; use "0.2" instead).
The last datum must be followed by a single newline character; put another way, the last line of the input file must be blank. Furthermore, blank lines, other than the blank line at the very end, are allowed to appear only when they represent the empty string. For example, a test file that consists only of numbers is not allowed to contain any blank lines, except the one at the very end.
Unless there is a very good reason to do so, do not design the input format in such a way that the end of input can be detected only as the end of file. If you must include multiple test cases in one file, specify the number of test cases at the very beginning of the file, so that the program will know when to stop reading.
Do not require output to contain any of the forbidden characters either (ASCII control characters other than the space and newline, or non-ASCII characters), unless there is a very good reason to do so (see Pipe Dream). In the case that you wish to use non-ASCII characters, remember that the PEG Judge uses UTF-8 encoding everywhere.
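Many of the input-file rules above are mechanical and can be checked automatically. The sketch below is illustrative only — the function name `check_input_format` and the subset of rules it covers are our own, not part of any official PEG Judge tooling:

```python
def check_input_format(data: bytes) -> list:
    """Check a raw test-input file against a few of the rules above."""
    errors = []
    # UNIX line endings only: no carriage returns anywhere
    if b"\r" in data:
        errors.append("carriage return found: use UNIX line endings")
    # Printable ASCII plus newline only
    if any(ch != 10 and not (32 <= ch <= 126) for ch in data):
        errors.append("non-printable or non-ASCII byte found")
    # The last datum must be followed by a newline
    if data and not data.endswith(b"\n"):
        errors.append("file must end with a newline")
    # No leading, trailing, or doubled spaces on any line
    for lineno, line in enumerate(data.split(b"\n")[:-1], start=1):
        if line.startswith(b" ") or line.endswith(b" ") or b"  " in line:
            errors.append("line %d: bad spacing" % lineno)
    return errors

print(check_input_format(b"3\n1 2 3\n"))  # []
print(check_input_format(b"1 2 \n"))      # ['line 1: bad spacing']
```

A real checker would also need problem-specific knowledge (token types, bounds, where strings with spaces are allowed), which cannot be validated generically.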
\begin{document}
\title{Radio number of trees} \author{Devsi Bantva \\ Department of Mathematics \\ Lukhdhirji Engineering College, Morvi 363 642, Gujarat, India \\ \textit{[email protected]} \\ \\ Samir Vaidya \\ Department of Mathematics \\ Saurashtra University, Rajkot 360 005, Gujarat, India \\ \textit{[email protected]} \\ \\ Sanming Zhou \\ School of Mathematics and Statistics\\ The University of Melbourne, Parkville, VIC 3010, Australia\\ \textit{[email protected]}}
\date{} \openup 0.48\jot \maketitle
\begin{abstract}
A radio labeling of a graph $G$ is a mapping $f: V(G) \rightarrow \{0, 1, 2, \ldots\}$ such that $|f(u)-f(v)|\geq {\rm diam}(G) + 1 - d(u,v)$ for every pair of distinct vertices $u, v$ of $G$, where ${\rm diam}(G)$ is the diameter of $G$ and $d(u,v)$ the distance between $u$ and $v$ in $G$. The radio number of $G$ is the smallest integer $k$ such that $G$ has a radio labeling $f$ with $\max\{f(v) : v \in V(G)\} = k$. We give a necessary and sufficient condition for a lower bound on the radio number of trees to be achieved, two other sufficient conditions for the same bound to be achieved by a tree, and an upper bound on the radio number of trees. Using these, we determine the radio number for three families of trees.
\emph{Keywords}: Channel assignment, radio labeling, radio number, trees
\emph{AMS Subject Classification (2010)}: 05C78, 05C15 \end{abstract}
\section{Introduction} \label{sec:int}
In a graph model for the channel assignment problem, the transmitters are represented by the vertices of a graph; two vertices are adjacent or at distance two apart in the graph if the corresponding transmitters are \emph{very close} or \emph{close} to each other. Motivated by this problem Griggs and Yeh \cite{Griggs} introduced the following distance-two labeling problem: An \emph{$L(2,1)$-labeling} of a graph $G=(V(G),E(G))$ is a function $f$ from the vertex set $V(G)$ to the set of nonnegative integers such that $|f(u)-f(v)|\geq2$ if $d(u,v)=1$ and $|f(u)-f(v)|\geq1$ if $d(u,v)=2$, where $d(u, v)$ is the distance between $u$ and $v$ in $G$. The \emph{span} of $f$ is defined as $\max\{f(u)-f(v): u, v \in V(G)\}$, and the minimum span over all $L(2,1)$-labelings of $G$ is called the \emph{$\lambda$-number} of $G$, denoted by $\lambda(G)$. The $L(2,1)$-labeling and other distance-two labeling problems have been studied by many researchers in the past two decades; see \cite{Calamoneri} and \cite{Yeh1}.
It has been observed that interference among transmitters may go beyond two levels. Motivated by the channel assignment problem for FM radio stations, Chartrand \emph{et al.} \cite{Chartrand1} introduced the following radio labeling problem. Denote by ${\rm diam}(G)$ the \emph{diameter} of $G$, that is, the maximum distance among all pairs of vertices in $G$.
\begin{Definition} {\em A \emph{radio labeling} of a graph $G$ is a mapping $f: V(G) \rightarrow \{0, 1, 2, \ldots\}$ such that for every pair of distinct vertices $u, v$ of $G$, $$
d(u,v) + |f(u)-f(v)| \geq {\rm diam}(G) + 1. $$
The integer $f(u)$ is called the \emph{label} of $u$ under $f$, and the \emph{span} of $f$ is defined as ${\rm span}(f) = \max \{|f(u)-f(v)|: u, v \in V(G)\}$. The \emph{radio number} of $G$ is defined as $$ {\rm rn}(G) := \min_{f} {\rm span}(f) $$ with minimum over all radio labelings $f$ of $G$. A radio labeling $f$ of $G$ is \emph{optimal} if ${\rm span}(f) = {\rm rn}(G)$. } \end{Definition}
Observe that any radio labeling should assign different labels to distinct vertices. Note also that any optimal radio labeling must assign $0$ to some vertex. In the case when ${\rm diam}(G) = 2$ we have ${\rm rn}(G) = \lambda(G)$.
Determining the radio number of a graph is an interesting but challenging problem. So far the radio number is known only for a handful of families of graphs (see \cite{Chartrand} for a survey). Chartrand \emph{et al.} \cite{Chartrand1,Chartrand2,Zhang} studied the radio labeling problem for paths and cycles, and this was continued by Liu and Zhu \cite{Liu} who gave the exact value of the radio number for paths and cycles. We emphasise that even in these innocent-looking cases it was challenging to determine the radio number. In \cite{Daphne2,Daphne3}, Liu and Xie discussed the radio number for the square of paths and cycles. In \cite{Vaidya1,Vaidya2,Vaidya3}, Vaidya and Bantva studied the radio number for the total graph of paths, the strong product of $P_{2}$ with $P_{n}$ and linear cacti. In \cite{Benson}, Benson \emph{et al.} determined the radio number of all graphs of order $n$ and diameter $n-2$, where $n \ge 2$ is an integer. Bhatti \emph{et al.} \cite{Bhatti} studied the radio number of wheel-like graphs, while \v{C}ada \emph{et al.} \cite{Cada} discussed a general version of radio labelings of distance graphs. In \cite{Daphne1}, Liu gave a lower bound on the radio number for trees and presented a class of trees achieving this bound. In \cite{Li}, Li \emph{et al.} determined the radio number for complete $m$-ary trees. In \cite{Tuza}, Hal\'asz and Tuza determined the radio number of internally regular complete trees among other things. (A few distance-three labeling problems for such trees of even diameters were studied in \cite{KLZ}.) In spite of these efforts, the problem of determining the exact value of the radio number for trees is still open, and it seems unlikely that a universal formula exists for all trees.
Inspired by the work in \cite{Li}, in this paper we first give a necessary and sufficient condition for a lower bound \cite[Theorem 3]{Daphne1} (see also Lemma \ref{thm:lb}) on the radio number of trees to be achieved (Theorem \ref{thm:ub}), together with an optimal radio labeling. We also give two sufficient conditions for this bound to be achieved (Theorem \ref{thm:cor1}) and obtain an upper bound on the radio number of trees (Theorem \ref{thm:ub1}). These results provide methodologies for obtaining the exact values of or upper bounds on the radio number of trees, and using them we determine in Section \ref{sec:three} the radio number for three families of trees, namely banana trees, firecracker trees, and caterpillars in which all vertices on the spine have the same degree. Our result for caterpillars implies the result in \cite{Liu} for paths. As concluding remarks, in Section \ref{sec:rem} we demonstrate that the results on the radio numbers of internally regular complete trees (\cite[Theorem 1]{Tuza}) and complete $m$-ary trees for $m \ge 3$ (\cite[Theorem 2]{Li}) can be obtained by using our method.
\section{Preliminaries} \label{sec:prep}
We follow \cite{West} for graph-theoretic definitions and notation. A \emph{tree} is a connected graph that contains no cycle. In \cite{Daphne1} the \emph{weight} of $T$ from $v \in V(T)$ is defined as $w_{T}(v) = \sum_{u \in V(T)} d(u,v)$ and the \emph{weight} of $T$ as $w(T) = \min\{w_{T}(v) : v \in V(T)\}$. A vertex $v \in V(T)$ is a \emph{weight centre} \cite{Daphne1} of $T$ if $w_{T}(v) = w(T)$. Denote by $W(T)$ the set of weight centres of $T$. It was proved in \cite[Lemma 2]{Daphne1} that every tree $T$ has either one or two weight centres, and $T$ has two weight centres, say, $W(T) = \{w, w'\}$, if and only if $w$ and $w'$ are adjacent and $T - ww'$ consists of two equal-sized components. We view $T$ as rooted at its weight centre $W(T)$: if $W(T) = \{w\}$, then $T$ is rooted at $w$; if $W(T) = \{w, w'\}$ (where $w$ and $w'$ are adjacent), then $T$ is rooted at $w$ and $w'$ in the sense that both $w$ and $w'$ are at level $0$. In either case, if in $T$ the unique path from a weight centre to a vertex $v \not \in W(T)$ passes through a vertex $u$ (possibly with $u = v$), then $u$ is called an \emph{ancestor} of $v$, and $v$ is called a \emph{descendent} of $u$. If $v$ is a descendent of $u$ and is adjacent to $u$, then $v$ is a \emph{child} of $u$. Let $u \not \in W(T)$ be adjacent to a weight centre. The subtree induced by $u$ and all its descendents is called a \emph{branch} at $u$. Two branches are called \emph{different} if they are at two vertices adjacent to the same weight centre, and \emph{opposite} if they are at two vertices adjacent to different weight centres. Note that the latter case occurs only when $T$ has two weight centres. Define $$ L(u) := \min\{d(u, x): x \in W(T)\},\; u \in V(T) $$ to indicate the \emph{level} of $u$ in $T$. Define the \emph{total level} of $T$ as $$ L(T) := \sum_{u \in V(T)} L(u). 
$$ For any $u, v \in V(T)$, define $$ \phi(u,v) := \max\{L(x): \mbox{$x$ is a common ancestor of $u$ and $v$}\} $$ $$ \delta(u,v) := \left\{ \begin{array}{ll} 1, & \mbox{if $W(T) = \{w, w'\}$ and $P_{uv}$ contains the edge $ww'$} \\ [0.2cm] 0, & \mbox{otherwise.} \end{array} \right. $$
\begin{Lemma} \label{obs} Let $T$ be a tree with diameter $d \ge 2$. Then for any $u, v \in V(T)$ the following hold: \begin{enumerate}[\rm (a)]
\item $\phi(u,v) \geq 0$;
\item $\phi(u,v) = 0$ if and only if $u$ and $v$ are in different or opposite branches;
\item $\delta(u,v) = 1$ if and only if $T$ has two weight centres and $u$ and $v$ are in opposite branches;
\item the distance $d(u,v)$ in $T$ between $u$ and $v$ can be expressed as \begin{equation} \label{eq:dist} d(u,v) = L(u) + L(v) - 2 \phi(u,v) + \delta(u,v). \end{equation} \end{enumerate} \end{Lemma}
\section{Radio number of trees} \label{sec:tree}
A radio labeling of $T$ is an injective mapping $f$ from $V(T)$ to the set of nonnegative integers; we can always assume that $f$ assigns $0$ to some vertex. Thus $f$ induces a linear order of the vertices of $T$, namely $V(T) = \{u_{0}, u_{1}, \ldots, u_{p-1}\}$ (where $p=|V(T)|$) defined by $$ 0 = f(u_{0}) < f(u_{1}) < \cdots < f(u_{p-1}) = {\rm span}(f). $$ Define $$ \varepsilon(T) := \left\{ \begin{array}{ll} 1, & \mbox{if $T$ has only one weight centre} \\ [0.3cm] 0, & \mbox{if $T$ has two (adjacent) weight centres.} \end{array} \right. $$
The following result is essentially the same as \cite[Theorem 3]{Daphne1}, because when $T$ has a unique weight centre, say $w$, we have $L(T) = w_{T}(w) = w(T)$, and when $T$ has two weight centres, say $w$ and $w'$, the number of vertices in each of the two components of $T - ww'$ is equal to $p/2$ (\cite[Lemma 2]{Daphne1}) and $L(T) + p/2 = w_{T}(w) = w_{T}(w') = w(T)$. However, we give a proof of Lemma \ref{thm:lb} as it will be used in the proof of Theorem \ref{thm:ub} and subsequent discussion.
\begin{Lemma} \label{thm:lb} (\cite[Theorem 3]{Daphne1}) Let $T$ be a tree with order $p$ and diameter $d \ge 2$. Denote $\varepsilon = \varepsilon(T)$. Then \begin{equation} \label{eq:lb} {\rm rn}(T) \ge (p-1)(d+\varepsilon) - 2 L(T) + \varepsilon. \end{equation} \end{Lemma} \begin{proof}~It suffices to prove that any radio labeling of $T$ has span no less than the right-hand side of (\ref{eq:lb}). Suppose that $f$ is an arbitrary radio labeling of $T$. We order the vertices of $T$ such that $0 = f(u_{0}) < f(u_{1}) < f(u_{2}) < \cdots < f(u_{p-1})$. Since $f$ is a radio labeling, we have $f(u_{i+1}) - f(u_{i}) \geq (d+1)-d(u_{i},u_{i+1})$ for $0 \leq i \leq p-2$. Summing up these $p-1$ inequalities, we obtain \begin{equation} \label{eq:sumup} {\rm span}(f) = f(u_{p-1}) \geq (p-1)(d+1) - \sum_{i = 0}^{p-2} d(u_{i}, u_{i+1}). \end{equation}
\textsf{Case 1}: $T$ has one weight centre.
In this case, we have $\delta(u_{i}, u_{i+1}) = 0$ for $0 \le i \le p-2$ by the definition of the function $\delta$. Since $T$ has only one weight centre, $u_{0}$ and $u_{p-1}$ cannot be the root (weight centre) of $T$ simultaneously. Hence $L(u_{0}) + L(u_{p-1}) \ge 1$. Thus, by (\ref{eq:dist}) and Lemma \ref{obs}(a), \begin{eqnarray*} \sum_{i=0}^{p-2} d(u_{i},u_{i+1}) & = & \sum_{i=0}^{p-2} (L(u_{i}) + L(u_{i+1}) - 2\phi(u_{i}, u_{i+1})) \\ & = & 2 L(T) - L(u_{0}) - L(u_{p-1}) - 2 \sum_{i=0}^{p-2}\phi(u_{i}, u_{i+1}) \\ & \le & 2 L(T) - 1. \end{eqnarray*} This together with (\ref{eq:sumup}) yields ${\rm span}(f) = f(u_{p-1}) \geq (p-1)(d+1) - 2 L(T) + 1$.
\textsf{Case 2}: $T$ has two weight centres.
By Lemma \ref{obs}(a), we have $\phi(u_{i}, u_{i+1}) \ge 0$ for $0 \le i \le p-2$. We also have $\delta(u_{i}, u_{i+1}) \le 1$ for $0 \le i \le p-2$. Since $L(u_{0}) \ge 0$ and $L(u_{p-1}) \ge 0$, by (\ref{eq:dist}) we then have \begin{eqnarray*} \sum_{i=0}^{p-2}d(u_{i},u_{i+1}) & = & 2 L(T) - L(u_{0}) - L(u_{p-1}) - 2 \sum_{i=0}^{p-2}\phi(u_{i}, u_{i+1}) + \sum_{i=0}^{p-2} \delta(u_{i}, u_{i+1}) \\ & \leq & 2L(T) + (p-1). \end{eqnarray*} Combining this with (\ref{eq:sumup}) we obtain ${\rm span}(f) = f(u_{p-1}) \ge (p-1)d - 2L(T)$.
$\Box$
\end{proof}
The next result gives a necessary and sufficient condition for the equality in (\ref{eq:lb}) along with an optimal radio labeling. It will be crucial for our subsequent discussion.
\begin{Theorem} \label{thm:ub} Let $T$ be a tree with order $p$ and diameter $d \ge 2$. Denote $\varepsilon = \varepsilon(T)$. Then \begin{equation} \label{eq:ub} {\rm rn}(T) = (p-1)(d+\varepsilon) - 2 L(T) + \varepsilon \end{equation} holds if and only if there exists a linear order $u_0, u_1, \ldots, u_{p-1}$ of the vertices of $T$ such that \begin{enumerate}[\rm (a)] \item $u_0 = w$ and $u_{p-1} \in N(w)$ when $W(T) = \{w\}$, and $\{u_0, u_{p-1}\} = \{w, w'\}$ when $W(T) = \{w, w'\}$; \item the distance $d(u_{i}, u_{j})$ between $u_i$ and $u_j$ in $T$ satisfies \begin{equation} \label{eq:dij} d(u_{i}, u_{j}) \ge \sum_{t = i}^{j-1} (L(u_t)+L(u_{t+1})) - (j - i)(d+\varepsilon) + (d+1),\;\, 0 \le i < j \le p-1. \end{equation} \end{enumerate} Moreover, under this condition the mapping $f$ defined by \begin{equation} \label{eq:f0} f(u_{0}) = 0 \end{equation} \begin{equation} \label{eq:f} f(u_{i+1}) = f(u_{i}) - L(u_{i+1}) - L(u_{i}) + (d + \varepsilon),\;\, 0 \leq i \leq p-2 \end{equation} is an optimal radio labeling of $T$. \end{Theorem}
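To make the recipe (\ref{eq:f0})-(\ref{eq:f}) concrete, the following Python sketch (illustrative only; the distance table, the levels, $d$ and $\varepsilon$ are assumed to be supplied by the caller) evaluates $f$ along a given linear order and checks the defining radio condition $|f(u)-f(v)| \ge d+1-d(u,v)$ for every pair of vertices.

```python
from itertools import combinations

def radio_span_from_order(order, dist, levels, d, eps):
    """Evaluate the labeling f of (eq:f0)-(eq:f) along `order` and check
    whether f is a radio labeling, i.e. |f(u)-f(v)| >= d+1-d(u,v) for
    all pairs.  Returns the labeling and the result of the check."""
    f = {order[0]: 0}                                   # (eq:f0): f(u_0) = 0
    for i in range(len(order) - 1):
        u, v = order[i], order[i + 1]
        f[v] = f[u] - levels[v] - levels[u] + (d + eps)  # (eq:f)
    ok = all(abs(f[u] - f[v]) >= d + 1 - dist[u][v]
             for u, v in combinations(order, 2))
    return f, ok
```

For instance, on the path $P_4$ (two weight centres, $\varepsilon = 0$, $d = 3$) an order that starts at one weight centre, ends at the other and alternates between opposite branches yields a radio labeling of span $5$.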
We need some preparations in order to prove Theorem \ref{thm:ub}. Given a radio labeling $f$ of a tree $T$, define $$ x_{i} := f(u_{i+1}) - f(u_{i}) + L(u_{i+1}) + L(u_{i}) - (d + \varepsilon),\;\, 0 \leq i \leq p-2, $$
where $p=|V(T)|$, $d={\rm diam}(T)$ and $\varepsilon = \varepsilon(T)$ as before. Obviously, the values of the $x_i$'s depend on $f$.
\begin{Lemma} \label{lem:xi} $x_{i} \geq 2 \phi(u_{i},u_{i+1})$ and hence $x_{i} \geq 0$, $0 \le i < p-1$. \end{Lemma}
\begin{proof} By Lemma \ref{obs} and the definition of a radio labeling, we have $x_{i} \geq d + 1 - d(u_{i}, u_{i+1}) + L(u_{i+1}) + L(u_{i}) - (d + \varepsilon)$ = $2 \phi(u_{i}, u_{i+1}) + (1-\varepsilon - \delta(u_{i}, u_{i+1}))$ $\geq 2 \phi(u_{i}, u_{i+1})$.
$\Box$
\end{proof}
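For a given labeling the quantities $x_i$ are immediate to compute; the short Python sketch below (illustrative; $f$ and the levels are passed as lists indexed along the order $u_0, \ldots, u_{p-1}$) does exactly this. For a labeling built by rule (\ref{eq:f}) every $x_i$ equals $0$.

```python
def x_values(f_list, levels_list, d, eps):
    """x_i = f(u_{i+1}) - f(u_i) + L(u_{i+1}) + L(u_i) - (d + eps),
    for 0 <= i <= p-2, as defined in the text."""
    return [f_list[i + 1] - f_list[i]
            + levels_list[i + 1] + levels_list[i] - (d + eps)
            for i in range(len(f_list) - 1)]
```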
\begin{Lemma} \label{lem3} Let $T$ be a tree with order $p$ and diameter $d \ge 2$. Denote $\varepsilon = \varepsilon(T)$. Let $f$ be an injective mapping from $V(T)$ to the set of nonnegative integers, and let $u_{0}, u_{1}, \ldots, u_{p-1}$ be the vertices of $T$ ordered in such a way that $0 = f(u_{0}) < f(u_{1}) < \cdots < f(u_{p-1})$. Then $f$ is a radio labeling of $T$ if and only if for any $0 \leq i < j \leq p-1$, \begin{equation} \label{eq:sumxi} \sum_{t = i}^{j - 1} x_{t} \geq 2 \sum_{t = i+1}^{j - 1} L(u_{t}) + 2 \phi(u_{i},u_{j}) - \delta(u_{i},u_{j}) - (j - i)(d+\varepsilon) + (d+1); \end{equation} that is, \begin{equation} \label{eq:xi} \sum_{t = i}^{j - 1} x_{t} \geq \sum_{t = i}^{j-1} (L(u_t)+L(u_{t+1})) - d(u_{i}, u_{j}) - (j - i)(d+\varepsilon) + (d+1). \end{equation} \end{Lemma}
\begin{proof} We have \begin{eqnarray} \sum_{t = i}^{j - 1} x_{t} & = & \sum_{t = i}^{j - 1} (f(u_{t+1}) - f(u_{t}) + L(u_{t+1}) + L(u_{t}) - (d+\varepsilon)) \nonumber \\ & = & f(u_{j}) - f(u_{i}) + 2\sum_{t = i+1}^{j - 1} L(u_{t}) + L(u_{i}) + L(u_{j}) - (j-i)(d+\varepsilon). \label{eq:sum} \end{eqnarray} Thus, if $f$ is a radio labeling of $T$, then by (\ref{eq:dist}), for any $i, j$ with $0 \leq i < j \leq p-1$, \begin{eqnarray*} \sum_{t = i}^{j - 1} x_{t} & \ge & d + 1 - d(u_{i}, u_{j}) + 2 \sum_{t = i+1}^{j - 1} L(u_{t}) + L(u_{i}) + L(u_{j}) - (j-i)(d+\varepsilon) \\ & = & 2 \sum_{t = i+1}^{j - 1} L(u_{t}) + 2 \phi(u_{i},u_{j}) - \delta(u_{i}, u_{j}) - (j - i)(d+\varepsilon) + (d+1). \end{eqnarray*}
Conversely, if $f$ satisfies (\ref{eq:sumxi}), then by (\ref{eq:sum}) and (\ref{eq:dist}) and the definition of $x_t$, \begin{eqnarray*} f(u_{j}) - f(u_{i}) & = & \sum_{t = i}^{j-1} (f(u_{t+1}) - f(u_{t})) \\ & = & \sum_{t = i}^{j-1} (x_{t} - L(u_{t}) - L(u_{t+1}) + (d+\varepsilon)) \\ & = & \sum_{t = i}^{j-1} x_{t} - 2 \sum_{t = i+1}^{j-1} L(u_t) - L(u_{i}) - L(u_{j}) + (j - i)(d+\varepsilon)\\ & \geq & d + 1 - (L(u_{i}) + L(u_{j}) - 2 \phi(u_{i},u_{j}) + \delta(u_{i}, u_{j})) \\ & = & d + 1 - d(u_{i}, u_{j}) \end{eqnarray*} and hence $f$ is a radio labeling of $T$.
$\Box$
\end{proof}
\begin{proof}\textbf{of Theorem \ref{thm:ub}}~ \textsf{Necessity}:~Suppose that (\ref{eq:ub}) holds. Let $f$ be an optimal radio labeling of $T$ with the corresponding ordering of vertices given by $0 = f(u_{0}) < f(u_{1}) < f(u_{2}) < \cdots < f(u_{p-1})$. Then ${\rm span}(f) = {\rm rn}(T) = (p-1)(d+\varepsilon) - 2 L(T) + \varepsilon$. Thus, from the proof of Lemma \ref{thm:lb}, all inequalities there must hold with equality for $f$. More explicitly, we have $f(u_{i+1}) - f(u_{i}) = (d+1)-d(u_{i},u_{i+1})$ for $0 \leq i \leq p-2$, and (i) if $T$ has a unique weight centre, then $L(u_{0}) + L(u_{p-1}) =1$ and $\phi(u_{i}, u_{i+1}) = 0$ for $0 \le i \le p-2$, and (ii) if $T$ has two weight centres, then $L(u_{0}) = L(u_{p-1}) = 0$ (that is, $\{u_0, u_{p-1}\} = \{w, w'\}$) and $\phi(u_{i}, u_{i+1}) = 0$, $\delta(u_{i}, u_{i+1}) = 1$ for $0 \le i \le p-2$. In the former case, we may assume without loss of generality that $L(u_{0}) = 0$ and $L(u_{p-1}) = 1$ (that is, $u_0 = w$ and $u_{p-1}$ is adjacent to $w$), because the mapping ${\rm span}(f) - f$ is also an optimal radio labeling of $T$. In either case, by (\ref{eq:dist}), we have $f(u_{i+1}) - f(u_{i}) = (d+\varepsilon)-L(u_{i+1})-L(u_{i})$, that is, $x_i = 0$, for $0 \leq i \leq p-2$. Since $f$ is a radio labeling, it satisfies (\ref{eq:xi}). So the right-hand side of (\ref{eq:xi}) must be non-positive and (\ref{eq:dij}) follows.
\textsf{Sufficiency}:~Suppose that a linear order $u_0, u_1, \ldots, u_{p-1}$ of the vertices of $T$ satisfies (\ref{eq:dij}), and $f$ is defined by (\ref{eq:f0}) and (\ref{eq:f}). By Lemma \ref{thm:lb} it suffices to prove that $f$ is a radio labeling of $T$ and ${\rm span}(f) = (p-1)(d+\varepsilon) - 2 L(T) + \varepsilon$.
In fact, since $T$ has diameter $d$, we have $L(u_{i}) + L(u_{i+1}) < d + \varepsilon$ for $0 \leq i \leq p-2$. Thus, by (\ref{eq:f0}) and (\ref{eq:f}), we have $0 = f(u_{0}) < f(u_{1}) < \cdots < f(u_{p-1})$. By (\ref{eq:f}), we have $x_i = 0$ for $0 \le i \le p-2$. Thus, for $0 \le i < j \le p-1$, the left-hand side of (\ref{eq:xi}) is equal to $0$. On the other hand, by (\ref{eq:dij}), the right-hand side of (\ref{eq:xi}) is non-positive. Therefore, $f$ satisfies (\ref{eq:xi}) and so is a radio labeling of $T$. The span of $f$ is given by \begin{eqnarray*} {\rm span}(f) & = & f(u_{p-1}) - f(u_{0}) \\ & = & \sum_{i=0}^{p-2} (f(u_{i+1}) - f(u_{i})) \\ & = & (p-1)(d+\varepsilon) - \sum_{i=0}^{p-2} (L(u_{i+1}) + L(u_{i})) \\ & = & (p-1)(d+\varepsilon) - 2 L(T) + L(u_0) + L(u_{p-1}) \\ & = & (p-1)(d+\varepsilon) - 2 L(T) + \varepsilon. \end{eqnarray*} Therefore, ${\rm rn}(T) \le (p-1)(d+\varepsilon) - 2 L(T) + \varepsilon$. This together with (\ref{eq:lb}) implies (\ref{eq:ub}) and that $f$ is an optimal radio labeling of $T$.
$\Box$
\end{proof}
\begin{Remark} {\em In general, it seems difficult to decide whether a given tree satisfies the conditions in Theorem \ref{thm:ub}, and, if it does, how to find a linear order meeting (a)-(b) in Theorem \ref{thm:ub}. Nevertheless, this may be achieved for some special families of trees such as the ones in the next section.
Consider the following properties:
($A_i$) $u_{i}$ and $u_{i+1}$ are in different branches when $W(T) = \{w\}$ and in opposite branches when $W(T) = \{w, w'\}$;
($B_i$) $L(u_{i}) \leq (d+1)/2$ when $W(T) = \{w\}$, and $L(u_{i}) \leq (d-1)/2$ when $W(T) = \{w, w'\}$;
($C_{ij}$) $\phi(u_{i},u_{j}) \leq (j-i-1)((d+\varepsilon)/2) - \sum_{t=i+1}^{j-1}L(u_{t})-((1-\varepsilon)/2)$ if $u_{i}$ and $u_{j}$ are in the same branch.
It can be verified that any linear order $u_{0}, u_{1}, \ldots, u_{p-1}$ that satisfies (a)-(b) in Theorem \ref{thm:ub} also satisfies ($A_i$) for $0 \le i \le p-2$, ($B_i$) for $0 \leq i \leq p-1$ and ($C_{ij}$) for $1 \leq i < j \leq p-1$. In fact, one can show ($A_i$) by taking $j = i+1$ in \eqref{eq:dij} and using Lemma \ref{obs}. Clearly, ($B_0$) and ($B_{p-1}$) hold. For $1 \le i \le p-2$, by applying (\ref{eq:dij}) to $d(u_{i-1}, u_{i+1})$ and noting $\phi(u_{i-1},u_{i+1}) \geq 0$, we obtain $L(u_i) \le L(u_i) + \phi(u_{i-1},u_{i+1}) \leq (d+1)/2$ if $W(T) = \{w\}$ and $L(u_i) \le L(u_i) + \phi(u_{i-1},u_{i+1}) \leq (d-1)/2$ if $W(T) = \{w, w'\}$, and hence ($B_i$) holds. Using \eqref{eq:dist} and \eqref{eq:dij}, one can show that ($C_{ij}$) holds for $1 \leq i < j \leq p-1$.
Inspired by the properties above, one may try to test whether the vertices of a given tree $T$ can be ordered in such a way that (a) and (b) in Theorem \ref{thm:ub} are satisfied, and to produce such a linear order if it exists, by using the following procedure:
(i) Set $u_0 = w$.
(ii) Choose $u_{p-1} \in N(w)$ if $W(T) = \{w\}$ and $u_{p-1} = w'$ if $W(T) = \{w, w'\}$.
(iii) Choose a vertex $u_1$ (in any branch) other than $u_0$ and $u_{p-1}$ such that property ($B_1$) is respected.
(iv) In general, suppose that $u_0, u_1, \ldots, u_{t}$ have been put in order such that ($A_i$) holds for $0 \le i \le t-1$, ($B_i$) holds for $0 \leq i \leq t$, and ($C_{ij}$) holds for $1 \leq i < j \leq t$. If $t = p-2$, stop and output the linear order $u_0, u_1, \ldots, u_{p-1}$. If $t < p-2$, choose $u_{t+1}$ from $V(T) \setminus \{u_0, u_1, \ldots, u_{t}, u_{p-1}\}$ such that ($A_t$), ($B_{t+1}$) and ($C_{i, t+1}$), $1 \leq i < t+1$ are respected, and continue the process with the longer sequence $u_0, u_1, \ldots, u_{t}, u_{t+1}$, if such a vertex $u_{t+1}$ exists. (It can be verified that ($C_{i, t+1}$) holds if $L(u_{t+1})+L(u_{i}) < \sum_{k=i+1}^{t}(d+\varepsilon-2L(u_{k}))+\varepsilon$. So it suffices to ensure that ($C_{i, t+1}$) is respected for those $i$ such that $L(u_{t+1})+L(u_{i}) \ge \sum_{k=i+1}^{t}(d+\varepsilon-2L(u_{k}))+\varepsilon$.) If no such vertex $u_{t+1}$ exists, one may try to choose a different $u_i$ for some $i \le t$ and run the procedure for the sequence $u_0, u_1, \ldots, u_{i}$.
It can be proved that, if we terminate with $t=p-2$, then the linear order $u_0, u_1, \ldots, u_{p-1}$ produced above satisfies (a) and (b) in Theorem \ref{thm:ub} and therefore the radio number of $T$ is given by \eqref{eq:ub}.
Obviously, the procedure above is not an algorithm, but it can be easily modified to give an enumerative algorithm by considering all possible choices for $u_{t+1}$ in each iteration. Though this algorithm is likely to be exponential in general (as it requires enumeration of a large number of possibilities), it may be efficient for some families of trees with special structures or small orders. } \end{Remark}
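For small trees, the test of condition (\ref{eq:dij}) and the enumerative search just described are easy to carry out by machine. The following Python sketch is illustrative (distances and levels are assumed precomputed; the search enumerates all orders and is exponential, so it is only meant for trees of very small order):

```python
from itertools import permutations

def satisfies_dij(order, dist, levels, d, eps):
    """Check condition (eq:dij) for a given linear order u_0, ..., u_{p-1}."""
    p = len(order)
    pref = [0]                      # pref[t] = L(u_0) + ... + L(u_{t-1})
    for u in order:
        pref.append(pref[-1] + levels[u])
    for i in range(p):
        for j in range(i + 1, p):
            # sum_{t=i}^{j-1} (L(u_t) + L(u_{t+1}))
            s = (pref[j + 1] - pref[i]) + (pref[j] - pref[i + 1])
            if dist[order[i]][order[j]] < s - (j - i) * (d + eps) + (d + 1):
                return False
    return True

def find_order(vertices, dist, levels, d, eps):
    """Enumerative search over all linear orders (exponential; tiny trees only)."""
    for order in permutations(vertices):
        if satisfies_dij(list(order), dist, levels, d, eps):
            return list(order)
    return None
```

On the path $P_4$, for example, the alternating order starting at one weight centre and ending at the other passes the test, while an order with two consecutive vertices in the same branch already fails at a pair $j = i+1$.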
We now present two sets of sufficient conditions for \eqref{eq:ub} to hold. These conditions are easier to verify than \eqref{eq:dij} in some cases and will be used in the next section.
\begin{Theorem} \label{thm:cor1} Let $T$ be a tree with order $p$ and diameter $d \ge 2$. Denote $\varepsilon = \varepsilon(T)$. Suppose that there exists a linear order $u_0, u_1, \ldots, u_{p-1}$ of the vertices of $T$ such that \begin{enumerate}[\rm (a)] \item $u_0 = w$ and $u_{p-1} \in N(w)$ when $W(T) = \{w\}$, and $\{u_0, u_{p-1}\} = \{w, w'\}$ when $W(T) = \{w, w'\}$; \item $u_{i}$ and $u_{i+1}$ are in different branches if $W(T) = \{w\}$ and in opposite branches if $W(T) = \{w, w'\}$, $0 \le i \le p-2$; \end{enumerate} and one of the following holds: \begin{enumerate}[\rm (c)] \item $\min\{d(u_{i},u_{i+1}),d(u_{i+1},u_{i+2})\} \leq (d+1-\varepsilon)/2,\; 0 \leq i \leq p-3$; \item[\rm (d)] $d(u_{i},u_{i+1}) \leq (d+1+\varepsilon)/2$, $0 \leq i \leq p-2$. \end{enumerate} Then ${\rm rn}(T)$ is given by \eqref{eq:ub} and $f$ defined in \eqref{eq:f0}-\eqref{eq:f} is an optimal radio labeling of $T$. \end{Theorem}
\begin{proof} It suffices to prove that the linear order $u_0, u_1, \ldots, u_{p-1}$ satisfies (\ref{eq:dij}). Denote by $S_{i,j}$ the right-hand side of (\ref{eq:dij}) with respect to this order. In view of \eqref{eq:dist}, we may assume $j-i \geq 2$.
\textsf{Case 1:} $W(T) = \{w\}$.
Suppose (a), (b) and (c) hold. If $j \geq i+4$, then $\min \{L(u_{t})+L(u_{t+1}): i \le t \le j-1\} \leq d/2$ and $\max \{L(u_{t})+L(u_{t+1}): i \le t \le j-1\} = d$ as $\min\{d(u_{i},u_{i+1}), d(u_{i+1},u_{i+2})\} \leq d/2$. Hence $S_{i,j} \leq \left((j-i)/2\right)\left(d/2+d\right)-3(d+1) \leq 2\left(3d/2\right)-3d-3 = -3 < d(u_{i},u_{j})$. If $j = i+3$, then either (i) $d(u_{i},u_{i+1}) \leq d/2$, $d(u_{i+1},u_{i+2}) > d/2$ and $d(u_{i+2},u_{i+3}) \leq d/2$, or (ii) $d(u_{i},u_{i+1}) > d/2$, $d(u_{i+1},u_{i+2}) \leq d/2$ and $d(u_{i+2},u_{i+3}) > d/2$. In case (i), we have $L(u_{i})+L(u_{i+1}) \leq d/2$, $d/2 < L(u_{i+1})+L(u_{i+2}) \leq d$ and $L(u_{i+2})+L(u_{i+3}) \leq d/2$. Hence $S_{i,j} \leq \left(d/2+d+d/2\right)-2(d+1) = -2 < d(u_{i},u_{j})$. In case (ii), we have $d/2 < L(u_{i})+L(u_{i+1}) < d$, $L(u_{i+1})+L(u_{i+2}) \leq d/2$, $d/2 < L(u_{i+2})+L(u_{i+3}) \leq d$, and $d(u_{i},u_{i+3}) \geq d/2$ as $u_{i}$ and $u_{i+3}$ are in different branches. Hence $S_{i,j} \leq \left(d+d/2+d\right)-2(d+1) = d/2-2 < d(u_{i},u_{j})$. If $j = i+2$, then either (i) $d(u_{i},u_{i+1}) \leq d/2$ and $d(u_{i+1},u_{i+2}) \leq d/2$, or (ii) $d(u_{i},u_{i+1}) \leq d/2$ and $d/2 < d(u_{i+1},u_{i+2}) \leq d$. In case (i), we have $L(u_{i})+L(u_{i+1}) \leq d/2$ and $L(u_{i+1})+L(u_{i+2}) \leq d/2$, and hence $S_{i,j} \leq \left(d/2+d/2\right)-(d+1) = -1 < d(u_{i},u_{j})$. In case (ii), we have $L(u_{i})+L(u_{i+1}) \leq d/2$ and $L(u_{i+1})+L(u_{i+2}) = d(u_{i+1},u_{i+2})$, and hence $S_{i,j} \leq \left(d/2+d(u_{i+1},u_{i+2})\right)-(d+1) = d(u_{i+1},u_{i+2})-\left(d/2+1\right) < d(u_{i},u_{j})$.
Suppose (a), (b) and (d) hold. Then, for $0 \leq i \leq p-2$, $d(u_{i},u_{i+1}) \leq (d+1+\varepsilon)/2$ and $L(u_{i})+L(u_{i+1}) \leq (d+2)/2$ as $u_{i}$ and $u_{i+1}$ are in different branches. Hence $S_{i,j} \leq \sum_{t=i}^{j-1}\left((d+2)/2\right)-(j-i)(d+1)+(d+1) = (d+1)-(j-i)\left(d/2\right) \leq 1 \leq d(u_{i},u_{j})$ as $j-i \geq 2$.
\textsf{Case 2:} $W(T) = \{w, w'\}$.
Suppose (a), (b) and (c) hold. If $j \geq i+4$, then $\min\{L(u_{t})+L(u_{t+1}): i \le t \le j-1\} \leq (d-1)/2$ and $\max\{L(u_{t})+L(u_{t+1}): i \le t \le j-1 \} = d-1$ as $\min\{d(u_{i},u_{i+1}),d(u_{i+1},u_{i+2})\} \leq (d+1)/2$. Hence $S_{i,j} \leq \left((j-i)/2\right)\left((d-1)/2+d-1\right)-4d+(d+1) \leq 2\left(3(d-1)/2\right)-3d+1 = -2 < d(u_{i},u_{j})$. If $j = i+3$, then either (i) $d(u_{i},u_{i+1}) \leq (d+1)/2$, $d/2 < d(u_{i+1},u_{i+2}) \leq d$ and $d(u_{i+2},u_{i+3}) \leq (d+1)/2$, or (ii) $(d+1)/2 < d(u_{i},u_{i+1}) \leq d$, $d(u_{i+1},u_{i+2}) \leq d/2$ and $(d+1)/2 < d(u_{i+2},u_{i+3}) \leq d$. In case (i), $L(u_{i})+L(u_{i+1}) \leq (d-1)/2$, $(d-1)/2 < L(u_{i+1})+L(u_{i+2}) \leq d-1$ and $L(u_{i+2})+L(u_{i+3}) \leq (d-1)/2$. Hence $S_{i,j} \leq \left((d-1)/2+d-1+(d-1)/2\right)-3d+(d+1) \leq \left(2d-2\right)-2d+1 = -1 < d(u_{i},u_{j})$. In case (ii), $(d+1)/2 < d(u_{i},u_{i+1}) \leq d$, $d(u_{i+1},u_{i+2}) \leq (d+1)/2$ and $(d+1)/2 < d(u_{i+2},u_{i+3}) \leq d$. Hence $S_{i,j} \leq \left(d-1+(d-1)/2+d-1\right)-3d+(d+1) \leq \left(5(d-1)/2\right)-2d+1 = (d-3)/2 < d(u_{i},u_{j})$ as $u_{i}$ and $u_{j}$ are in opposite branches. If $j = i+2$, then either (i) $d(u_{i},u_{i+1}) \leq (d+1)/2$ and $d(u_{i+1},u_{i+2}) \leq (d+1)/2$, or (ii) $d(u_{i},u_{i+1}) \leq (d+1)/2$ and $d(u_{i+1},u_{i+2}) > (d+1)/2$. In the former case, we have $L(u_{i})+L(u_{i+1}) \leq (d-1)/2$ and $L(u_{i+1})+L(u_{i+2}) \leq (d-1)/2$, and hence $S_{i,j} \leq \left((d-1)/2+(d-1)/2\right)-2d+(d+1) \leq \left(d-1\right)-d+1 = 0 < d(u_{i},u_{j})$. In the latter case, we have $L(u_{i})+L(u_{i+1}) \leq (d-1)/2$ and $(d-1)/2 < L(u_{i+1})+L(u_{i+2}) \leq d-1$, and hence $S_{i,j} \leq \left((d-1)/2 + d(u_{i+1},u_{i+2}) - 1\right)-2d+(d+1) \leq d(u_{i+1},u_{i+2}) - \left((d+1)/2\right) < d(u_{i},u_{j})$.
Suppose (a), (b) and (d) hold. Then, for $0 \leq i \leq p-2$, $d(u_{i},u_{i+1}) \leq (d+1+\varepsilon)/2$ and $L(u_{i})+L(u_{i+1}) \leq (d-1)/2$ as $u_{i}$ and $u_{i+1}$ are in opposite branches. Hence $S_{i,j} \leq \sum_{t=i}^{j-1}\left((d-1)/2\right)-(j-i) d + (d+1) = (d+1)-(j-i)\left((d+1)/2\right) \leq 0 < d(u_{i},u_{j})$ as $j-i \geq 2$.
$\Box$
\end{proof}
The proofs of Theorems \ref{thm:ub} and \ref{thm:cor1} imply the following result, which will be used in the next section.
\begin{Theorem} \label{thm:ub1} Let $T$ be a tree with order $p$ and diameter $d \ge 2$. Denote $\varepsilon = \varepsilon(T)$. Then, for any linear order $u_0, u_1, \ldots, u_{p-1}$ of the vertices of $T$ satisfying (\ref{eq:dij}), or (b) and one of (c) and (d) in Theorem \ref{thm:cor1}, the mapping $f$ given by (\ref{eq:f0}) and (\ref{eq:f}) is a radio labeling of $T$. Moreover, if in addition $L(u_0) + L(u_{p-1}) = k + 1$ when $W(T) = \{w\}$ and $L(u_0) + L(u_{p-1}) = k$ when $W(T) = \{w, w'\}$, then \begin{equation} \label{eq:ub1} {\rm rn}(T) \le (p-1)(d+\varepsilon) - 2 L(T) + \varepsilon + k \end{equation} and {\rm span}(f) is equal to this upper bound. \end{Theorem}
\section{Radio number for three families of trees} \label{sec:three}
In this section we use Theorems \ref{thm:ub}, \ref{thm:cor1} and \ref{thm:ub1} to determine the radio number for three families of trees. We continue to use the terminology and notation in the previous section.
\subsection{Banana trees}
A $k$-star is a tree consisting of $k$ leaves and another vertex joined to all leaves by edges. We define the \emph{$(n,k)$-banana tree}, denoted by $B(n,k)$, to be the tree obtained by joining one leaf of each of $n$ copies of a $(k-1)$-star to a single root (which is distinct from all vertices in the $k$-stars). See Fig. \ref{fig1} for an illustration. It is clear that $B(n,k)$ has diameter 6 and exactly one weight centre if $n \ge 2$.
\begin{Theorem} \label{thm:banana} Let $n \geq 5$ and $k \geq 4$ be integers. Then $$ {\rm rn}(B(n,k)) = n(k+6)+1. $$ \end{Theorem}
\begin{proof} The order and total level of $B(n,k)$ are given by $p = nk + 1$ and $L(B(n,k)) = 3n(k-1)$ respectively. Plugging these into (\ref{eq:lb}), we obtain ${\rm rn}(B(n,k)) \ge n(k+6)+1$. We now prove that this lower bound is tight by giving a linear order of the vertices of $B(n,k)$ satisfying (\ref{eq:dij}).
Let $w^{i}_{1}, w^{i}_{2}, \ldots, w^{i}_{k}$ denote the vertices of the $i^{th}$ copy of the $(k-1)$-star in $B(n,k)$, where $w^{i}_{1}$ is the apex vertex (centre) and $w^{i}_{2}, \ldots, w^{i}_{k}$ are the leaves. Without loss of generality we assume that $w^{1}_{k}, w^{2}_{k}, \ldots, w^{n}_{k}$ are joined by edges to a common vertex $w$, which is the unique weight centre of $B(n,k)$.
We give a linear order $u_{0}, u_{1}, u_{2}, \ldots, u_{p-1}$ of the vertices of $B(n,k)$ as follows. We first set $u_{0}$ = $w$. Next, for $1 \leq t \leq p-1$, let \begin{center} $u_{t}$ := $w^{i}_{j}$, where $t$ = $(j-1)n + i$, $1 \leq i \leq n$, $1 \leq j \leq k$. \end{center} Note that $u_{p-1}$ = $w_{k}^{n}$ is adjacent to $w$ and for $1 \leq i \leq p-2$, $u_{i}$ and $u_{i+1}$ are in different branches so that $\phi(u_{i},u_{i+1})$ = 0.
\textsf{Claim:} The linear order $u_{0}, u_{1}, \ldots, u_{p-1}$ above satisfies (\ref{eq:dij}).
To prove this consider any two vertices $u_{i}, u_{j}$ of $B(n,k)$ with $0 \leq i < j \leq p-1$. Since the diameter of $B(n,k)$ is $6$, the right-hand side of (\ref{eq:dij}) is given by $S_{i,j} := \sum_{t=i}^{j-1}(L(u_{t})+L(u_{t+1})-7)+7$. It is easy to verify that (\ref{eq:dij}) holds when $i=0$. We assume $i \ge 1$ in the sequel.
\textsf{Case 1:} $1 \leq i < j \leq n$.~~We have $d(u_{i},u_{j}) = 4$ and $L(u_{t}) = 2$ for $i \leq t \leq j$. Hence $S_{i,j} = 7-3(j-i) \leq 4 = d(u_{i},u_{j})$.
\textsf{Case 2:} $i \leq n < j$.~~We have $L(u_{i}) = 2$ and $L(u_{t}) \leq 3$ for $i < t \leq j$ and hence $S_{i,j} \leq 5-(j-i-1)$. If $j-i < n$, then $S_{i,j} \leq 5 = d(u_{i},u_{j})$. If $j-i \geq n$, then $S_{i,j} \leq 1 \le d(u_{i},u_{j})$.
\textsf{Case 3:} $n < i < j \leq p-n-1$.~~We have $L(u_{t})$ = 3 for $i \leq t \leq j$ and hence $S_{i,j}$ = $7-(j-i)$. If $j-i < n$, then $S_{i,j} \leq 6 = d(u_{i},u_{j})$. If $j-i \geq n$, then $S_{i,j} \leq 2 \le d(u_{i},u_{j})$.
\textsf{Case 4:} $i \leq p-n-1 < j$.~~We have $L(u_{t}) \leq 3$ for $i \leq t \leq j-1$ and $L(u_{j}) = 1$, and so $S_{i,j} \leq 4-(j-i-1)$. If $j-i < n$, then $S_{i,j} \leq 4 = d(u_{i},u_{j})$. If $j-i \geq n$, then $S_{i,j} \leq 0 < 2 \le d(u_{i},u_{j})$.
\textsf{Case 5:} $p-n-1 < i < j \leq p-1$.~~We have $L(u_{t}) = 1$ for $i \leq t \leq j$. Hence $S_{i,j} = 7-5(j-i) \leq 2 = d(u_{i},u_{j})$.
This proves the claim. Therefore, by Theorem \ref{thm:ub}, we have ${\rm rn}(B(n,k)) = n(k+6)+1$ and moreover the labeling given by (\ref{eq:f0})-(\ref{eq:f}) (applied to the current case) is an optimal radio labeling of $B(n,k)$.
$\Box$
\end{proof}
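The order used in the proof can also be verified mechanically for particular parameters. The Python sketch below (illustrative; the vertex names are ad hoc) builds $B(n,k)$, lists its vertices in the order $u_0, u_1, \ldots, u_{p-1}$ above, applies (\ref{eq:f0})-(\ref{eq:f}), and confirms that the result is a radio labeling, returning its span $n(k+6)+1$.

```python
from collections import deque
from itertools import combinations

def banana_check(n, k):
    """Build B(n,k), order its vertices as in the proof, label them by
    (eq:f0)-(eq:f), verify the radio condition, and return the span."""
    root = ('w',)
    adj = {}
    def add_edge(a, b):
        adj.setdefault(a, []).append(b)
        adj.setdefault(b, []).append(a)
    for i in range(1, n + 1):
        add_edge(root, ('v', i, k))          # w^i_k is joined to the root w
        add_edge(('v', i, k), ('v', i, 1))   # apex w^i_1 of the i-th star
        for j in range(2, k):                # remaining leaves w^i_2..w^i_{k-1}
            add_edge(('v', i, 1), ('v', i, j))
    def bfs(s):
        dist = {s: 0}
        q = deque([s])
        while q:
            u = q.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    q.append(v)
        return dist
    dist = {u: bfs(u) for u in adj}
    level = dist[root]                       # levels are distances from w
    d, eps = 6, 1                            # diameter 6, one weight centre
    # u_t = w^i_j with t = (j-1)n + i, as in the proof
    order = [root] + [('v', i, j) for j in range(1, k + 1) for i in range(1, n + 1)]
    f = {order[0]: 0}
    for a, b in zip(order, order[1:]):
        f[b] = f[a] - level[b] - level[a] + (d + eps)
    assert all(abs(f[u] - f[v]) >= d + 1 - dist[u][v]
               for u, v in combinations(order, 2)), "not a radio labeling"
    return max(f.values())
```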
The reader is referred to Fig. \ref{fig1} for an illustration of naming, ordering and labeling of the vertices of $B(5,4)$ by using the procedure in the proof of Theorem \ref{thm:banana}. \begin{figure}
\caption{\small An optimal radio labeling of $B(5,4)$ together with the corresponding ordering of vertices.}
\label{fig1}
\end{figure}
\subsection{Firecracker trees}
We define the \emph{$(n,k)$-firecracker tree}, denoted by $F(n,k)$, to be the tree obtained by taking $n$ copies of a $(k-1)$-star and identifying a leaf of each of them to a different vertex of a path of length $n-1$ (see Fig. \ref{fig2A}-\ref{fig2B}). It is clear that $F(n,k)$ has one or two weight centres depending on whether $n$ is odd or even.
\begin{Theorem} \label{thm:fire} Let $n, k \geq 3$ be integers. Denote $\varepsilon = \varepsilon(F(n,k))$, which is $1$ if $n$ is odd and $0$ if $n$ is even. Then \begin{equation} \label{fire:rn} {\rm rn}(F(n,k)) = \frac{(n^{2}+\varepsilon)k}{2}+5n-3. \end{equation} \end{Theorem}
\begin{proof} $F(n,k)$ has order $p = nk$, diameter $d = n+3$ and total level $$ L(F(n,k)) = \left\{ \begin{array}{ll} \frac{1}{4}\left(kn^{2}+(8k-12)n-k\right), & \mbox{if $n$ is odd} \\ [0.3cm] \frac{1}{4}\left(kn^{2}+6n(k-2)\right), & \mbox{if $n$ is even}. \end{array} \right. $$ Plugging these into (\ref{eq:lb}), we obtain that the right-hand side of (\ref{fire:rn}) is a lower bound for ${\rm rn}(F(n,k))$. In what follows we prove that this lower bound is tight by giving a linear order $u_{0}, u_1, u_{2}, \ldots, u_{p-1}$ of the vertices of $F(n,k)$ satisfying (\ref{eq:dij}).
Let $w^{i}_{1}, w^{i}_{2}, \ldots, w^{i}_{k}$ denote the vertices of the $i^{th}$ copy of the $(k-1)$-star in $F(n,k)$, where $w^{i}_{1}$ is the apex vertex (centre) and $w^{i}_{2}, \ldots, w^{i}_{k}$ are the leaves. Without loss of generality we assume that $w^{1}_{k}, w^{2}_{k}, \ldots, w^{n}_{k}$ are identified to the vertices in the path of length $n-1$ in the definition of $F(n,k)$.
\textsf{Case 1}: $n$ is odd.~~In this case, $F(n,k)$ has only one weight centre, namely $w = w_{k}^{(n+1)/2}$. Set $u_{0} = w$. For $1 \leq t \leq p-n$, let \begin{equation*} u_{t} := w^{i}_{j}, \mbox{ where } t = \left\{ \begin{array}{ll} jn, & \mbox{if } i = (n+1)/2\\ [0.3cm] (j-1)n + 2i-1, & \mbox{if } i < (n+1)/2 \\ [0.3cm] (j-1)n + 2\left(i - \frac{n+1}{2}\right), & \mbox{if } i > (n+1)/2. \end{array} \right. \end{equation*} For $p-n+1 \leq t \leq p-1$, let \begin{equation*} u_{t} := w^{i}_{j}, \mbox{ where } t = \left\{ \begin{array}{ll} (j-1)n - 2\left(i - \frac{n-1}{2}\right)+1, & \mbox{if } i < (n+1)/2 \\ [0.3cm] (j-1)n + 2(n-i+1), & \mbox{if } i > (n+1)/2. \end{array} \right. \end{equation*} Note that $u_{p-1} = w_{k}^{(n+3)/2}$ is adjacent to $w$ and for $1 \leq i \leq p-2$, $u_{i}$ and $u_{i+1}$ are in different branches so that $\phi(u_{i},u_{i+1})$ = 0.
\textsf{Claim 1:} The linear order $u_{0}, u_{1}, \ldots, u_{p-1}$ above satisfies (\ref{eq:dij}).
In fact, for any two vertices $u_{i}, u_{j}$ with $0 \leq i < j \leq p-1$, the right-hand side of (\ref{eq:dij}) is $S_{i,j} := \sum_{t=i}^{j-1} \left(L(u_{t})+L(u_{t+1})-(d+1)\right)+(d+1)$, where $d=n+3 \ge 6$ is the diameter of $F(n,k)$. If $j$ = $i+1$ or $i = 0$, then it is straightforward to verify that (\ref{eq:dij}) is satisfied. Note that, if $j \geq i+3$, then $L(u_{t})+L(u_{t+1}) \leq (d+6)/2$ for all $i \le t \le j-1$ and $L(u_{t})+L(u_{t+1}) \leq (d+4)/2$ for at least one such $t$; hence $S_{i,j} \le (j-i-1) ((d+6)/2 - (d+1)) + (d+4)/2 \le 2 ((d+6)/2 - (d+1)) + (d+4)/2 = (-d+12)/2 \le d(u_{i},u_{j})$ as $d \ge 6$. Thus (\ref{eq:dij}) is satisfied when $j \geq i+3$. It remains to consider the case $j = i+2 \ge 3$. In this case we have $S_{i,j} = L(u_{i})+2L(u_{i+1})+L(u_{i+2})-(d+1)$ and one can verify the following: If $1 \leq i \leq n-2$, then $S_{i,j} = 0 < d(u_{i},u_{j}) = 3$; if $n-1 \le i \leq n$, then $S_{i,j} \leq 3 < d(u_{i},u_{j})$; if $n+1 \le i \leq p-n-2$, then $S_{i,j} \leq 4 < d(u_{i},u_{j})$; if $i = p-n-1$, then $S_{i,j} = (-d+8)/2 \le 1 < d(u_{i},u_{j})$; if $i = p-n$, then $S_{i,j} = (-d+2)/2 < 0 < d(u_{i},u_{j})$; if $p-n+1 \le i \leq p-3$, then $S_{i,j} = -2 < d(u_{i},u_{j})$. This completes the proof of the claim.
\textsf{Case 2}: $n$ is even.~~In this case, $F(n,k)$ has two weight centres, namely $w = w_{k}^{n/2}$ and $w' = w_{k}^{n/2+1}$. We set $u_{0} = w'$ and $u_{p-1} = w$. For $1 \leq t \leq p-n+1$, let \begin{equation*} u_{t} := w^{i}_{j}, \mbox{ where } t = \left\{ \begin{array}{ll} (j-1)n + 2i - 1, & \mbox{if } i \leq n/2 \\ [0.3cm] (j-1)n + 2\left(i-\frac{n}{2}\right), & \mbox{if } i > n/2. \end{array} \right. \end{equation*} For $p-n+2 \leq t \leq p-2$, let \begin{equation*} u_{t} := w^{i}_{j}, \mbox{ where } t = \left\{ \begin{array}{ll} (j-1)n + 2i - 1, & \mbox{if } i < n/2 \\ [0.3cm] (j-1)n + 2\left(i-1-\frac{n}{2}\right), & \mbox{if } i > n/2 + 1. \end{array} \right. \end{equation*}
Note that $u_{p-1} = w_{k}^{n/2}$ is adjacent to $w'$ and for $1 \leq i \leq p-2$, $u_{i}$ and $u_{i+1}$ are in opposite branches so that $\phi(u_{i},u_{i+1})$ = 0 and $\delta(u_{i},u_{i+1})$ = 1.
\textsf{Claim 2:} The linear order $u_{0}, u_{1}, \ldots, u_{p-1}$ above satisfies (\ref{eq:dij}).
In fact, for any two vertices $u_{i}, u_{j}$ with $0 \leq i < j \leq p-1$, the right-hand side of (\ref{eq:dij}) is $S_{i,j} := \sum_{t=i}^{j-1} \left(L(u_{t})+L(u_{t+1})-d\right)+(d+1)$, where $d=n+3 \ge 6$. If $j$ = $i+1$ or $i = 0$, then it is easy to verify that (\ref{eq:dij}) is satisfied. Note that, if $j \geq i+3$, then $L(u_{t})+L(u_{t+1}) \leq (d+3)/2$ for $i \le t \le j-1$ and hence $S_{i,j} \le (j-i) ((d+3)/2 - d) + (d+1) \le 3 ((d+3)/2 - d) + (d+1) = (11-d)/2 < d(u_{i},u_{j})$. It remains to consider the case $j = i+2 \ge 3$. In this case $S_{i,j} = L(u_{i})+2L(u_{i+1})+L(u_{i+2})-d+1$ and the following hold: If $1 \leq i \leq n-2$, then $S_{i,j} = -1 < 3 = d(u_{i},u_{j})$; if $n-1 \le i \leq n$, then $S_{i,j} \leq (3d-1)/2 -d+1 = (d+1)/2 \le d(u_{i},u_{j})$; if $n+1 \le i \leq p-n-2$, then $S_{i,j} = 3 < 5 \le d(u_{i},u_{j})$; if $i = p-n-1$, then $S_{i,j} = (d-1)/2 = d(u_{i},u_{j})$; if $i = p-n$, then $S_{i,j} = (d-7)/2 < (d-3)/2 = d(u_{i},u_{j})$; if $p-n+1 \le i \leq p-3$, then $S_{i,j} = -3 < d(u_{i},u_{j})$. This completes the proof of Claim 2.
In summary, in each case above we have defined a linear order $u_{0}, u_{1}, \ldots, u_{p-1}$ of the vertices of $F(n,k)$ which satisfies (\ref{eq:dij}). Therefore, by Theorem \ref{thm:ub}, ${\rm rn}(F(n,k))$ is given by (\ref{eq:ub}) which is exactly the right-hand side of (\ref{fire:rn}) in the case of firecracker trees. The labeling given by (\ref{eq:f0})-(\ref{eq:f}) (applied to the current situation) is an optimal radio labeling of $F(n,k)$.
$\Box$
\end{proof}
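The numerical ingredients of this proof (the order, the diameter and the total level of $F(n,k)$, and through (\ref{eq:lb}) the lower bound) can be recomputed by brute force. The Python sketch below is illustrative; it locates the weight centre(s) as the vertices minimising the total distance to all other vertices and measures levels from the nearest weight centre.

```python
from collections import deque

def firecracker_data(n, k):
    """Build F(n,k) and return (order p, diameter d, total level L(T)),
    with the weight centre(s) found by brute force as the vertices
    minimising the total distance to all other vertices."""
    adj = {}
    def add_edge(a, b):
        adj.setdefault(a, []).append(b)
        adj.setdefault(b, []).append(a)
    for i in range(1, n + 1):
        add_edge((i, 'k'), (i, 1))           # path vertex w^i_k -- apex w^i_1
        for j in range(2, k):                # leaves w^i_2..w^i_{k-1}
            add_edge((i, 1), (i, j))
        if i > 1:
            add_edge((i - 1, 'k'), (i, 'k')) # the path w^1_k w^2_k ... w^n_k
    def bfs(s):
        dist = {s: 0}
        q = deque([s])
        while q:
            u = q.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    q.append(v)
        return dist
    dist = {u: bfs(u) for u in adj}
    total = {u: sum(dist[u].values()) for u in adj}
    m = min(total.values())
    centres = [u for u in adj if total[u] == m]
    level = {u: min(dist[w][u] for w in centres) for u in adj}
    diam = max(max(dv.values()) for dv in dist.values())
    return len(adj), diam, sum(level.values())
```

For example, $F(5,4)$ has $p = 20$, $d = 8$ and $L(F(5,4)) = 49$, so the bound (\ref{eq:lb}) gives $19 \cdot 9 - 98 + 1 = 74$, in agreement with (\ref{fire:rn}).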
Fig. \ref{fig2A} and \ref{fig2B} give illustrations of naming, ordering and labeling of the vertices of $F(5,4)$ and $F(6,4)$, respectively, by using the procedure in the proof of Theorem \ref{thm:fire}.
\begin{figure}
\caption{\small An optimal radio labeling of $F(5,4)$ together with the corresponding ordering of vertices.}
\label{fig2A}
\end{figure}
\begin{figure}
\caption{\small An optimal radio labeling of $F(6,4)$ together with the corresponding ordering of vertices.}
\label{fig2B}
\end{figure}
\subsection{Caterpillars}
A tree is called a \emph{caterpillar} if the removal of all its degree-one vertices results in a path, called the \emph{spine}. Denote by $C(n, k)$ the caterpillar in which the spine has length $n-3$ and all vertices on the spine have degree $k$, where $n \ge 3$ and $k \ge 2$. Note that $\varepsilon(C(n,k)) = 1$ when $n$ is odd and $\varepsilon(C(n,k)) = 0$ when $n$ is even. Note also that $C(n, 2) = P_n$ is the path with $n$ vertices.
\begin{Theorem} \label{thm:cater} Let $n \geq 4$ and $k \ge 2$ be integers. Denote $\varepsilon = \varepsilon(C(n,k))$. Then \begin{equation} \label{rn:cater} {\rm rn}(C(n,k)) = \frac{1}{2}\left((n-2)^{2}+\varepsilon\right)(k-1)+n-1+\varepsilon. \end{equation} \end{Theorem}
In the special case when $k = 2$, Theorem \ref{thm:cater} gives the following known result.
\begin{Corollary} \label{coro:cater} (\cite{Liu}) Let $n \geq 4$ be an integer. Then \begin{equation}\label{rn:path} {\rm rn}(P_{n}) = \left\{
\begin{array}{ll}
2m^{2}+2, & \hbox{if }n=2m+1 \\[0.2cm]
2m(m-1)+1, & \hbox{if }n=2m.
\end{array} \right. \end{equation} \end{Corollary}
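The reduction of (\ref{rn:cater}) at $k = 2$ to (\ref{rn:path}) is a routine computation; it can also be confirmed numerically with the following illustrative Python sketch of the two closed forms (the function names are ad hoc).

```python
def rn_cater(n, k):
    """Closed form (rn:cater); eps = 1 if n is odd and 0 if n is even.
    The product ((n-2)^2 + eps)(k-1) is always even, so // is exact."""
    eps = n % 2
    return ((n - 2) ** 2 + eps) * (k - 1) // 2 + n - 1 + eps

def rn_path(n):
    """Known values (rn:path) for the path P_n."""
    m = n // 2
    return 2 * m * m + 2 if n % 2 == 1 else 2 * m * (m - 1) + 1
```

In particular, `rn_cater(n, 2)` agrees with `rn_path(n)` for every $n \ge 4$, and `rn_cater(10, 4)` returns $105$, the value of ${\rm rn}(C(10,4))$ discussed below.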
The radio number of $C(n, k)$ for even $n$ was considered in \cite{KP}. However, the formula in \cite[Theorem 2.3]{KP} seems incorrect -- it is bigger by one than the actual value of ${\rm rn}(C(n, k))$ shown in \eqref{rn:cater}. (We have taken into account that the radio number defined in \cite{KP} is bigger than the usual definition by one.) For example, ${\rm rn}(C(10, 4)) = 105$ as shown in Fig. \ref{Cater2}, while \cite[Theorem 2.3]{KP} gives $106$.
\begin{proof}{\bf of Theorem \ref{thm:cater}}~ $C(n,k)$ has order $p = n+(n-2)(k-1)$, diameter $d = n-1$ and total level $$ L(C(n,k)) = \left\{
\begin{array}{ll}
\frac{(n^{2}-5)(k-1)}{4}+1, & \hbox{if }n=2m+1 \\[0.2cm]
\frac{n(n-2)(k-1)}{4}, & \hbox{if }n=2m.
\end{array} \right. $$ Plugging these into (\ref{eq:lb}), we obtain \begin{equation} \label{eq:clb} {\rm rn}(C(n,k)) \geq \frac{1}{2}\left((n-2)^{2}+\varepsilon\right)(k-1)+n-1. \end{equation} Denote by $v_{2} \ldots v_{n-1}$ the spine of $C(n,k)$. Choose $v_1$ and $v_n$ to be distinct degree-one vertices of $C(n,k)$ adjacent to $v_2$ and $v_{n-1}$, respectively. For each $2 \leq i \leq n-1$, denote by $v_{i,j}$, $1 \leq j \leq k-2$ the neighbours of $v_{i}$ not on the path $v_{1} v_{2} \ldots v_{n-1} v_n$.
\textsf{Case 1:} $n = 2m+1$ is odd.
In this case, $C(n,k)$ has only one weight centre, namely $v_{m+1}$, and so $\varepsilon = 1$. We first prove \begin{equation} \label{eq:cub} {\rm rn}(C(n,k)) \le \frac{1}{2}\left((n-2)^{2}+1\right)(k-1)+n \end{equation} by using Theorem \ref{thm:ub1}. To this end we define a linear order $u_0, u_1, \ldots, u_{p-1}$ of the vertices of $C(n, k)$ as follows. Set $u_{0}$ = $v_{m}$, $u_{1}$ = $v_{2m}$, $u_{2}$ = $v_{1}$, $u_{3}$ = $v_{m+1}$, $u_{4}$ = $v_{2m+1}$ and $u_{p-1}$ = $v_{m+2}$. Relabel the remaining vertices on the spine by setting \begin{equation}\label{ord1} u_{t} = v_{i}, \mbox{ where } t = \left\{
\begin{array}{ll}
2(m-i)+3, & \hbox{if } 2 \le i \le m-1 \\[0.2cm]
2(2m-i+2), & \hbox{if } m+3 \le i < 2m.
\end{array} \right. \end{equation} We obtain $u_{5}, \ldots, u_{n-2}$ in this way. Set \begin{equation*} u_{t} = v_{i,j}, \mbox{ where } t = \left\{
\begin{array}{ll}
2m+2(k-2)(i-3)+2j-1, & \hbox{if }3 \le i \le m,\ 1 \le j \le k-2 \\[0.2cm]
2m+2(k-2)(i-m-2)+2(j-1), & \hbox{if }m+2 \le i \le 2m-1,\ 1 \le j \le k-2
\end{array} \right. \end{equation*} to obtain $u_{n-1}, \ldots, u_{p-3k+4}$. Finally, set \begin{equation*} u_{t} = v_{i,j}, \mbox{ where } t = \left\{
\begin{array}{ll}
2m+2(k-2)(m-2)+3j-1, & \hbox{if }i = 2,\ 1 \le j \le k-2 \\[0.2cm]
2m+2(k-2)(m-2)+3j-2, & \hbox{if }i = m+1,\ 1 \le j \le k-2 \\[0.2cm]
2m+2(k-2)(m-2)+3(j-1), & \hbox{if }i = 2m,\ 1 \le j \le k-2
\end{array} \right. \end{equation*} to obtain $u_{p-3k+5}, \ldots, u_{p-1}$.
\textsf{Claim 1:} The linear order $u_{0}, u_{1}, \ldots, u_{p-1}$ above satisfies condition (\ref{eq:dij}).
In fact, denoting the right-hand side of (\ref{eq:dij}) with respect to the order above by $S_{i,j}$, we have $S_{i,j}$ = $\sum_{t=i}^{j-1}(L(u_{t})+L(u_{t+1})-2m-1)+2m+1$. It is easy to verify that (\ref{eq:dij}) is satisfied if $j = i+1$, or $i = 0$, or $i = p-1$. If $j \geq i+3$, then for $1 \le i < j \le n-2$, $u_{t}$ with $i \le t \le j-2$ satisfies (b) and (c) in Theorem \ref{thm:cor1} and hence (\ref{eq:dij}) is satisfied; for $i \le n-2 < j$ or $n-2 \le i < j \le p-3k+4$, we have $L(u_{t})+L(u_{t+1}) \leq m+2$ for $i \leq t \leq j-1$ and hence $S_{i,j} \leq (j-i)(m+2-(2m+1))+(2m+1) \leq 3(-m+1)+(2m+1) = -m+4 \leq 3 \leq d(u_{i},u_{j})$; for $i \le p-3k+4 < j \le p-2$ or $p-3k+4 < i < j \le p-2$, we have $S_{i,j} \le 1 \le d(u_{i},u_{j})$. Assume $j$ = $i+2 \geq 3$ in the remaining proof, so that $S_{i,j}$ = $L(u_{i})+2L(u_{i+1})+L(u_{i+2})-(2m+1)$. If $1 \le i \le n-4$, then $u_{t}$ with $i \le t \le j-2$ satisfies (b) and (c) in Theorem \ref{thm:cor1} and hence (\ref{eq:dij}) is satisfied. If $n - 4 < i \le n-2$, or $n-2 < i \le p-3k+2$, or $p-3k+3 \le i \le p-3k+4$ and $k \ge 3$, then $S_{i,j} \le 2 \leq d(u_{i},u_{j})$. If $p-3k+4 < i \le p-4$, then $L(u_{i}) + 2L(u_{i+1}) + L(u_{i+2}) \leq 3m+1$ and hence $S_{i,j} \leq m \leq d(u_{i},u_{j})$. This proves Claim 1.
Since $L(u_{0})+L(u_{p-1}) = 2$, we obtain \eqref{eq:cub} immediately from Theorem \ref{thm:ub1} and Claim 1.
In view of \eqref{eq:clb} and \eqref{eq:cub}, it remains to prove ${\rm rn}(C(n,k)) \ne \frac{1}{2}\left((n-2)^{2}+1\right)(k-1)+n-1$. Suppose otherwise. Then by Theorem \ref{thm:ub} there exists a linear order $u_{0}, u_{1}, \ldots, u_{p-1}$ of the vertices of $C(n,k)$ satisfying (\ref{eq:dij}) such that $L(u_{0})+L(u_{p-1}) = 1$ and $u_{i}$ and $u_{i+1}$ are in different branches. Denote by $T, T'$ the branches of $C(n,k)$ containing $v_1, v_n$, respectively. (Each of the other $k-2$ branches contains only one vertex.) Denote $S = \{u: u \in V(T), L(u) = m\}$ and $S^{'} = \{u: u \in V(T'), L(u) = m\}$. Then $|S| = |S^{'}| = k-1$. Since $L(u_{0})+L(u_{p-1}) = 1$ and span$(f)-f$ is an optimal radio labeling whenever $f$ is an optimal radio labeling, without loss of generality we may assume that $u_{0}$ = $v_{m+1}$ and $u_{p-1} \in N(u_{0})$.
Since $u_{0} = v_{m+1}$ and $u_{i}$ and $u_{i+1}$ are in different branches, there exists a vertex $u_{t} \in S$ such that $d(u_{t-1}, u_{t}) \geq m+a$, $d(u_{t}, u_{t+1}) \geq m + b$, $d(u_{t-1}) \neq 1$ and $d(u_{t+1}) \neq 1$ for some $a, b \geq 1$ with $a \neq b$. Hence $S_{t-1, t+1} = L(u_{t-1})+2L(u_{t})+L(u_{t+1})-(2m+1) = (m+a)+(m+b)-(2m+1) = a+b-1 > |a-b| = d(u_{t-1},u_{t+1})$, contradicting the assumption that (\ref{eq:dij}) is satisfied for any $0 \le i < j \le p-1$. Therefore, ${\rm rn}(C(n,k)) = \frac{1}{2}\left((n-2)^{2}+1\right)(k-1)+n$.
\textsf{Case 2:} $n = 2m$ is even.
In this case, $C(n,k)$ has two adjacent weight centres, namely $v_{m}$ and $v_{m+1}$, and so $\varepsilon = 0$. It suffices to prove the existence of a linear order $u_{0}, u_{1}, \ldots, u_{p-1}$ of the vertices of $C(n,k)$ such that the conditions of Theorem \ref{thm:ub} are satisfied. Set $u_{0} = v_m$ and $u_{p-1} = v_{m+1}$. Set \begin{equation}\label{ord2} u_{t} = v_{i}, \mbox{ where } t = \left\{
\begin{array}{ll}
2(m-i), & \hbox{if } 1 \le i \le m-1 \\[0.2cm]
2(2m-i)+1, & \hbox{if } m+2 \le i \le 2m.
\end{array} \right. \end{equation} We obtain $u_1, \ldots, u_{n-2}$ in this way. Let \begin{equation*} u_{t} = v_{i,j}, \mbox{ where } t = \left\{
\begin{array}{ll}
2m+2(k-2)(i-2)+2(j-1), & \hbox{if }i \leq m,\ 1 \le j \le k-2 \\[0.2cm]
2m+2(k-2)(i-m-1)+2j-3, & \hbox{if }i>m,\ 1 \le j \le k-2.
\end{array} \right. \end{equation*} to obtain $u_{n-1}, \ldots, u_{p-2}$. Note that $\{u_{0}, u_{p-1}\} = \{v_m, v_{m+1}\} = W(C(n,k))$ and $u_{i}$ and $u_{i+1}$ are in opposite branches for $1 \leq i \leq p-1$. It remains to prove the following:
\textsf{Claim 2:} The linear order $u_{0}, u_{1}, \ldots, u_{p-1}$ above satisfies condition (\ref{eq:dij}).
In fact, denoting the right-hand side of (\ref{eq:dij}) with respect to the order above by $S_{i,j}$, we have $S_{i,j} = \sum_{t=i}^{j-1}(L(u_{t})+L(u_{t+1})-2m+1)+2m$. It is easy to verify that (\ref{eq:dij}) is satisfied when $j$ = $i+1$, or $i$ = 0, or $i$ = $p-1$. If $j \geq i+3$, then for $n = 4$ we have $L(u_{t})+L(u_{t+1}) \leq m$ for $i \leq t \leq j-1$ and hence $S_{i,j} \leq (j-i)(m-(2m-1))+2m \leq 3(-m+1)+2m = -m+3 \leq 1 \leq d(u_{i},u_{j})$; and for $n \geq 6$ we have $L(u_{t})+L(u_{t+1}) \leq m+1$ for $i \leq t \leq j-1$ and hence $S_{i,j} \leq (j-i)(m+1-(2m-1))+2m \leq 3(-m+2)+2m = -m+6 \leq 3 \leq d(u_{i},u_{j})$. If $j$ = $i+2 \geq 3$, then $S_{i,j} = L(u_{i})+2L(u_{i+1})+L(u_{i+2})-2m+2$. If both $u_{i}$ and $u_{j}$ are on the path $v_{1} v_{2} \ldots v_{n}$, then they satisfy (b) and (d) in Theorem \ref{thm:cor1} and hence satisfy (\ref{eq:dij}). If $u_{i}$ is on the path $v_{1} v_{2} \ldots v_{n}$ but $u_{j}$ is not, then $S_{i,j} = 2 \leq d(u_{i},u_{j})$ since $L(u_{t})+L(u_{t+1}) \leq m$ for every $t$. If neither $u_{i}$ nor $u_{j}$ is on the path $v_{1} v_{2} \ldots v_{n}$, then either $L(u_{i})+2L(u_{i+1})+L(u_{i+2}) \leq 2m$ or $L(u_{i})+2L(u_{i+1})+L(u_{i+2}) \leq 2m+1$, and hence $S_{i,j} = 2 \leq d(u_{i},u_{j})$ or $S_{i,j} = 3 \leq d(u_{i},u_{j})$, respectively. This completes the proof of Claim 2.
This completes the proof of (\ref{rn:cater}). Moreover, by Theorems \ref{thm:ub} and \ref{thm:ub1}, the labeling given by (\ref{eq:f0})-(\ref{eq:f}) with respect to the linear order above is an optimal radio labeling of $C(n,k)$.
$\Box$
\end{proof}
The reader is referred to Fig. \ref{Cater1} and \ref{Cater2} for an illustration of naming, ordering and labeling of the vertices of $C(9,4)$ and $C(10,4)$ by using the procedure in the proof of Theorem \ref{thm:cater}. \begin{figure}
\caption{\small An optimal radio labeling of $C(9,4)$ together with the corresponding ordering of vertices.}
\label{Cater1}
\end{figure}
\begin{figure}
\caption{\small An optimal radio labeling of $C(10,4)$ together with the corresponding ordering of vertices.}
\label{Cater2}
\end{figure}
\section{Concluding remarks} \label{sec:rem}
It is well known that the centre of any tree $T$ consists of one vertex $r$ or two adjacent vertices $r, r'$, depending on whether ${\rm diam}(T)$ is even or odd. We may think of $T$ as a rooted tree with root $r$ or $\{r, r'\}$, respectively. Hal\'{a}sz and Tuza \cite{Tuza} defined a \emph{level-wise regular tree} to be a tree $T$ in which all vertices at distance $i$ from root $r$ or $\{r, r'\}$ have the same degree, say $m_{i}$, for $0 \le i \le h$, where $h$ is the \emph{height} of $T$, namely the largest distance from a vertex to the root. If $m_{0} = m_{1} = \cdots = m_{h-1} = m$ and $m_{h} = 1$, then $T$ is called an \emph{internally $m$-regular complete tree} \cite{Tuza}. It can be verified that the centre and the weight centre of such a tree are identical.
Hal\'{a}sz and Tuza \cite[Theorem 1]{Tuza} proved that the radio number of the internally $(m+1)$-regular complete tree $T$ with diameter $d$ and height $h = \lfloor d/2 \rfloor$, where $d, m \geq 3$, is given by \begin{equation}\label{rn:levelT} {\rm rn}(T) = \left\{
\begin{array}{ll}
m^{h}+\frac{4m^{h+1}-2hm^{2}-4m+2h}{(m-1)^{2}}, & \hbox{if }d=2h \\[0.2cm]
2m^{h}+\frac{6m^{h+1}-2m^{h}-(2h+1)m^{2}-4m+2h+1}{(m-1)^{2}}, & \hbox{if }d=2h+1.
\end{array} \right. \end{equation} This result can be proved by using Theorem \ref{thm:ub}, as shown independently in an earlier version of the present paper. (An extended abstract of that version can be found in \cite{BVZ}.) A sketch of our proof is as follows.
The order of $T$ is $$ p = \left\{ \begin{array}{ll} 1+\frac{m+1}{m-1} (m^{h}-1), & \mbox{if } d = 2h \\ [0.2cm] 2 \left(1 + \frac{m}{m-1} (m^{h}-1)\right), & \mbox{if } d = 2h+1. \end{array} \right. $$ Using $1 + 2x + 3x^{2} + \cdots + nx^{n-1} = \frac{nx^{n}}{x-1} - \frac{x^{n}-1}{(x-1)^{2}}$, one can see that the total level of $T$ is $$ L(T) =\left\{ \begin{array}{ll} (m+1)\left(\frac{h m^{h}}{(m-1)} - \frac{m^{h}-1}{(m-1)^{2}}\right), & \mbox{if } d = 2h \\ [0.2cm] 2m\left(\frac{h m^{h}}{(m-1)} - \frac{m^{h}-1}{(m-1)^{2}}\right), & \mbox{if } d = 2h+1. \end{array} \right. $$ Using these, one can verify that for $T$ the right-hand sides of (\ref{eq:ub}) and (\ref{rn:levelT}) are equal. Thus to prove (\ref{rn:levelT}) it suffices to prove the existence of a linear order of the vertices of $T$ satisfying the conditions of Theorem \ref{thm:ub}.
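The finite-sum identity invoked here is easy to confirm by direct computation; the following is a small verification sketch in exact rational arithmetic (illustrative only, not part of the proof).

```python
from fractions import Fraction

def lhs(n, x):
    # 1 + 2x + 3x^2 + ... + n x^(n-1)
    return sum(j * x**(j - 1) for j in range(1, n + 1))

def rhs(n, x):
    # n x^n / (x - 1) - (x^n - 1) / (x - 1)^2
    return n * x**n / (x - 1) - (x**n - 1) / (x - 1)**2

# The two sides agree for all n >= 1 and x != 1:
for n in range(1, 9):
    for x in (Fraction(2), Fraction(3), Fraction(-2), Fraction(5, 2)):
        assert lhs(n, x) == rhs(n, x)
```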
\textsf{Case 1:} $T$ has only one central vertex, say $w$.
In this case, $w$ is the unique weight centre of $T$. Denote the children of $w$ by $w^{1}, w^{2}, \ldots, w^{m+1}$. Denote the $m$ children of each $w^{t}$ by $w_{0}^{t}, w_{1}^{t}, \ldots, w_{m-1}^{t}$, $1 \le t \le m+1$. Denote the $m$ children of each $w_{i}^{t}$ by $w_{i0}^{t}, w_{i1}^{t}, \ldots, w_{i(m-1)}^{t}$, $0 \leq i \leq m-1$, $1 \leq t \leq m+1$. Inductively, denote the $m$ children of $w_{i_{1},i_{2},\ldots,i_{l}}^{t}$ ($0 \leq i_{1}, i_{2}, \ldots, i_{l} \leq m-1$, $1 \leq t \leq m+1$) by $w_{i_{1}, i_{2},\ldots,i_{l}, i_{l+1}}^{t}$ where $0 \le i_{l+1} \le m-1$. Continue this until all vertices of $T$ are indexed this way. Rename the vertices of $T$ as follows: For $1 \leq t \leq m+1$, set $$ v_{j}^{t} := w_{i_{1},i_{2}, \ldots, i_{l}}^{t},\;\, \mbox{where}\;\, j = 1 + i_{1} + i_{2}m + \cdots + i_{l}m^{l-1} + \sum_{l+1 \leq t \leq \lfloor d/2 \rfloor} m^{t}. $$
Set $u_{0} := w$. For $1 \leq j \leq p-m-2$, let $$ u_{j} := \left\{ \begin{array}{ll} v_{s}^{t},\;\,\mbox{where $s = \lceil j/(m+1) \rceil$}, & \mbox{if $j \equiv t$ (mod $(m+1)$) for some $t$ with $1 \le t \le m$} \\ [0.3cm] v_{s}^{m+1},\;\,\mbox{where $s = \lceil j/(m+1) \rceil$}, & \mbox{if $j \equiv 0$ (mod $(m+1)$)}. \end{array} \right. $$ Let $$ u_{j} := w^{j-p+m+2},\;\, p-m-1 \leq j \leq p-1. $$ Note that $u_{p-1} = w^{m+1}$ is adjacent to $w$. Note also that $u_{i}$ and $u_{i+1}$ are in different branches so that $\phi(u_{i},u_{i+1}) = 0$, for $1 \leq i \leq p-2$.
\textsf{Case 2:} $T$ has two adjacent central vertices, say $w$ and $w'$.
In this case, $w$ and $w'$ are also weight centres of $T$. Denote the neighbours of $w$ other than $w'$ by $w_{0}, w_{1}, \ldots, w_{m-1}$ and the neighbours of $w'$ other than $w$ by $w'_{0}, w'_{1}, \ldots, w'_{m-1}$. For $0 \le i \le m-1$, denote the $m$ children of each $w_{i}$ (respectively, $w'_{i}$) by $w_{i0}, w_{i1}, \ldots, w_{i(m-1)}$ (respectively, $w'_{i0}, w'_{i1}, \ldots, w'_{i(m-1)}$). Inductively, for $0 \leq i_{1},i_{2},\ldots,i_{l} \leq m-1$, denote the $m$ children of $w_{i_{1},i_{2}, \ldots, i_{l}}$ (respectively, $w'_{i_{1},i_{2},\ldots,i_{l}}$) by $w_{i_{1},i_{2},\ldots,i_{l},i_{l+1}}$ (respectively, $w'_{i_{1},i_{2},\ldots,i_{l},i_{l+1}}$), where $0 \leq i_{l+1} \leq m-1$. Rename $$ v_{j} := w_{i_{1},i_{2},\ldots,i_{l}},\;\, v'_{j} := w'_{i_{1},i_{2},\ldots,i_{l}},\;\, \mbox{where}\;\, j = 1 + i_{1} + i_{2}m + \cdots + i_{l}m^{l-1} + \sum_{l+1 \leq t \leq \lfloor d/2 \rfloor} m^{t}. $$
Let $u_0 := w$ and $u_{p-1} := w'$. For $1 \leq j \leq p-2$, let \begin{eqnarray*} u_j := \left\{ \begin{array}{ll} v_{s},\;\,\mbox{where $s = \lceil j/2 \rceil$}, & \mbox{if $j \equiv 0$ (mod $2$)}\\ [0.3cm] v'_{s},\;\,\mbox{where $s = \lceil j/2 \rceil$}, & \mbox{if $j \equiv 1$ (mod $2$)}. \end{array} \right. \end{eqnarray*} Then $u_{i}$ and $u_{i+1}$ are in opposite branches for $1 \leq i \leq p-2$, and $u_{i+2j}$, $j = 0,1, \ldots, (m-1)$ are in different branches for $1 \leq i \leq p-2m+1$, so that $\phi(u_{i},u_{i+1}) = 0$ and $\delta(u_{i},u_{i+1}) = 1$.
In each case above, it can be verified that the linear order $u_{0}, u_{1}, \ldots, u_{p-1}$ satisfies the conditions of Theorem \ref{thm:ub} (details are omitted). So (\ref{rn:levelT}) follows.
The reader is referred to Fig. \ref{Fig3} and \ref{Fig4New} for an illustration of naming, ordering and labeling of the vertices of two internally $3$-regular complete trees of height 3 by using the above procedure.
\begin{figure}
\caption{\small An optimal radio labeling of the internally 3-regular complete tree with height 3 and one central vertex.}
\label{Fig3}
\end{figure}
\begin{figure}
\caption{\small An optimal radio labeling of the internally 3-regular complete tree with height 3 and two central vertices.}
\label{Fig4New}
\end{figure}
A \emph{complete $m$-ary tree of height $k$}, denoted by $T_{k,m}$, is a rooted tree such that each vertex other than leaves has $m$ children and all leaves are at distance $k$ from the root. Li \emph{et al.} \cite[Theorem 2]{Li} proved that, for $m \geq 3$ and $k \geq 2$, \begin{equation} \label{rn:tkm} {\rm rn}(T_{k,m}) = \frac{m^{k+2}+m^{k+1}-2km^{2}+(2k-3)m+1}{(m-1)^{2}}. \end{equation} This result can also be proved by using Theorem \ref{thm:ub}. The order, diameter and total level of $T_{k,m}$ are $p = (m^{k+1}-1)/(m-1)$, $d = 2k$ and $L(T_{k,m}) = (km^{k+2}-(k+1)m^{k+1}+m)/(m-1)^{2}$, respectively. Plugging these into (\ref{eq:ub}), one can verify that the right-hand sides of (\ref{eq:ub}) and \eqref{rn:tkm} are identical. It can be verified that the linear order $u_{0}, u_{1}, \ldots, u_{p-1}$ of the vertices of $T_{k,m}$ given in \cite[Section 4.1]{Li} satisfies the conditions of Theorem \ref{thm:ub}. Thus we obtain \eqref{rn:tkm} by Theorem \ref{thm:ub}.
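The order and total-level expressions for $T_{k,m}$ quoted above can be cross-checked numerically; the sketch below (our own illustration, assuming the level of a vertex is its distance to the root, which is the weight centre of $T_{k,m}$) sums over the $m^i$ vertices at depth $i$.

```python
def check_tkm(k, m):
    # Complete m-ary tree of height k has m**i vertices at depth i.
    p = sum(m**i for i in range(k + 1))
    assert p == (m**(k + 1) - 1) // (m - 1)
    # Total level: sum over all vertices of the distance to the root.
    L = sum(i * m**i for i in range(k + 1))
    assert L == (k * m**(k + 2) - (k + 1) * m**(k + 1) + m) // (m - 1)**2
    return p, L

# e.g. the complete 3-ary tree of height 2 has 13 vertices and total level 21
```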
\end{document} | arXiv |
nLab > Latest Changes: Jucys-Murphy element
CommentTimeMay 18th 2021
Starting something, to record Jucys' computation of the eigenvalues of, essentially, the Cayley distance kernel. (Thanks to David Speyer for pointing this out.)
But I haven't obtained copies of any of the original references yet. This page currently goes entirely by what it says on Wikipedia at Jucys-Murphy element.
CommentAuthorDavidRoberts
(edited May 18th 2021)
Author: DavidRoberts
That 1971 article seems like it might be hard to get!
For what it's worth, the Math Review of Jucys' 1966 article looks like it could also be it (lazy cut and paste without TeXing the maths):
The author gives new explicit expressions for the matrix units in the group algebra of the symmetric group $S_n$. Let $R_n$ be the group algebra of $S_n$ over the field of the rational functions of $n$ variables $x_1, \ldots, x_n$ with real coefficients, and define the element $p_{sr}$ of $R_n$ by $p_{sr} = \big(\varepsilon + (x_s - x_r)^{-1}(sr)\big)$, where $\varepsilon$ denotes the unit element and $(sr)$ the transposition of $s$ and $r$ ($r, s = 1, \ldots, n$; $r \neq s$). In order to obtain the primitive idempotent elements, a product of $\frac{1}{2}n(n-1)$ elements $p_{sr}$ is formed. The primitive idempotent elements are the values of that product for certain integral values of the variables which depend on the standard tableaux. The other matrix units are obtained in a similar way.
Here's the review of the 1971 article, for completeness:
The primitive idempotents of Young's "orthogonal" representation are factorized in a way involving only mutually commuting factors of the type $(r_{ik}\varepsilon + D_k)$ ($i \le k = 1, 2, \ldots, n$), $\varepsilon$ denoting the unity of the group and $D_k$ the sum of transpositions $(kp)$ with $p < k$. The values of the real numbers $r_{ik}$ depend on the standard Young tableaux defining the primitive idempotent (projection operator).
And this 1974 article of Jucys cites both the earlier works, so perhaps it may be extracted from the context of the citations which of the two is needed for the theorem.
Added I suspect you are correct, Urs, based on my perusal of (Jucys 1974). He talks about the content of a Young diagram being the eigenvalues (well, "proper values", which is a real flashback!) and cites his 1971 paper.
Thanks, that's better than nothing.
It sounds like it is indeed the 1971 article.
But this theorem surely must have been recorded somewhere beyond this original article and the Wikipedia page? But I can't find it anywhere.
Could you send me the 1974 article? I was about to download it, but my phone battery died the moment I was authenticating my library access.
OK, will do.
Murphys' article https://doi.org/10.1016/0021-8693(92)90045-N might also be worth looking at (no paywall here). It has some background including work related to what Jucys did. And it cites (indirectly) a paper by one Thrall, which I think is this https://doi.org/10.1215/S0012-7094-41-00852-9 (the citation is via a book on representation theory of the summetric groups by JE Rutherford, the intro of which mentions a 1941 paper by Thrall, and this is the one I found).
Thanks. So I have expanded the citations a little, also the entry text.
Let me highlight that – while those eigenvalues are discussed widely in the literature – what I was looking for is that "factorization of the Cayley distance kernel"
$$\big( t + J_1 \big) \big( t + J_2 \big) \cdots \big( t + J_n \big) \;=\; \sum_{\sigma \in Sym(n)} e^{\ln(t) \cdot \# cycles(\sigma)} \, \sigma \;\;\in\; \mathbb{C}[Sym(n)][t]$$
which the Wikipedia article attributes, unspecifically, to Jucys ("Theorem (Jucys)").
But I guess now this is not much of a theorem: Just multiply out and observe that this generates all permutations in their minimal-number-of-transpositions-form using this kind of factorizations of its cycles.
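For small $n$ this little lemma can be confirmed by brute force. The following sketch (ours, for illustration) expands the product in $\mathbb{Z}[Sym(n)][t]$, representing an algebra element as a dictionary from permutations to polynomials in $t$; note that $J_1 = 0$, so the first factor contributes a plain $t$.

```python
from itertools import permutations

def compose(p, q):
    # (p o q)(i) = p(q(i)); permutations of range(n) as tuples
    return tuple(p[q[i]] for i in range(len(p)))

def num_cycles(p):
    seen, c = set(), 0
    for i in range(len(p)):
        if i not in seen:
            c += 1
            while i not in seen:
                seen.add(i)
                i = p[i]
    return c

def jm_product(n):
    # Expand (t + J_1)(t + J_2)...(t + J_n) in Z[Sym(n)][t],
    # where J_k is the sum of the transpositions (i k) with i < k.
    ident = tuple(range(n))
    elem = {ident: {0: 1}}           # {permutation: {t-exponent: coeff}}
    for k in range(n):               # factor (t + J_{k+1})
        new = {}
        for perm, poly in elem.items():
            tgt = new.setdefault(perm, {})   # choose t from this factor
            for e, c in poly.items():
                tgt[e + 1] = tgt.get(e + 1, 0) + c
            for i in range(k):               # choose the transposition (i k)
                tr = list(ident)
                tr[i], tr[k] = tr[k], tr[i]
                q = compose(perm, tuple(tr))
                tgt2 = new.setdefault(q, {})
                for e, c in poly.items():
                    tgt2[e] = tgt2.get(e, 0) + c
        elem = new
    return elem

# Every permutation appears exactly once, with coefficient t^{#cycles}:
res = jm_product(4)
assert set(res) == set(permutations(range(4)))
assert all(res[s] == {num_cycles(s): 1} for s in permutations(range(4)))
```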
Yeah, it's disappointing when that happens…
Perhaps there's something to harvest from this MO discussion.
David R.: Sounds like we are talking past each other:
I am doubting that the second theorem that the Wikipedia page attributes to Jucys is really something that Jucys claimed as a theorem.
It's a little lemma (I just typed out the proof, but now Instiki gives me a mysterious error message when trying to submit).
It's only with Jucys's actual theorem – the determination of those eigenvalues of the JM elements – that this lemma becomes interesting.
But it remains unclear whether Jucys made that connection. If he made it in his 1971 article (which we haven't seen), then he didn't find it worth to include in the review in his 1974 article (which we have seen).
I have spelled out the proof of that factorization statement here.
Oh, I see, yes, I think I misunderstood what you meant.
More treatment of this area here maybe with some useful references.
If you want to go deeper into this seminormal business, let's think about what this would do for our purpose:
Maybe one advantage of the seminormal basis $\big\{v_T\big\}_{T \in sYT_n}$ over the basis $\big\{ S^{(\lambda)}_{i,j} \big\}_{\lambda \in Part(n),\; 1 \leq i,j \leq dim(S^{(\lambda)})}$ that we used to consider is better compatibility with the inclusions $Sym(n) \subset Sym(n+1)$. Somehow. Not sure yet how to make use of this.
An efficient analytical reduction of detailed nonlinear neuron models
Oren Amsalem ORCID: orcid.org/0000-0002-8070-03781,
Guy Eyal ORCID: orcid.org/0000-0002-9537-55711,
Noa Rogozinski ORCID: orcid.org/0000-0001-7130-03751,
Michael Gevaert2,
Pramod Kumbhar ORCID: orcid.org/0000-0002-1756-801X2,
Felix Schürmann ORCID: orcid.org/0000-0001-5379-85602 &
Idan Segev ORCID: orcid.org/0000-0001-7279-96301,3
Nature Communications volume 11, Article number: 288 (2020)
Biophysical models
Cellular neuroscience
Ion channels in the nervous system
Detailed conductance-based nonlinear neuron models consisting of thousands of synapses are key for understanding the computational properties of single neurons and large neuronal networks, and for interpreting experimental results. Simulations of these models are computationally expensive, considerably curtailing their utility. Neuron_Reduce is a new analytical approach to reduce the morphological complexity and computational time of nonlinear neuron models. Synapses and active membrane channels are mapped to the reduced model preserving their transfer impedance to the soma; synapses with identical transfer impedance are merged into one NEURON process still retaining their individual activation times. Neuron_Reduce accelerates the simulations by 40–250 folds for a variety of cell types and realistic number (10,000–100,000) of synapses while closely replicating voltage dynamics and specific dendritic computations. The reduced neuron-models will enable realistic simulations of neural networks at unprecedented scale, including networks emerging from micro-connectomics efforts and biologically-inspired "deep networks". Neuron_Reduce is publicly available and is straightforward to implement.
Compartmental models (CMs) were first employed by Wilfrid Rall1 to study the integrative properties of neurons. They enabled him to explore the impact of spatio-temporal activation of conductance-based dendritic synapses on the neuron's output and the effect of the dendritic location of a synapse on the time course of the somatic excitatory postsynaptic potential2. By simulating electrically distributed neuron models, Rall demonstrated how the cable properties of dendrites explain the variety of somatic excitatory postsynaptic potential (EPSP) shapes that were recorded at the soma of α-motoneurons, thus negating the dominant explanation at that time that the differences in shapes of the somatic EPSPs in these cells result from differences in the kinetics of the respective synapses. This was an impressive demonstration that faithful models of the neuron (as a distributed rather than a "point" electrical unit) are essential for the correct interpretation of experimental results. Since Rall's 1964 and 1967 studies using CMs, EPSP "shape indices" measured at the soma are routinely used for estimating the electrotonic distance of dendritic synapses from the soma.
Over the years, detailed CMs of neurons have provided key insights into hundreds of experimental findings, both at the single-cell and the network levels. A notable example at the single-cell level is the explanation as to why the somatic Na+ action potential propagates backward in the soma-to-dendrites direction and (typically) not vice versa3. CMs have also pinpointed the conditions for the generation of local dendritic Ca2+ spikes4,5,6 and provided an explanation for the spatial restriction of the active spread of dendritic spikes from distal dendrites to the soma7 and see also refs. 8,9,10,11,12,13,14. Today, detailed CMs are even being used for simulating signal processing in human pyramidal neurons, including their large numbers of dendritic spines/synapses15.
At the network level, detailed CMs are utilized for such noteworthy projects as large-scale simulations of densely in silico reconstructed cortical circuits16,17 and the overarching goal of the Allen Institute to simulate large parts of the visual system of the mouse18,19. Because detailed compartmental modeling is increasingly becoming an essential tool for the understanding of diverse neuronal phenomena, major efforts have been invested in developing user-friendly computer software that implements detailed CMs, the best known of which are NEURON20, GENESIS21, NeuroConstruct22, PyNN23, and, recently, BioNet24, NTS25, NetPyNE26, and Geppetto27.
Modern personal computers can simulate tens of seconds of electrical activity of single neurons comprising thousands of nonlinear compartments and synapses. However, they cope poorly with cases where many model configurations need to be evaluated, such as in large-scale parameter fitting for single-neuron models5,28, or when the dendritic tree is morphologically and electrically highly intricate and consists of tens of thousands of dendritic synapses, as with the human cortical pyramidal neurons15. When the aim is to simulate a neuronal network consisting of hundreds of thousands of such neurons, only very powerful computers can cope. For example, the simulation of a cortical network consisting of 200,000 detailed neuron models on the BlueGene/Q supercomputer takes several hours to simulate 30 s of biological time17.
To overcome this obstacle, two approaches have been pursued. The first involves developing alternative, cheaper, and more efficient computing architectures (e.g., neuromorphic-based computers29,30). These have not yet reached the stage where they can simulate large-scale network models with neurons consisting of branched nonlinear dendrites having a realistic number of synapses. The other approach is to simplify neuron models while preserving their input/output relationship as faithfully as possible. Rall31 was the first to suggest a reduction scheme in his "equivalent cylinder" model, which showed that, for certain idealized passive dendritic trees, the whole tree could be collapsed into a single cylinder that was analytically identical to the detailed tree. The "equivalent cylinder" preserves the total dendritic membrane area, the electrotonic length of the dendrites, and, most importantly, the postsynaptic potential (amplitude and time course) at the soma for a dendritic synapse when mapped to its respective electrotonic location on the "equivalent cylinder"32,33. However, this method is not applicable for dendritic trees with large variability in their cable lengths (e.g., pyramidal neurons with a long apical tree and short basal trees), conductance-based synapses, or for dendrites with nonlinear membrane properties.
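For orientation, the cable-theory quantities underlying Rall's mapping — the space constant λ, the electrotonic length X = l/λ that the mapping preserves, and the d^(3/2) branching rule — can be sketched numerically. This is an illustrative, hedged example: the parameter values and function names are our own choices, not taken from the paper.

```python
import math

def space_constant_cm(Rm_ohm_cm2, Ri_ohm_cm, diam_cm):
    # Passive cable space constant: lambda = sqrt(Rm * d / (4 * Ri))
    return math.sqrt(Rm_ohm_cm2 * diam_cm / (4.0 * Ri_ohm_cm))

def electrotonic_length(length_cm, Rm, Ri, diam_cm):
    # X = l / lambda; a synapse is mapped onto the equivalent cylinder
    # at the location with the same electrotonic distance from the soma
    return length_cm / space_constant_cm(Rm, Ri, diam_cm)

def satisfies_rall_branching(d_parent, d_children, tol=1e-9):
    # Rall's d^(3/2) rule: parent**1.5 equals the sum of children**1.5
    return abs(d_parent**1.5 - sum(d**1.5 for d in d_children)) < tol

# Example values: Rm = 20000 ohm*cm^2, Ri = 150 ohm*cm, diameter = 1 um
lam = space_constant_cm(20000.0, 150.0, 1e-4)   # roughly 0.058 cm
```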
Over the years, several different reduction schemes have been proposed; for example, a recent work mapped all the synapses to a single compartment, taking the filtering effect of the dendrites into account34. Other methods reduce the detailed morphology to a simplified geometric model while preserving the total membrane area35,36,37 or the axial resistivity38; see also refs. 12,39,40. However, these methods have a variety of drawbacks; in particular, they are either "hand fitted" and thus lack a clear analytical underpinning or are complicated to implement, and in some cases, their computational advantage for realistic numbers (thousands) of synapses is not quantified. Most of these methods do not support dendrites with active conductances35,38,39,41,42 and they have not been tested on a broad range of neuron types. Importantly, none of the previous methods provided an easy-to-use open access implementation. Thus, today there is no simple, publicly available reduction method for neuron models that can be used by the extensive neuroscience and machine-learning communities.
To respond to this need, the present study provides an analytic method for reducing the complexity of detailed neuron models while faithfully preserving the essential input/output properties of these models. Neuron_Reduce is based on key theoretical insights from Rall's cable theory, and its implementation for any neuron type is straightforward without requiring hand-tuning. Depending on the neuron modeled and the number of synapses, Neuron_Reduce accelerates the simulation run-time by a factor of up to 250 while preserving the identity of individual synapses and their respective dendrites. It also preserves specific membrane properties and dendritic nonlinearities, hence maintaining specific dendritic computations. Neuron_Reduce is easy to use, fully documented, and publicly available on GitHub (https://github.com/orena1/neuron_reduce).
Mapping of a detailed neuron model to a multi-cylinder model
The thrust of our analytical reduction method (Neuron_Reduce) is described in Fig. 1a–c. This method is based on representing each of the original stem dendrites by a single cylindrical cable, which has the same specific membrane resistivity (Rm, in Ωcm2), capacitance (Cm, in F/cm2), and axial resistivity (Ra, in Ωcm) as in the detailed tree (Fig. 1a). Also, each cylindrical cable satisfies two constraints: (i) the magnitude of the transfer impedance, \(| {Z_{0,L}\left( \omega \right)} | = | {V_0\left( \omega \right)/I_L\left( \omega \right)} |\), from its distal sealed end (X = L) to its origin at the soma end (X = 0) is identical to the magnitude of the transfer impedance from the electrotonically most distal dendritic tip to the soma in the respective original dendrite; (ii) at its proximal end (X = 0), the magnitude of the input impedance, \(| {Z_{0,0}\left( \omega \right)} | = | {V_0\left( \omega \right)/I_0\left( \omega \right)} |\), is identical to that of the respective stem dendrite (when decoupled from the soma). As shown in Eqs. (1)–(11) (Methods), these two constraints, while preserving the specific membrane and axial properties, guarantee a unique cylindrical cable (with a specific diameter and length) for each of the original dendrites.
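At steady state (ω = 0), the two constraints can be inverted in closed form for a sealed-end cylinder. The following Python sketch is our own illustration of that inversion, not the published Neuron_Reduce code (the function and variable names are hypothetical): the ratio of input to transfer resistance fixes the electrotonic length, which in turn yields a unique diameter and physical length.

```python
import math

def unique_cylinder(Rm, Ra, R_input, R_transfer):
    """Solve for the (diameter, length) of a sealed-end cylinder that matches
    a stem dendrite's input resistance at its somatic end and its tip-to-soma
    transfer resistance (steady state, omega = 0).

    Rm [ohm*cm^2], Ra [ohm*cm]; resistances in ohm; returns (d, L) in cm.
    """
    # For a sealed-end cable of electrotonic length l = L/lambda (Rall):
    #   R_input    = R_inf * coth(l)
    #   R_transfer = R_inf / sinh(l)
    # Hence R_input / R_transfer = cosh(l), which fixes l uniquely.
    l = math.acosh(R_input / R_transfer)
    R_inf = R_transfer * math.sinh(l)  # input resistance of the semi-infinite cable
    # R_inf = (2/pi) * sqrt(Rm*Ra) * d**(-3/2)  ->  solve for the diameter d
    d = ((2.0 / math.pi) * math.sqrt(Rm * Ra) / R_inf) ** (2.0 / 3.0)
    lam = math.sqrt(Rm * d / (4.0 * Ra))  # space constant lambda [cm]
    return d, l * lam                     # physical length L = l * lambda
```

Because cosh is monotonic for l > 0, the solution exists and is unique whenever R_input > R_transfer, mirroring the uniqueness claim of Eqs. (1)–(11).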
Fig. 1: An analytic method for reducing neuron model complexity (Neuron_Reduce).
a Detailed passive model of 3D reconstructed L5 thick-tufted pyramidal cell from rat neocortex. Its nine stem dendrites (one apical and 8 basal) are depicted in different colors. b Each original stem dendrite is reduced to a single cylinder that retains the specific passive cable properties (Rm, Cm, and Ra) of the original tree. The diameter and length of the respective cylinders are computed analytically using Eqs. (1)–(11), such that each cylinder preserves both the transfer resistance from the most electrotonically distal dendritic tip to the soma as well as the input resistance at the soma end of the corresponding stem dendrite. This generates a unique cylindrical cable for each of the original stem dendrites. Scale bars in a, b are 100 µm. c Synapses with similar transfer resistance to the soma (exemplar synapses are marked as 1–4 at top right) are all mapped to the respective locus in the reduced cylinder so that their transfer resistance is similar in the two models. In the reduced model, these synapses are merged into one "NEURON" process (red synapse in b), but they retain their individual activation time (see Methods and Supplementary Fig. 1). The same mapping also holds for active membrane conductances (yellow region, denoting the Ca2+ "hot spot" in the apical tree). d Transfer impedance \((Z_{d,0} = Z_{0,d})\) between point d on the apical tree (shown in a, b) and the soma (X = 0) as a function of the input frequency in both the detailed (black trace) and the reduced (red trace) models. e Composite somatic EPSPs resulting from sequential activation of the four distal apical synapses shown in c in the detailed model (black trace) and the reduced model (red trace). In this simulation the dendritic tree was passive. The synapses were activated in temporal order 1, 2, 3, 4 as shown by the vertical lines below the composite EPSP. 
The respective peak conductances of these AMPA-based synapses were 0.6, 0.3, 0.4, and 0.4 nS (details in Supplementary Table 2 and see Supplementary Fig. 1 for the active case).
Because the magnitude of the transfer impedance in both the original dendrite and in the respective cylindrical cable spans from \(| {Z_{0,L}(\omega )} |\) to \(| {Z_{0,0}(\omega )} |\), all dendritic loci having intermediate transfer impedance values can be mapped to a specific locus in the respective cylinder that preserves this intermediate transfer impedance. This mapping guarantees (for the passive case) that the magnitude of the somatic voltage response, V0(ω), to an input current, Ix(ω), injected at a dendritic location, x, will be identical in both the detailed and the reduced cylinder models (see Methods). Consequently, synapses and nonlinear ion channels are mapped to their respective loci in the reduced cylinder while preserving the respective transfer impedance to the soma (see Fig. 1, Step B, and Methods). Based on Eqs. (1)–(11), Neuron_Reduce generates a reduced multi-cylindrical tree for any ω value (different reduced models for different ω values). Conveniently, we found a close match between the detailed and the reduced models for ω = 0 (the steady-state case). Therefore, all figures in this work are based on reduced models with ω = 0 (see Discussion).
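For ω = 0 this mapping can also be written down explicitly: on a sealed-end cylinder the transfer resistance from electrotonic position X to the origin is R_inf·cosh(l − X)/sinh(l), which decreases monotonically from the input resistance (X = 0) to the tip value (X = l) and can therefore be inverted. A minimal sketch of that inversion, under the same assumptions as above (our own hypothetical helper, not the published code):

```python
import math

def map_to_cylinder(R_inf, l, R_syn):
    """Electrotonic position X on a sealed-end cylinder of electrotonic
    length l at which the steady-state transfer resistance to the origin
    equals R_syn.  Valid for R_inf/sinh(l) <= R_syn <= R_inf*coth(l).
    """
    # Transfer resistance from position X to the origin of a sealed-end
    # cable:  R_{0,X} = R_inf * cosh(l - X) / sinh(l); invert for X.
    return l - math.acosh(R_syn * math.sinh(l) / R_inf)
```

A synapse (or ion channel density) with transfer resistance R_syn in the detailed tree is then placed at X = map_to_cylinder(R_inf, l, R_syn) on the reduced cylinder, which preserves its steady-state coupling to the soma by construction.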
Neuron_Reduce implemented on L5 pyramidal cell with synapses
In Fig. 1, Neuron_Reduce is implemented on a detailed compartmental model (CM) of a 3D reconstructed layer 5 pyramidal neuron from the rat somatosensory cortex (same model as in ref. 5). This neuron consists of eight basal dendrites and one apical dendrite (shown in different colors) stemming from the soma. This neuron model has active membrane ion channels at both the soma and dendrites (see below). However, Neuron_Reduce first treats the modeled tree as passive by abolishing all voltage-dependent membrane conductances and retaining only the leak conductance. Implementing Eqs. (1)–(11) for this cell produced a reduced, multi-cylindrical, passive model (Fig. 1b, Step A) consisting of only 50 compartments rather than the 642 compartments in the detailed model.
Figure 1c shows an example of four synapses located at different apical branches. These synapses all have the same transfer resistance to the soma in the detailed tree. Therefore, Neuron_Reduce maps these synapses to a single locus in the respective cylinder, such that their transfer resistance is identical in both models. In the reduced model, these synapses are merged into one "NEURON" process (red synapse in Fig. 1b). However, they retain their individual activation times (see Methods). Figure 1d compares the transfer impedance between a specific point in the apical tree (marked by "d" in Fig. 1a, b) and the soma. By construction, for the passive case, the transfer resistance (for ω = 0) is equivalent for the respective loci in the detailed and the reduced model. This is indeed the case in Fig. 1d (left-most point on the x-axis), thus validating the implementation of the Neuron_Reduce analytic method. Note that although constructed using ω = 0, the similarity between the detailed and reduced model also holds for higher input frequencies. However, for ω around 10–100 Hz, the transfer impedance from d to the soma (and vice versa, due to the reciprocity theorem for passive systems43) is somewhat larger in the reduced model (compare the red and black lines).
To test the performance of Neuron_Reduce on transient synaptic inputs (composed of mixed input frequencies), we sequentially activated the four synapses shown in Fig. 1c in both the detailed and the reduced models (see Methods and Supplementary Table 2). Figure 1e shows the close similarity in the composite somatic EPSPs between the two models, further validating that the mapping of the detailed model to the reduced model using ω = 0 provides satisfactory results for the passive case (see also Supplementary Fig. 2).
Accuracy and speed-up of Neuron_Reduce for nonlinear models
To measure the accuracy of Neuron_Reduce for a fully-active nonlinear neuron model, we ran a comprehensive set of simulations using the well-established case of the L5 pyramidal cell model5 shown in Fig. 2a (same cell as in Fig. 1). This neuron model includes a variety of nonlinear dendritic channels including a voltage-dependent Ca2+ "hot spot" in the apical tuft (schematic yellow region in Fig. 1c) and a Na+-based spiking mechanism in the cell body. We randomly distributed 8000 excitatory and 2000 inhibitory synapses on the modeled dendritic tree (the synaptic parameters are listed in Supplementary Table 2) and used Neuron_Reduce to generate a reduced model for this cell. We simulated the detailed model by randomly activating the excitatory synapses at 5 Hz and the inhibitory synapses at 10 Hz (see Methods). The detailed model responded with an average firing rate of 11.8 Hz (black trace in Fig. 2b; only 2 out of 50 s simulation time are shown). The average firing rate of the respective reduced model in response to the same synaptic input was 11.3 Hz (red trace, Fig. 2b; spike timings are shown by small dots on the top). The cross-correlation between the two spike trains peaked around zero (Fig. 2c), and the inter-spike interval (ISI) distributions of the two models were similar (Fig. 2d).
Fig. 2: Neuron_Reduce faithfully replicated the I/O properties of a detailed nonlinear model of a L5 pyramidal cell.
a Layer 5 pyramidal cell model5 as in Fig. 1a, with 8000 (AMPA + NMDA) excitatory (magenta dots) and 2000 inhibitory synapses (cyan dots, see Supplementary Table 2 for synaptic parameters). Excitatory synapses were activated randomly at 5 Hz and the inhibitory synapses at 10 Hz. This detailed model consists of a dendritic Ca2+ "hot spot" (as in Fig. 1c) and a Na+ spiking mechanism at the cell body. Scale bar 100 µm. b An example of the voltage dynamics at the soma of the detailed model (black trace) and the reduced model (red trace); spike times are represented by the black and red dots above the respective spikes. c Cross-correlation between spikes in the reduced versus the detailed models. d Inter-spike interval (ISI) distributions for the two models. e Output firing rate of the reduced (red) versus the detailed (black) models as a function of the firing rate of the excitatory synapses. Gray dots represent the case shown in b. f SPIKE-synchronization measure between the two models as a function of the firing rate of the detailed model for the case of only AMPA (blue) and AMPA + NMDA synapses (orange). The performance of the reduced model with NMDA synapses was lower for low output frequency, but improved significantly for output frequencies above ~7 Hz (see Discussion). g SPIKE synchronization between the detailed and the reduced models as a function of the firing rate of the detailed model, for active and passive dendrites, and with/without NMDA-based synaptic conductance.
The full range of responses to a random synaptic input for the two models was explored by varying the firing rate of the excitatory (α-amino-3-hydroxy-5-methyl-4-isoxazolepropionic acid (AMPA)- and (N-methyl-d-aspartate) (NMDA)-based) synapses and measuring the degree of similarity between the firing rates of the two models, which indicated a good fit between the two (Fig. 2e). We used the SPIKE-synchronization measure44,45 to further quantify the similarity between the spike trains of the detailed and reduced models. The SPIKE-synchronization value for the two spike trains shown in Fig. 2b was 0.8. In Fig. 2f, the SPIKE synchronization was computed as a function of the output rate of the detailed model for both the case where the excitatory synapses consisted of only an AMPA component (blue) and for when they also consisted of an NMDA component (orange). For the AMPA-only case, the SPIKE synchronization was high for all output frequencies, but was poor for low output frequencies when the synapses consisted of an NMDA component, although improving significantly for output frequencies above ~7 Hz (see Discussion). Figure 2g shows the SPIKE-synchronization as a function of the firing rate of the detailed model, for active and passive dendrites and with/without NMDA-based synaptic conductance, demonstrating again that when NMDA synapses are involved, the performance of the reduced model is low for low output rates. We also tested other spike trains similarity metrics46,47 (Supplementary Fig. 3) and found comparable results to those shown in Fig. 2. We have also analyzed the performance of Neuron_Reduce on two additional patterns of synaptic input. In one case, the synaptic input was activated in an oscillatory manner at different frequencies (see Methods). In these cases, the spike-synchronization measure ranged between 0.75 and 1 (Supplementary Fig. 4a, b). In the other case, the synaptic input was taken from a spontaneously active Blue Brain circuit17 (see Methods). 
In this case, the spike-synchronization measure was 0.71 (Supplementary Fig. 4c).
We compared the performance of our reduction method to two other reduction approaches, one of which was Rall's "equivalent cable" reduction method31,48. The other method maps all the dendritic synapses to the somatic compartment, after computing the filtering effect of the dendritic cable for each synapse34 (see Methods). Neuron_Reduce outperformed both these reduction methods (Supplementary Fig. 5).
Figure 3 compares the run-time of the detailed versus the reduced model for the neuron model shown in Fig. 2a. For example, simulating the detailed model with 10,000 synapses for 50 s of biological time required 2906 s of computer time (run-time), whereas it took only 68.7 s in the reduced model, a ~42-fold computational speed-up (see Supplementary Table 1). The larger the number of synapses in the detailed model, the longer the run-time (Fig. 3a). In contrast, the run-time in the reduced model is only shallowly dependent on the number of synapses. This is expected when considering the synaptic merging step in our algorithm (see Discussion). The run-time of the reduced model depends on the number of compartments per cylinder; it increases sharply with an increasing number of compartments (the run-time ratio between the detailed and the reduced models decreases, gray line in Fig. 3b). However, there was no improvement in the SPIKE-synchronization measure when the spatial discretization, ΔX, per compartment was <0.1λ, where λ is the length constant (Fig. 3b blue line and see also previous research on the subject49). Therefore, all the results presented in Figs. 1–7 are based on models with a ΔX that does not exceed 0.1λ.
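The ΔX ≤ 0.1λ criterion translates directly into a choice of the number of electrical segments per cylinder. A minimal sketch of that choice (our own hypothetical helper; NEURON conventionally uses an odd nseg so that a node sits at the segment center):

```python
import math

def nseg_for(L, lam, max_dX=0.1):
    """Smallest odd number of segments keeping each segment shorter than
    max_dX space constants.  L and lam in the same units (e.g., cm)."""
    n = max(1, math.ceil(L / (max_dX * lam)))
    return n if n % 2 == 1 else n + 1  # round up to an odd segment count
```

With max_dX = 0.1 this reproduces the discretization used for all models in Figs. 1–7; finer discretization only increases run-time without improving the SPIKE-synchronization measure.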
Fig. 3: Neuron_Reduce enhances the simulation speed by up to several hundred fold.
a Simulation run-time for the detailed (black) and the reduced models (red) of the layer 5 pyramidal cell shown in Fig. 2a, for a simulation of 50 s, and their ratio (the speed-up, gray) as a function of the number of simulated (GABAA-, AMPA- and NMDA-based) synapses. Due to the almost constant run-time of the reduced model, the run-time ratio increases with a larger number of synapses. Above 75,000 synapses, an additional effect becomes visible: the detailed model no longer fits into the cache of the CPU and exhibits a supralinear increase in run-time. This can be seen by the black curve deviating from the dotted red curve, which shows the expected simulation time for the detailed model assuming a constant computation cost per synapse (see also Supplementary Table 1). b Accuracy (blue) of the reduced model and its speed-up in simulation run-time (gray) as a function of the number of electrical compartments per length constant for a neuron with 10,000 synapses (50 s per simulation).
Fig. 4: The dendritic potential in the reduced model represents the average dendritic voltage dynamics in the detailed model.
a Detailed model (left) and reduced model (right) of the cell shown in Fig. 2. Dendritic branches of the same color in the detailed model are all mapped to the respective compartment with identical color in the reduced model. b For each of the four colored regions shown in a (and respective colored sphere at top left), the voltage transients in individual branches are shown by the gray traces. Superimposed in black is the average voltage of these traces and in red is the voltage transient in the respective compartment in the reduced model. The somatic spikes in the detailed model (black) and reduced model (red) are also shown. The simulation is as in Fig. 2e, with excitatory synapses firing at 5.5 Hz. Scale bars for the respective morphologies are 100 µm.
Fig. 5: Dendritic Ca2+ spike and BAC firing faithfully replicated in the reduced model.
a, b (Left) Detailed L5 pyramidal cell model with nonlinear Ca2+ "hot spot" (same model as in Fig. 2). a Injecting a depolarizing step current to the soma (0.95 nA for 8.5 ms) in the detailed model evoked a somatic action potential, AP (black trace) that propagated backward semi-actively into the apical tree (red trace). b Combining the somatic input with a transient synaptic-like current injection (0.95 nA peak value with 0.5 and 5 ms rise time and decay time, respectively; red transient) to the "hot region" in the apical dendrite evoked a prolonged local Ca2+ spike, which, in turn, triggered a burst of two extra somatic Na+ spikes (the BAC firing phenomenon50). c, d Same as in a, b, but for the reduced model. Scale bars for the respective morphologies are 100 µm.
Fig. 6: Discriminating spatio-temporal input sequences in the detailed versus the reduced model.
a A model of L5PC (detailed model, Fig. 1) with 12 excitatory synapses spatially distributed on one of its basal dendrites (red dots on green basal dendrite). b Somatic responses to sequential activations of its basal synapses in the IN (cyan) and the OUT (blue) directions. In this case, the synaptic model only consists of an AMPA component. c As in b but the synaptic model consists of both AMPA and NMDA components. d Reduced model for the detailed model shown in a. Neuron_Reduce mapped the 12 synapses in the detailed model into five synapses in the reduced model. e, f As in b, c, but for the reduced model. g Pattern separability (see Methods) of the detailed (black) and the reduced (red) models when the synaptic model only consists of an AMPA component. h As in g, after subtracting the peak voltage obtained in the OUT direction from each of the voltage responses. i, j As in g, h but when the synaptic models consisted of both AMPA and NMDA conductances. Note the similarity between the detailed and the reduced models in terms of pattern separability.
Fig. 7: Neuron_Reduce working successfully on a variety of neuron models.
a–c Detailed models of three somatosensory neurons (left, L6 tufted pyramidal cell in green; middle, L2/3 large basket cell in red; and right, L4 double bouquet cell in blue) and their respective reduced models. Scale bars 100 µm. d–f Voltage responses to an excitatory synaptic input activated at 1.8, 2.9, and 3.17 Hz, respectively, for both the detailed (black) and the reduced models (corresponding colors). The inhibitory input activation rate was 10 Hz for all models. g The SPIKE-synchronization index for the 13 detailed versus reduced neuron models. The mean simulation speed-up for the L6 tufted pyramidal cell, L5 Martinotti cell, and L4 spiny stellate cell were 95, 40, and 60, respectively. See Supplementary Table 2 for cell models and input parameters and Supplementary Fig. 6 for the SPIKE-synchronization measure on additional 88 modeled cells.
In Fig. 4 we compared the dendritic voltage in the detailed model with that at the respective location in the reduced model. We found that: (i) the voltage transients could differ significantly across dendritic branches that are all mapped to the same compartment in the reduced model (e.g., compare the gray traces in the yellow compartments in Fig. 4b); and (ii) the average voltage trace of these different dendritic branches (black trace in Fig. 4b) is similar to the voltage in the respective compartment in the reduced model (red trace in Fig. 4b). The implications of the latter finding for capturing highly nonlinear local dendritic events are elaborated in the Discussion.
Neuron_Reduce keeps dendritic nonlinearities and computations
To determine the capabilities of the reduced models to support nonlinear dendritic phenomena and dendritic computations, we repeated two classical experiments in both the detailed and the reduced models of the L5 pyramidal cell shown in Fig. 1. The first simulated experiment started by injecting a brief depolarizing step current to the soma of the detailed model to generate a somatic Na+ action potential (AP, black trace in Fig. 5a). This AP propagated backward to the apical dendrite, the backpropagating AP (BPAP; red trace in Fig. 5a). Repeating the same current injection in the reduced model led to a similar phenomenon, but with a larger BPAP (Fig. 5c). The detailed model also included a "hot region" with voltage-dependent calcium conductances in its apical dendrite (see also Fig. 1). Combining somatic current injection with synaptic-like transient depolarizing current injected to the apical nexus evoked a prolonged Ca2+ spike in the distal apical dendrite (red trace at the apical tree), which, in turn, generated a burst of somatic Na+ spikes (the BPAP-activated Ca2+ spike (BAC) firing4,5,50, Fig. 5b). Neuron_Reduce maps the nonlinear dendritic "hot" Ca2+ region to its respective location in the reduced model (see Fig. 1 and Methods). Figure 5c, d shows that the exact same combination of somatic and dendritic input currents also produced the BAC firing phenomenon in the reduced model. However, the reduced model was somewhat more excitable than the detailed model; this resulted in a burst of three spikes with a higher frequency (and sometimes with an additional spike) in the reduced model (compare Fig. 5b and Fig. 5d).
The second simulated experiment attempted to replicate theoretical and experimental results reported in previous studies1,51,52. In these studies, several excitatory synapses were activated sequentially in time, on a stretch of a basal dendrite, either in the soma-to-dendrites (OUT) direction or vice versa (the IN direction). Rall showed that the shape and size of the resultant composite somatic EPSP depended strongly on the spatio-temporal order of synaptic activation; it was always larger and more delayed for the centripetal (dendrites-to-soma) than for the centrifugal (soma-to-dendrites) sequence of synaptic activation (this difference can serve to compute the direction of motion51). It was shown that the difference in the resulting somatic voltage peak between these two spatio-temporal sequences of synaptic activation was enhanced when nonlinear NMDA-dependent synapses were involved and that it made it possible to discriminate between complex patterns of dendritic activation52.
To simulate these phenomena, 12 excitatory synapses were placed along one basal branch in the detailed model (red dots on the green basal tree, Fig. 6a). At first, the synapses only had an AMPA component. The synapses were activated in temporal order from the tip to the soma (IN, cyan traces) or from the soma to the tip (OUT, blue traces, see Methods for details). As predicted by Rall, activation in the IN direction resulted in a larger and more delayed somatic EPSP (cyan trace versus the blue trace in Fig. 6b). Neuron_Reduce merged these 12 synapses into five point processes along the respective cylinder (Fig. 6d). We repeated the same experiment in the reduced model and found that the EPSP resulting from the IN direction was larger and more delayed, with a similar EPSP waveform to that of the detailed model (Fig. 6e; see also Supplementary Fig. 2 and Discussion). Next, an NMDA component was added to the 12 simulated synapses; this resulted in larger somatic EPSP amplitudes in both directions (and both models) and a smaller difference in the peak timing between the different directions in both the detailed and the reduced models (compare Fig. 6c and Fig. 6f).
To generalize the impact of the spatio-temporal order of synaptic activation, we used a directionality index suggested in a previous study52. This measure estimates how different a given synaptic sequence is from the IN sequence by calculating the number of synaptic swaps needed to convert this given pattern into the IN pattern (using the bubble-sort algorithm, see Methods). We tested the EPSPs that resulted from different temporal combinations of synaptic activation (each having a different directionality index), both without (Fig. 6g) and with an NMDA component (Fig. 6i). The peak somatic EPSP in the reduced model (red dots) was larger than in the respective detailed model (black dots), both for the AMPA-only case (by 1.71 ± 0.43 mV; mean ± SD) and for the AMPA + NMDA case (by 4.80 ± 0.74 mV); see Supplementary Fig. 1. Nevertheless, the behavior of the two models was similar when the somatic voltage in the two models was subtracted by the peak value obtained in the OUT direction (Fig. 6h, j). Then, the difference between the reduced and the detailed models was, on average, only 0.11 ± 0.43 mV for the AMPA-only case and 0.35 ± 0.74 mV for the AMPA + NMDA case. Thus, although the detailed and the reduced models differ to a certain extent (see Discussion), the capability of the reduced model to discriminate between spatio-temporal patterns of synaptic activation is similar to that of the detailed model.
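The directionality index described above (number of adjacent swaps needed to turn a given activation order into the IN pattern, i.e., the bubble-sort distance) can be sketched in a few lines of Python; the function name and labelling convention are our own, not taken from ref. 52:

```python
def directionality_index(activation_order):
    """Number of adjacent swaps (bubble-sort distance, i.e. the inversion
    count) needed to turn the given activation order into the IN sequence.
    Synapses are labelled 0..n-1 from the dendritic tip towards the soma,
    so the IN pattern itself scores 0 and the OUT (reversed) pattern
    scores n*(n-1)/2.
    """
    seq = list(activation_order)
    swaps = 0
    for i in range(len(seq)):                 # plain bubble sort, counting swaps
        for j in range(len(seq) - 1 - i):
            if seq[j] > seq[j + 1]:
                seq[j], seq[j + 1] = seq[j + 1], seq[j]
                swaps += 1
    return swaps
```

For the 12 synapses used here, the index thus ranges from 0 (IN) to 66 (OUT), which is the x-axis underlying Fig. 6g–j.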
Neuron-Reduce applied successfully on a variety of neurons
We next tested the utility of Neuron_Reduce on 13 different neuron models from different brain regions (Fig. 7). Four models were obtained from the Blue Brain database17,53: an L6 tufted pyramidal cell, an L4 double bouquet cell, an L4 spiny stellate cell, and an L5 Martinotti cell, all from the rat somatosensory cortex. Two additional models were obtained from the Allen Institute cell-type database11: an L4 spiny cell and an L1 aspiny cell from the mouse visual cortex. Further models included a medium spiny neuron from the mouse basal ganglia54, two rat thalamocortical neurons55, a Golgi cell from the mouse cerebellar cortex, and an inhibitory hippocampal neuron from the rat56. We also took two additional neuron models from our laboratory: a rat L2/3 large basket cell57 and a model of a human L2/3 pyramidal cell from the temporal cortex58. All these models were based on 3D reconstructions and were constrained by experimental recordings (see Supplementary Table 2 for details on the various neuron models and input parameters).
Neuron_Reduce successfully generated a reduced model for all these different cell types, with highly faithful response properties in all cases (Fig. 7). Three examples with their respective morphologies for the detailed and reduced models are shown in Fig. 7a–c. For a given input, we measured the spiking activity of the detailed and reduced models (Fig. 7d–f) and calculated the corresponding SPIKE-synchronization values. For the L6 tufted PC model (Fig. 7a, d), the L2/3 large basket cell model (Fig. 7b, e), and the L4 double bouquet model (Fig. 7c, f), the SPIKE-synchronization values were 0.74, 0.85, and 0.91, respectively, for 50-s-long simulations (only 2 s are shown in Fig. 7d–f). The SPIKE-synchronization values for additional inputs, and for the other 10 neuron models and their corresponding reduced models, are shown in Fig. 7g. We have also tested the performance of Neuron_Reduce and the variability of the SPIKE-synchronization measure using eight neocortical neuron types, with 11 cell models per type taken from the Blue Brain cells dataset17,53. Supplementary Fig. 6 shows that, for all cells, the SPIKE-synchronization measure remains similar to that found in Fig. 7 with mean values per cell type ranging between 0.43 and 0.86. Additionally, as in Fig. 7, it increased with the output frequency of the modeled cell.
Neuron_Reduce is a new tool for simplifying complex neuron models while enhancing their simulation run-time. It analytically maps the detailed tree into a reduced multi-cylindrical tree, based on Rall's cable theory and linear circuit theory (Fig. 1). The underpinning of the reduction algorithm is that it preserves the magnitude of the transfer impedance \(| {Z_{0,j}\left( \omega \right)} |\) from each dendritic location, j, to the soma (the dendro-somatic direction, Eqs. (1)–(11) in Methods). Since in linear systems it holds that \(| {Z_{0,j}(\omega )} | = | {Z_{j,0}(\omega )} |\), for passive dendritic trees it also preserves the transfer impedance in the soma-to-dendritic direction (e.g., current injection at the soma will result in the same voltage response at the respective sites in the detailed and reduced models59).
Note that dendritic voltage transients (e.g., synaptic potentials) contain a range of frequencies, ω. However, we had to select a single frequency for the mapping of the detailed tree to the reduced tree. Consequently, we examined a whole range of possible ω values for this mapping. Conveniently, we found that ω = 0 is the preferred frequency for generating the reduced model (namely, when the mapping from the detailed to the reduced model is performed based on the transfer resistance \(| {Z_{0,j}\left( {\omega = 0} \right)} | = | {R_{0,j}} |\), see Supplementary Fig. 7). This result is actually not surprising; Rinzel and Rall33 showed that, in passive trees with current-based synapses, the attenuation of the voltage time integral (the area below the EPSPs) is identical to the attenuation of the steady-state voltage. In other words, when using the transfer resistance for our mapping procedure, we preserved the total charge transfer (which in our case was proportional to the voltage time integral) from the synapse to the soma (and vice versa), but not, for example, the EPSP peak value.
Neuron_Reduce proved accurate in replicating voltage dynamics and spike timing for a large regime of input parameters and a variety of neuron types (Fig. 7, Supplementary Fig. 6, and Supplementary Table 2). This claim is based on using several metrics for assessing the performance of the reduced model (Supplementary Fig. 3). Neuron_Reduce is straightforward to use, fast, and generally applicable, thus enabling its implementation on any neuron morphology with any number (even tens of thousands) of synapses. One key advantage of Neuron_Reduce is that it retains the identity of individual dendrites and synapses and that it maps dendritic nonlinearities to their respective loci in the reduced model, hence preserving local excitable dendritic phenomena and therefore maintaining nonlinear dendritic computations. Neuron_Reduce also preserves the passive cable properties (Rm, Ra, and Cm) of the detailed model, thus preserving synaptic integration and other temporal aspects of the detailed model. Neuron_Reduce can also be applied for reducing cells connected with gap junctions. As Neuron_Reduce preserves the transfer resistance from the location of the synapses (in this case the gap junction) to the soma and vice versa, one expects that the coupling coefficient between the two connected cells will be preserved in the reduced models, after mapping the gap junction to its appropriate location in the reduced model.
Neuron_Reduce enhances the computational speed by a factor of up to several hundred-fold, depending on the simulated morphology and the number of simulated synapses (Fig. 3 and Supplementary Table 1). This combination of capabilities, together with its user-friendly documentation and its public availability, makes Neuron_Reduce a promising method for the community of neuronal modelers and computational neuroscientists, and for the growing community interested in "biophysical deep learning."
For a large number of synapses and complex morphologies, the run-time of Neuron_Reduce models can be accelerated by up to 250-fold as compared to their respective detailed models (Fig. 3 and Supplementary Table 1). This is achieved in two associated steps. First, the algorithm reduces the number of compartments of the neuron model; for example, for the reconstructed tree in Fig. 1, it reduced the number of compartments from 642 to 50. Then, synapses (and ion channels) that are mapped to the same electrical compartment in the reduced tree (because they have similar transfer resistance to the soma) are merged into one point process in NEURON. Each of these steps on its own has a relatively small effect on the run-time. However, when combined, a large (supralinear) improvement in the computational speed is achieved (Supplementary Table 1). This is because at each time step, NEURON computes both the voltage in each electrical compartment as well as the currents and states of each point process and membrane mechanism (synapses and conductances). Reducing the number of compartments in a model decreases the number of equations to be solved and the number of synapses to be simulated (due to the reduced number of compartments, a larger number of synapses are merged together). Importantly, merging synapses preserves the activation time of each synapse. Note, however, that in its present state, Neuron_Reduce cannot merge synapses with different kinetics.
Several other reduction methods for single neurons have been proposed over the years12,34,35,36,37,38,39,41. Most are not based on an analytic underpinning and thus require hand-tuning of the respective biophysical and morphological parameters. In addition, most of these methods have not been examined using realistic numbers of dendritic synapses and are incapable of systematic incorporation of dendritic nonlinearities. In most cases, their accuracy has not been assessed for a range of neuron types (but see ref. 41). Many of these methods are not well documented, thus making it hard to compare them directly with Neuron_Reduce. Nevertheless, we did compare the performance of Neuron_Reduce to two other reduction methods and showed that it outperformed them (Supplementary Fig. 5).
It should be noted that although the transfer impedance from a given dendritic locus to the soma is preserved in the reduced model, the input impedance at that locus is not preserved (is lower) in the reduced model. Consequently, the conditions for evoking local dendritic events, and the fine details of these events are not identical in the detailed and the reduced models (e.g., compare Fig. 5a, b to Fig. 5c, d and see Fig. 4). Indeed, if there were highly local dendritic Na+ spikes (as in ref. 60), then Neuron_Reduce will not capture them, as this local dendritic spike will be averaged out in the respective lumped cable. Similarly, because the local voltage response to a current injection in the dendrite depends on the dendritic impedance, the local synaptic responses are somewhat different in the detailed versus the reduced cases, especially when voltage-gated ion channels (such as NMDA-dependent synaptic channels) are involved. In fact, when large dendritic NMDA signals are involved, the resultant somatic EPSPs are expected to be different in the detailed as compared to the reduced model, as is the case in Figs. 2 and 6. Indeed, if one insists on preserving highly local nonlinear dendritic events, then the full dendritic tree should be modeled.
Despite these local differences, the reduced model for L5PC did generate a local dendritic Ca2+ spike in the cylinder representing the apical dendrite and was able to perform an input classification task (enhanced by NMDA conductance), as in the detailed tree (Figs. 5 and 6). Moreover, when embedded in large circuits, individual neurons are likely to receive semi-random dendritic input, rather than a clustered input on specific dendrites. For such inputs, the reduced models generated by Neuron_Reduce capture most of the statistics of the membrane voltage dynamics as in the detailed model (Figs. 2 and 7 and Supplementary Figs. 4 and 6).
The next straightforward step is to use Neuron_Reduce to simplify all the neurons composing a large neural network model, such as the Blue Brain Project17 and the in silico models by Egger et al.16 and by Billeh et al.61. By preserving the connectivity and reducing the complexity of the neuronal models, the reduced models will make it possible to run much longer simulations and/or larger neuronal networks, while faithfully preserving the I/O of each neuron. Such long simulations are critical for reproducing long-term processes such as circuit evolution and structural and functional plasticity.
Neuron_Reduce algorithm and its implementation in NEURON
Neuron_Reduce maps each original stem dendrite to a unique single cylinder with both ends sealed. This cylinder preserves the specific passive cable properties (Rm, Cm, and Ra) of the original tree as well as both the transfer impedance from the electrotonically most distal dendritic tip to the soma and the input resistance at the soma end of the corresponding stem dendrite (when disconnected from the soma). For a sinusoidal angular frequency ω > 0, the transfer impedance Zi,j(ω) is the ratio between the Fourier transform of the voltage at point (i) and the Fourier transform of the sinusoidal current injected into the injection point (j) (note that in passive systems, Zi,j(ω) = Zj,i(ω)). This ratio is a complex number; its magnitude (|Zi,j(ω)|) is the ratio (in Ω) between the peak voltage response and the amplitude of the injected current. In a short cylindrical cable with sealed ends and electrotonic length L, the transfer impedance, Z0,X(ω), between the somatic end of the cylinder (X = 0) and any location X is33,43,62
$$Z_{0,X}\left( \omega \right) = \frac{R_\infty }{q}\frac{\cosh \left( q\left( L - X \right) \right)}{\sinh (qL)}, \quad (1)$$
$$R_\infty = \frac{2}{\pi }\frac{\sqrt {R_{\mathrm{m}}R_{\mathrm{a}}} }{d^{3/2}} \quad (2)$$
$$q = \sqrt {1 + i\omega \tau }, \quad (3)$$
where τ = RmCm is the membrane time constant.
From Eq. (1), the input impedance at X = 0 is
$$Z_{0,0}\left( \omega \right) = \frac{R_\infty }{q}\coth (qL). \quad (4)$$
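For concreteness, Eqs. (1)–(4) can be evaluated numerically. The following is a minimal Python sketch (not the published Neuron_Reduce implementation); the default parameter values and unit conventions (Rm in Ω·cm², Ra in Ω·cm, Cm in µF/cm², d in cm, ω in rad/s) are illustrative assumptions:

```python
import cmath
import math

def transfer_impedance(X, L, d, Rm=20000.0, Ra=150.0, Cm=1.0, omega=0.0):
    # Eqs. (1)-(3): transfer impedance Z_{0,X}(omega) of a sealed-end
    # cylinder of electrotonic length L; at X = 0 this reduces to the
    # input impedance of Eq. (4), (R_inf / q) * coth(qL).
    tau = Rm * Cm * 1e-6                                    # membrane time constant [s]
    q = cmath.sqrt(1 + 1j * omega * tau)                    # Eq. (3)
    R_inf = (2 / math.pi) * math.sqrt(Rm * Ra) / d ** 1.5   # Eq. (2)
    return (R_inf / q) * cmath.cosh(q * (L - X)) / cmath.sinh(q * L)
```

At ω = 0 the result is purely resistive and equals the transfer resistance used for the mapping.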
We next want a cylindrical cable of electrotonic length L, in which both \(| {Z_{0,L}\left( \omega \right)} |\) and \(| {Z_{0,0}\left( \omega \right)} |\) are identical to those measured in the respective original stem dendrite (Fig. 1). For this purpose, we first look for an L value in which the ratio \(| {Z_{0,L}\left( \omega \right)} |/| {Z_{0,0}\left( \omega \right)} |\) is preserved. Dividing Eq. (1) by Eq. (4), we get
$$\frac{Z_{0,X}\left( \omega \right)}{Z_{0,0}\left( \omega \right)} = \frac{\cosh \left( q(L - X) \right)}{\cosh \left( qL \right)}, \quad (5)$$
which can be expressed as
$$\frac{Z_{0,X}\left( \omega \right)}{Z_{0,0}\left( \omega \right)} = \frac{\cosh \left( a\left( L - X \right) + ib\left( L - X \right) \right)}{\cosh (aL + ibL)} = M\exp (i\phi ), \quad (6)$$
where a and b are the real and the imaginary parts of q, respectively, and M and ϕ are the modulus and phase angle of this complex ratio.
As shown previously62, it follows that
$$M = \frac{| Z_{0,X}\left( \omega \right) |}{| Z_{0,0}\left( \omega \right) |} = \left[ \frac{\cosh \left( 2a\left( L - X \right) \right) + \cos \left( 2b\left( L - X \right) \right)}{\cosh \left( 2aL \right) + \cos \left( 2bL \right)} \right]^{0.5} \quad (7)$$
$$\phi = \arctan \left[ \tanh \left( a\left( L - X \right) \right)\tan \left( b\left( L - X \right) \right) \right] - \arctan \left[ \tanh \left( aL \right)\tan \left( bL \right) \right]. \quad (8)$$
Importantly, for a fixed M (and a given ω) there is a unique value of L that satisfies Eq. (7) (see Fig. 4 in ref. 62 and note the one-to-one mapping between M and L for a given ω value). However, there are infinitely many cylindrical cables (with different diameters and lengths) that have identical L values preserving a given M value in Eq. (7).
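Because this mapping between M and L is one-to-one (for the ω values used here), L can be recovered from a measured M by simple bisection. The sketch below assumes, as in the text, that the modulus ratio of Eq. (7) with X = L decreases monotonically with L for the given q = a + ib:

```python
import math

def modulus_ratio(L, a=1.0, b=0.0):
    # Eq. (7) with X = L: |Z_{0,L}| / |Z_{0,0}| for a sealed cylinder, q = a + ib.
    return math.sqrt((math.cosh(0.0) + math.cos(0.0)) /
                     (math.cosh(2 * a * L) + math.cos(2 * b * L)))

def find_L(M, a=1.0, b=0.0, tol=1e-10):
    # Bisection over L: modulus_ratio decreases monotonically from 1 toward 0,
    # so the L satisfying modulus_ratio(L) = M is unique.
    lo, hi = 1e-9, 50.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if modulus_ratio(mid, a, b) > M:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

For ω = 0 (a = 1, b = 0), the ratio reduces to 1/cosh(L), so `find_L` simply inverts the cosh.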
We next need a unique cable, with a specific diameter d, that also preserves the measured value of |Z0,0(ω)| (and therefore it also preserves |Z0,L(ω)|, see Eq. (7)).
From Eqs. (2) and (4) we get
$$Z_{0,0}\left( \omega \right) = \frac{2}{\pi q}\frac{\sqrt {R_{\mathrm{m}}R_{\mathrm{a}}} }{d^{3/2}}\coth (qL) \quad (9)$$
and thus
$$| Z_{0,0}\left( \omega \right) | = \left| \frac{2}{\pi q}\frac{\sqrt {R_{\mathrm{m}}R_{\mathrm{a}}} }{d^{3/2}}\coth (qL) \right| \quad (10)$$
from which we compute the diameter, d, for that cylinder
$$| d | = \left| \left( \frac{2}{\pi }\frac{\sqrt {R_{\mathrm{m}}R_{\mathrm{a}}} }{qZ_{0,0}\left( \omega \right)}\coth \left( qL \right) \right)^{2/3} \right|. \quad (11)$$
Equations (1)–(11) provide the unique cylindrical cable (with a specific d and L, and the given membrane and axial properties) that preserves the values of \(| {Z_{0,L}\left( \omega \right)} |\) and \(| {Z_{0,0}\left( \omega \right)} |\) as in the respective stem dendrite. Note that this unique cable does not necessarily preserve the phase ratio (ϕ in Eq. (8)) as in the original tree.
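At ω = 0 (q = 1), Eq. (11) gives the diameter in closed form. A minimal sketch (the default Rm and Ra values are placeholders, units as above):

```python
import math

def cylinder_diameter(Z00, L, Rm=20000.0, Ra=150.0):
    # Eq. (11) at omega = 0 (q = 1): the unique diameter [cm] for which a
    # sealed cylinder of electrotonic length L has input resistance Z00 [ohm];
    # coth(L) = 1 / tanh(L).
    return ((2 / math.pi) * math.sqrt(Rm * Ra) / (Z00 * math.tanh(L))) ** (2.0 / 3.0)
```

Together with the L found from Eq. (7), this fixes the unique cylinder for each stem dendrite.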
Practically, in order to transform each original stem dendrite (with fixed Rm, Ra, and Cm values) into a corresponding unique cylindrical cable, we proceeded as follows. First, on each modeled stem dendrite (when isolated from the soma), we searched for a distal location x with minimal transfer impedance, \(| {Z_{0,x}\left( \omega \right)} |\), from that particular x to the soma. This location provided the smallest M value for this particular stem dendrite. This distal dendritic locus, x, was mapped to the distal end, X = L, of the corresponding cylinder. We then used Eqs. (1)–(11) to calculate the unique respective cylinder for each stem dendrite.
In order to map synapses from the detailed model to the reduced one, we computed, for each synapse at location j in the detailed model, \(| {Z_{0,j}\left( \omega \right)} |\), and then mapped this synapse to the respective location y in the reduced model, such that \(| {Z_{0,y}\left( \omega \right)} | = | {Z_{0,j}\left( \omega \right)} |\). This reduced model is then compartmentalized into segments (typically with spatial resolution of 0.1λ, see Fig. 3b). We then merged all synapses with identical kinetics and reversal potential, that are mapped to a particular segment, onto a single-point process object in NEURON (Fig. 1, Step B). These synapses retain their original activation time and biophysical properties through the connection of each of their respective original NetStim objects to the single-point process that represents them all (each of these connections was mediated by the synapse's original NetCon object). As shown in Supplementary Table 1, this step dramatically reduced the running time of the model. We note that all the results presented in this study were obtained using ω = 0 in Eqs. (1)–(11), since running the same simulations with ω = 0 provided the best performance (see Supplementary Fig. 7). However, ω is a parameter in the algorithm code and can be modified by the user. Note also that \(| {Z_{0,0}\left( \omega \right)} |\), \(| {Z_{0,j}\left( \omega \right)} |\), and \(| {Z_{0,L}\left( \omega \right)} |\) were analytically computed for each original stem dendrite using the NEURON impedance tool63.
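The merging step described above can be sketched as follows (a simplified stand-in for the NEURON point-process bookkeeping; the segment resolution and the tuple layout of `synapses` are illustrative assumptions):

```python
def merge_synapses(synapses, seg_dx=0.1):
    # Group synapses that fall in the same electrical segment of the reduced
    # cylinder AND share kinetics and reversal potential; each group would be
    # realized as one point process. `synapses` is a list of tuples
    # (X, kinetics, e_rev, activation_time), with X the mapped location.
    merged = {}
    for X, kin, e_rev, t in synapses:
        key = (int(X / seg_dx), kin, e_rev)   # segment index + synapse type
        merged.setdefault(key, []).append(t)  # original activation times kept
    return merged
```

Each group would then be driven by the original NetStim event times via the corresponding NetCon objects, preserving activation times exactly.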
Neuron models used in the present study
To estimate the accuracy of the reduction method, we ran 50-s simulations of cell morphologies of different types, in both the reduced and detailed models (see also Supplementary Fig. 6). Models of 13 neurons were used in this study; their details are available in Supplementary Table 2. For each of the models, we distributed 1,250–10,000 synapses on their dendritic trees. Eighty percent of the synapses were excitatory, and the rest were inhibitory. The synaptic conductances were modeled using known two-state kinetic synaptic models17. For simplicity, we did not include synaptic facilitation or depression. All models had one type of γ-aminobutyric acid type A (GABAA)-based inhibitory synapses and either AMPA- or AMPA + NMDA-based excitatory synapses. The synaptic rise and decay time constants were taken from various works cited in Supplementary Table 2. When no data were available, we used the default parameters of the Blue Brain Project synaptic models17,53. Inhibitory synapses were activated at 10 Hz, whereas the activation rate of the excitatory synapses was varied to generate different output firing rates in the range of 1–20 Hz (Figs. 2–4, 7 and Supplementary Figs. 3–7); the values used for each model are listed in Supplementary Table 2. In all models except Supplementary Fig. 4, synaptic activation time was randomly sampled from a homogenous Poisson process. In Supplementary Fig. 4a, b the activation time was sampled from an inhomogeneous Poisson process with a time-dependent intensity \(\lambda \left( t \right) = r \, \ast \, {\mathrm{sin}}\left( {t \, \ast f \ast 2\pi } \right) + 1\), where t is time in s, r is the firing rate of the synapse, and f is the oscillation frequency.
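Sampling from an inhomogeneous Poisson process can be implemented by thinning (Lewis–Shedler): candidates are drawn from a homogeneous process at an upper-bound rate and kept with probability λ(t)/λ_max. The sketch below clamps negative intensities to zero; the r and f values are illustrative:

```python
import math
import random

def sample_inhomogeneous_poisson(rate_fn, T, lam_max, seed=0):
    # Thinning: homogeneous candidates at lam_max, each kept with
    # probability rate_fn(t) / lam_max (negative intensities clamped to 0).
    rng = random.Random(seed)
    times, t = [], 0.0
    while True:
        t += rng.expovariate(lam_max)
        if t > T:
            return times
        if rng.random() * lam_max < max(0.0, rate_fn(t)):
            times.append(t)

# Intensity used in Supplementary Fig. 4a, b: lambda(t) = r*sin(2*pi*f*t) + 1
r, f = 5.0, 1.0
spikes = sample_inhomogeneous_poisson(lambda t: r * math.sin(2 * math.pi * f * t) + 1,
                                      T=10.0, lam_max=r + 1)
```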
In Supplementary Fig. 4c, we extracted a single layer 5 thick-tufted pyramidal cell with an early bifurcating apical tuft (L5_TTC2; gid 75586) from the active Blue Brain microcircuit17, with calcium and potassium concentrations of 1.23 and 5.0 mM, respectively. The synaptic activation from the microcircuit was replayed to this detailed model and also to its respective reduced model. Synaptic depression and facilitation were disabled, and the synapse time constants, which varied in the microcircuit, were set to their mean values (the decay time constant was set to 1.74 and 8.68 ms for AMPA and GABAA, respectively; the rise time constant for GABAA was set to 4.58 ms); all other variables were as in the Blue Brain simulations.
Estimating the accuracy of the reduced models
Cross-correlations were calculated between the spike trains of the detailed and the reduced models. The window size was 500 ms, and the bin size was 1 ms. The resulting cross-correlations were normalized by the number of spikes in the detailed model (Fig. 2c). ISIs were binned in windows of 21 ms to create the ISI distribution in Fig. 2d.
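A minimal sketch of this cross-correlation computation (spike times in ms; the brute-force pairing is for clarity, not efficiency):

```python
def cross_correlogram(ref, target, window=500.0, bin_ms=1.0):
    # Histogram of (target - ref) spike-time differences within +/- window,
    # normalized by the number of spikes in the reference (detailed) model.
    nbins = int(2 * window / bin_ms)
    counts = [0] * nbins
    for t_r in ref:
        for t_t in target:
            d = t_t - t_r
            if -window <= d < window:
                counts[int((d + window) / bin_ms)] += 1
    return [c / len(ref) for c in counts]
```

For identical spike trains, the zero-lag bin equals 1 under this normalization.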
SPIKE-synchronization measure is a parameter- and scale-free method that quantifies the degree of synchrony between two spike trains44. SPIKE-synchronization uses the relative number of quasi-simultaneous appearances of spikes in the spike trains. In this study, we used the Python implementation of this method64. To allow comparison to the literature, Supplementary Fig. 3 depicts three additional metrics against which to compare the performance of the detailed and the reduced models: Trace accuracy39, ISI distance44, and Γ coincidence factor65.
Comparison to other reduction algorithms
We compared Neuron_Reduce to two classical reduction algorithms (Supplementary Fig. 5):
Equivalent cable using the d^3/2 rule for reduction. Rall and Rinzel32 and Rinzel and Rall33 showed that, for idealized passive dendritic trees, the entire dendritic tree can be collapsed to a single equivalent cylinder that is analytically identical (from the point of view of the soma) to the detailed tree. However, neurons do not have ideal dendritic trees, mostly because dendritic terminations typically occur at different electrotonic distances from the soma. Nevertheless, it is possible to collapse any dendritic tree using a similar mapping (Rall's "d^3/2 rule") as in the idealized tree; this provides an "equivalent cable" (rather than an "equivalent cylinder") with a varying diameter for the whole dendritic tree (see details in Rall et al.48). The electrotonic distances to the soma of synapses and nonlinear dendritic mechanisms were computed in the original model, and then each synapse and mechanism was mapped to the corresponding segment in the "equivalent cable", preserving the electrotonic distance to the soma as in the original tree.
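The collapse step of the d^3/2 rule can be sketched as follows: at each electrotonic distance from the soma, the diameters of all parallel branches are combined into one equivalent diameter whose d^3/2 equals the sum of the branches' d^3/2 values.

```python
def equivalent_cable_diameter(branch_diams):
    # Rall's d^(3/2) rule: the equivalent cable diameter at a given
    # electrotonic distance is (sum_i d_i^(3/2))^(2/3).
    return sum(d ** 1.5 for d in branch_diams) ** (2.0 / 3.0)
```

For two equal daughters of unit diameter, the equivalent diameter is 2^(2/3) ≈ 1.587, matching Rall's branching condition.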
Mapping all synapses to the soma. Another recent reduction scheme was proposed where all dendritic synapses are mapped, after implementing cable filtering for each synapse, to the somatic compartment34. Here we used a modified version of this method. We used Neuron_Reduce to generate a multi-cylindrical model of the cell as in Fig. 1b. Then, all the synapses in the original tree were mapped to the model soma. To account for dendritic filtering, for each synapse, we multiplied the original synaptic conductance, gsyn, by the steady-state voltage attenuation factor from the original dendritic location, j, of the synapse to the soma. Specifically,
$$g_{{\mathrm{syn}}}^ \ast = g_{{\mathrm{syn}}} \ast \frac{{| {Z_{0,j}} |}}{{| {Z_{0,0}} |}} = g_{{\mathrm{syn}}} \ast \frac{{V_{0,j}}}{{V_{0,0}}},$$
where \(g_{{\mathrm{syn}}}^ \ast\) is the new synaptic weight for synapse j when placed at the soma of the reduced model.
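This scaling can be sketched in one function (illustrative):

```python
def soma_mapped_conductance(g_syn, Z0j, Z00):
    # Scale g_syn by the steady-state voltage attenuation |Z_{0,j}| / |Z_{0,0}|
    # from the synapse's dendritic location j to the soma.
    return g_syn * abs(Z0j) / abs(Z00)
```

Since the transfer resistance |Z0,j| never exceeds the somatic input resistance |Z0,0|, the soma-mapped conductance is always reduced relative to the original.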
Spatio-temporal patterns of synaptic activation
In Fig. 6, 12 synapses, placed at 25 µm intervals, were distributed on a stretch of one basal dendrite. The peak AMPA conductance per synapse was 5 nS. In cases where the synapses also had an NMDA component, the NMDA-based peak conductance was 3.55 nS. The synapses were activated in a specific temporal order with a time delay of 3.66 ms between them. This resulted in an input velocity of ~7 µm/ms for the sequential IN and OUT patterns in Fig. 6. In addition, the temporal order of synaptic activation was randomized and scored according to the directionality index52, which counts the number of swaps used by the bubble-sort algorithm to sort a specific temporal pattern into the IN pattern. In this measure, an IN pattern is attributed the value of 0 (no swaps) and the OUT pattern the value of 67 (67 swaps in bubble sort are required to "sort" the OUT pattern into the IN pattern52).
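The directionality index can be computed as the number of adjacent swaps bubble sort needs to turn a given activation order into the IN pattern; a sketch:

```python
def directionality_index(pattern, in_pattern):
    # Relabel each synapse by its position in the IN pattern, then count the
    # adjacent swaps bubble sort performs to sort the resulting sequence
    # (i.e., the number of inversions relative to the IN order).
    rank = {syn: i for i, syn in enumerate(in_pattern)}
    seq = [rank[s] for s in pattern]
    swaps, changed = 0, True
    while changed:
        changed = False
        for i in range(len(seq) - 1):
            if seq[i] > seq[i + 1]:
                seq[i], seq[i + 1] = seq[i + 1], seq[i]
                swaps += 1
                changed = True
    return swaps
```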
All simulations were performed using NEURON 7.4–7.720 running on the Blue Brain V supercomputer, based on the HPE SGI 8600 platform hosted at the Swiss National Computing Center in Lugano, Switzerland. Each compute node was composed of Intel Xeon 6140 CPUs @ 2.3 GHz and 384 GB DRAM. Analysis and simulation were conducted using Python, and visualization using Matplotlib66.
The Neuron_Reduce algorithm is publicly available on GitHub (http://github.com/orena1/neuron_reduce).
Further information on research design is available in the Nature Research Reporting Summary linked to this article.
All spike times and somatic membrane potentials presented in the article are available upon request.
The Neuron_Reduce algorithm and most of the models that were used in the paper are publicly available on GitHub (http://github.com/orena1/neuron_reduce). An interactive example is available as a live paper (https://humanbrainproject.github.io/hbp-bsp-live-papers/2020/amsalem_et_al_2020/amsalem_et_al_2020.html). Software used for visualization of neurons in Fig. 7 is available at https://github.com/BlueBrain/RTNeuron.
Rall, W. in Neural Theory Model (ed. Reiss, R. F.) 73–97 (Stanford University Press, Palo Alto, 1964).
Rall, W. Distinguishing theoretical synaptic potentials computed for different soma-dendritic distributions of synaptic input. J. Neurophysiol. 30, 1138–1168 (1967).
Rapp, M., Yarom, Y. & Segev, I. Modeling back propagating action potential in weakly excitable dendrites of neocortical pyramidal cells. Proc. Natl Acad. Sci. USA 93, 11985–11990 (1996).
Larkum, M. E., Nevian, T., Sandler, M., Polsky, A. & Schiller, J. Synaptic integration in tuft dendrites of layer 5 pyramidal neurons: a new unifying principle. Science 325, 756–760 (2009).
Hay, E., Hill, S., Schürmann, F., Markram, H. & Segev, I. Models of neocortical layer 5b pyramidal cells capturing a wide range of dendritic and perisomatic active properties. PLoS Comput. Biol. 7, e1002107 (2011).
Almog, M. & Korngreen, A. A quantitative description of dendritic conductances and its application to dendritic excitation in layer 5 pyramidal neurons. J. Neurosci. 34, 182–196 (2014).
Segev, I. Single neurone models: oversimple, complex and reduced. Trends Neurosci. 15, 414–421 (1992).
Stuart, G. & Spruston, N. Determinants of voltage attenuation in neocortical pyramidal neuron dendrites. J. Neurosci. 18, 3501–3510 (1998).
Magee, J. C. & Cook, E. P. Somatic EPSP amplitude is independent of synapse location in hippocampal pyramidal neurons. Nat. Neurosci. 3, 895 (2000).
Poirazi, P., Brannon, T. & Mel, B. W. Arithmetic of subthreshold synaptic summation in a model CA1 pyramidal cell. Neuron 37, 977–987 (2003).
Gouwens, N. W. et al. Systematic generation of biophysically detailed models for diverse cortical neuron types. Nat. Commun. 9, 710 (2018).
Bahl, A., Stemmler, M. B., Herz, A. V. M. & Roth, A. Automated optimization of a reduced layer 5 pyramidal cell model based on experimental data. J. Neurosci. Methods 210, 22–34 (2012).
Migliore, M., Hoffman, D. A., Magee, J. C. & Johnston, D. Role of an A-type K+ conductance in the back-propagation of action potentials in the dendrites of hippocampal pyramidal neurons. J. Comput. Neurosci. 7, 5–15 (1999).
Segev, I. & London, M. A theoretical view of passive and active dendrites. Dendrites 376, xxi (1999).
Eyal, G. et al. Human cortical pyramidal neurons: from spines to spikes via models. Front. Cell. Neurosci. 12, 181 (2018).
Egger, R., Dercksen, V. J., Udvary, D., Hege, H.-C. & Oberlaender, M. Generation of dense statistical connectomes from sparse morphological data. Front. Neuroanat. 8, 129 (2014).
Markram, H. et al. Reconstruction and simulation of neocortical microcircuitry. Cell 163, 456–492 (2015).
Hawrylycz, M. et al. Inferring cortical function in the mouse visual system through large-scale systems neuroscience. Proc. Natl Acad. Sci. USA 113, 7337–7344 (2016).
Arkhipov, A. et al. Visual physiology of the layer 4 cortical circuit in silico. PLoS Comput. Biol. https://doi.org/10.1371/journal.pcbi.1006535 (2018).
Carnevale, N. T. & Hines, M. L. The NEURON Book (Cambridge University Press, Cambridge, 2006).
Bower, J. M. in The Book of Genesis 195–201 (Springer, New York, 1998).
Gleeson, P., Steuber, V. & Silver, R. A. neuroConstruct: a tool for modeling networks of neurons in 3D space. Neuron 54, 219–235 (2007).
Davison, A. P. PyNN: a common interface for neuronal network simulators. Front. Neuroinform. https://doi.org/10.3389/neuro.11.011.2008 (2008).
Gratiy, S. L. et al. BioNet: a Python interface to NEURON for modeling large-scale networks. PLoS ONE 13, e0201630 (2018).
Kozloski, J. & Wagner, J. An ultrascalable solution to large-scale neural tissue simulation. Front. Neuroinform. 5, 15 (2011).
Dura-Bernal, S. et al. NetPyNE, a tool for data-driven multiscale modeling of brain circuits. eLife https://doi.org/10.7554/eLife.44494 (2019).
Cantarelli, M. et al. Geppetto: a reusable modular open platform for exploring neuroscience data and models. Philos. Trans. R. Soc. Ser. B 373, 20170380 (2018).
Van Geit, W. et al. BluePyOpt: leveraging open source software and cloud infrastructure to optimise model parameters in neuroscience. Front. Neuroinform. 10, 17 (2016).
Schemmel, J., Fieres, J. & Meier, K. in IJCNN 2008 (IEEE World Congress on Computational Intelligence). IEEE International Joint Conference on Neural Networks 431–438 (IEEE, 2008).
Aamir, S. A., Muller, P., Hartel, A., Schemmel, J. & Meier, K. A highly tunable 65-nm CMOS LIF neuron for a large scale neuromorphic system. in ESSCIRC Conference 2016: 42nd European Solid-State Circuits Conference 71–74 (IEEE, 2016). https://doi.org/10.1109/ESSCIRC.2016.7598245.
Rall, W. Electrophysiology of a dendritic neuron model. Biophys. J. 2, 145–167 (1962).
Rall, W. & Rinzel, J. Branch input resistance and steady attenuation for input to one branch of a dendritic neuron model. Biophys. J. 13, 648–687 (1973).
Rinzel, J. & Rall, W. Transient response in a dendritic neuron model for current injected at one branch. Biophys. J. 14, 759–790 (1974).
Rössert, C. et al. Automated Point-Neuron Simplification of Data-Driven Microcircuit Models (2016). http://arXiv.org/1604.00087.
Stratford, K., Mason, A., Larkman, A., Major, G. & Jack, J. in The Computing Neuron (eds. Durbin, R., Miall, C. & Mitchison, G.) 296–321 (Addison-Wesley Longman Publishing Co., Inc. 1989).
Destexhe, A. Simplified models of neocortical pyramidal cells preserving somatodendritic voltage attenuation. Neurocomputing 38–40, 167–173 (2001).
Hendrickson, E. B., Edgerton, J. R. & Jaeger, D. The capabilities and limitations of conductance-based compartmental neuron models with reduced branched or unbranched morphologies and active dendrites. J. Comput. Neurosci. 30, 301–321 (2011).
Bush, P. C. & Sejnowski, T. J. Reduced compartmental models of neocortical pyramidal cells. J. Neurosci. Methods 46, 159–166 (1993).
Marasco, A., Limongiello, A. & Migliore, M. Fast and accurate low-dimensional reduction of biophysically detailed neuron models. Sci. Rep. 2, 928 (2012).
Hao, J., Wang, X.-d, Dan, Y., Poo, M.-m & Zhang, X.-h An arithmetic rule for spatial summation of excitatory and inhibitory inputs in pyramidal neurons. Proc. Natl Acad. Sci. USA 106, 21906–21911 (2009).
Marasco, A. et al. Using Strahler's analysis to reduce up to 200-fold the run time of realistic neuron models. Sci. Rep. 3, 2934 (2013).
Brown, S. A., Moraru, I. I., Schaff, J. C. & Loew, L. M. Virtual NEURON: a strategy for merged biochemical and electrophysiological modeling. J. Comput. Neurosci. 31, 385–400 (2011).
Koch, C. Biophysics of Computation: Information Processing in Single Neurons (Oxford University Press, 1999).
Kreuz, T., Mulansky, M. & Bozanic, N. SPIKY: a graphical user interface for monitoring spike train synchrony. J. Neurophysiol. 113, 3432–3445 (2015).
Kreuz, T., Bozanic, N. & Mulansky, M. SPIKE—synchronization: a parameter-free and time-resolved coincidence detector with an intuitive multivariate extension. BMC Neurosci. 16, P170 (2015).
Kreuz, T. Measures of spike train synchrony. Scholarpedia 6, 11934 (2011).
Satuvuori, E. & Kreuz, T. Which spike train distance is most suitable for distinguishing rate and temporal coding? J. Neurosci. Methods 299, 22–33 (2018).
Rall, W. et al. Matching dendritic neuron models to experimental data. Physiol. Rev. 72, S159–S186 (1992).
Parnas, I. & Segev, I. A mathematical model for conduction of action potentials along bifurcating axons. J. Physiol. 295, 323–343 (1979).
Larkum, M. E., Zhu, J. J. & Sakmann, B. A new cellular mechanism for coupling inputs arriving at different cortical layers. Nature 398, 338–341 (1999).
Anderson, J. C., Binzegger, T., Kahana, O., Martin, K. A. C. & Segev, I. Dendritic asymmetry cannot account for directional responses of neurons in visual cortex. Nat. Neurosci. 2, 820 (1999).
Branco, T., Clark, B. A. & Häusser, M. Dendritic discrimination of temporal input sequences in cortical neurons. Science 329, 1671–1675 (2010).
Ramaswamy, S. et al. The neocortical microcircuit collaboration portal: a resource for rat somatosensory cortex. Front. Neural Circuits 9, 44 (2015).
Lindroos, R. et al. Basal ganglia neuromodulation over multiple temporal and structural scales—simulations of direct pathway MSNs investigate the fast onset of dopaminergic effects and predict the role of Kv4.2. Front. Neural Circuits https://doi.org/10.3389/fncir.2018.00003 (2018).
Iavarone, E. et al. Experimentally-constrained biophysical models of tonic and burst firing modes in thalamocortical neurons. PLoS Comput. Biol. https://doi.org/10.1371/journal.pcbi.1006753 (2019).
Migliore, R. et al. The physiological variability of channel density in hippocampal CA1 pyramidal cells and interneurons explored using a unified data-driven modeling workflow. PLoS Comput. Biol. https://doi.org/10.1371/journal.pcbi.1006423 (2018).
Amsalem, O., Van Geit, W., Muller, E., Markram, H. & Segev, I. From neuron biophysics to orientation selectivity in electrically coupled networks of neocortical L2/3 large basket cells. Cereb. Cortex 26, 3655–3668 (2016).
Eyal, G. et al. Unique membrane properties and enhanced signal processing in human neocortical neurons. Elife 5, e16553 (2016).
Koch, C., Poggio, T. & Torres, V. Retinal ganglion cells: a functional interpretation of dendritic morphology. Philos. Trans. R. Soc. Ser. B 298, 227–263 (1982).
Smith, S. L., Smith, I. T., Branco, T. & Häusser, M. Dendritic spikes enhance stimulus selectivity in cortical neurons in vivo. Nature 503, 115–120 (2013).
Billeh, Y. N. et al. Systematic integration of structural and functional data into multi-scale models of mouse primary visual cortex. SSRN Electron. J. https://doi.org/10.2139/ssrn.3416643 (2019).
Rall, W. & Segev, I. in Voltage and Patch Clamping with Microelectrodes 191–215 (Springer, 2013). https://doi.org/10.1007/978-1-4614-7601-6_9.
Carnevale, N. T., Tsai, K. Y., Claiborne, B. J. & Brown, T. H. in Advances in Neural Information Processing Systems 7th edn (eds. Tesauro, G., Touretzky, D. S. & Leen, T. K.) 69–76 (MIT Press, 1995).
Mulansky, M. & Kreuz, T. PySpike—a Python library for analyzing spike train synchrony. SoftwareX 5, 183–189 (2016).
Jolivet, R., Lewis, T. J. & Gerstner, W. Generalized integrate-and-fire models of neuronal activity approximate spike trains of a detailed model to a high degree of accuracy. J. Neurophysiol. 92, 959–976 (2004).
Hunter, J. D. Matplotlib: a 2D graphics environment. Comput. Sci. Eng. 9, 90–95 (2007).
We thank Gal Eliraz for her early work on the reduction method and Mickey London for advising us along this project. This study received funding from the European Union's Horizon 2020 Framework Program for Research and Innovation under the Specific Grant Agreement No. 785907 (Human Brain Project SGA2), the ETH domain for the Blue Brain Project, the Gatsby Charitable Foundation, and the NIH Grant Agreement U01MH114812.
Department of Neurobiology, Hebrew University of Jerusalem, 9190401, Jerusalem, Israel
Oren Amsalem
, Guy Eyal
, Noa Rogozinski
& Idan Segev
Blue Brain Project, École polytechnique fédérale de Lausanne (EPFL), Campus Biotech, 1202, Geneva, Switzerland
Michael Gevaert
, Pramod Kumbhar
& Felix Schürmann
Edmond and Lily Safra Center for Brain Sciences, Hebrew University of Jerusalem, 9190401, Jerusalem, Israel
Idan Segev
I.S. proposed the principle theoretical idea for the Neuron_Reduce scheme. I.S., O.A., G.E. and N.R. extended the original idea, planned, and designed the study. O.A., G.E. and N.R. implemented the Neuron_Reduce simulations. F.S. and P.K. assisted with the detailed benchmarking of Neuron_Reduce. M.G. helped refactor the tool to increase its usability, maintainability, and comprehensibility. All authors wrote the manuscript.
Correspondence to Oren Amsalem.
Peer review information Nature Communications thanks the anonymous reviewers for their contribution to the peer review of this work. Peer reviewer reports are available.
Amsalem, O., Eyal, G., Rogozinski, N. et al. An efficient analytical reduction of detailed nonlinear neuron models. Nat Commun 11, 288 (2020). https://doi.org/10.1038/s41467-019-13932-6
Wu* , Li , and Liu: A Hierarchical Bilateral-Diffusion Architecture for Color Image Encryption
Volume 18, No 1 (2022), pp. 59 - 74
Menglong Wu*, Yan Li and Wenkai Liu
A Hierarchical Bilateral-Diffusion Architecture for Color Image Encryption
Abstract: During the last decade, the security of digital images has received considerable attention in various multimedia transmission schemes. However, many current cryptosystems adopt a single-layer permutation or diffusion algorithm, resulting in inadequate security. In response to this issue, a hierarchical bilateral-diffusion architecture for color image encryption is proposed, based on hyperchaotic systems and DNA (deoxyribonucleic acid) sequence operations. Primarily, two hyperchaotic systems are adopted and combined with a cipher-matrix generation algorithm to withstand exhaustive attacks. Further, the proposed architecture involves designing pixel-permutation, pixel-diffusion, and DNA-based block-diffusion algorithms, considering both system security and transmission efficiency. The pixel permutation reduces the correlation of adjacent pixels and provides good initial conditions for the subsequent diffusion procedures, while the diffusion architecture confuses the image matrix in both directions with ultra-low power consumption. The proposed system achieves a number of pixel change rate (NPCR) of 99.61% and a unified average changing intensity (UACI) of 33.46%, together with a low encryption time of 3.30 seconds, performing better than several current image encryption algorithms. The simulation results and security analysis demonstrate that the proposed mechanism resists various potential attacks with comparatively low computational time.
Keywords: Bilateral-Diffusion , Color Image Encryption , DNA Sequence Operation , Hyperchaotic System
With the rapid development of communication and networks, multimedia information transmission between various devices faces ever higher requirements, and the consequent security issues have attracted substantial attention in industry and academia. However, since digital images are characterized by high correlation between adjacent pixels, conventional encryption algorithms such as the Data Encryption Standard (DES), the Advanced Encryption Standard (AES), and the Rivest-Shamir-Adleman (RSA) algorithm are not well suited to image encryption [1-5]. Hence, it is vital to enhance the efficiency of image encryption by pursuing other security technologies.
Currently, chaotic systems are very well suited to cryptosystems and have become a major trend in image encryption algorithms [6,7]. Their particular attributes, including high aperiodicity and randomness and great sensitivity to initial values, make them a natural fit for image encryption, and a large number of research achievements have been reported in this area [8-10]. So far, however, chaotic systems have mainly been applied to produce pseudo-random number sequences for image encryption, indicating that other efficient encryption algorithms or techniques are required to confuse the plain image thoroughly. Meanwhile, since Adleman [11] first demonstrated DNA computing experimentally in 1994, a new image encryption approach, the DNA method, has been widely used. DNA-based image encryption methods are known for their advantages, such as massive parallelism, high storage density, and low energy consumption, which make them very appropriate for encrypting images with highly redundant information [12-15]. In addition, it is worth noting that encrypting images with DNA computing under a single DNA encoding rule introduces a large number of equivalent computations and leads to insufficient system security. Therefore, the combination of chaotic systems and DNA operations has aroused great interest in image encryption, revealing superior experimental results, and many scholars continue to work on this subject [16-21].
Nevertheless, some current image encryption algorithms combining DNA operations and chaotic systems are not powerful enough to resist differential attacks. For example, Liu et al. [14] proposed a color image encryption algorithm based on dynamic DNA and a 4D chaotic system. However, the algorithm suffers from imperfect unified average changing intensity (UACI) values of 33.06%, 30.59%, and 27.60% for the R, G, and B channels, respectively, since the confusion process relies merely on the XOR operation and the diffusion process is executed in groups. Li et al. [21] proposed a color image encryption algorithm using a fractional-order 4D hyperchaotic system and DNA sequence operations. Although that algorithm employs DNA encoding, DNA complementary, and DNA addition operations, its security still needs improvement. Specifically, the information entropy of the cipher image is 7.9973 in the G channel and 7.9967 in the B channel, whereas many image encryption algorithms achieve information entropies above 7.9990. Meanwhile, the UACI of that algorithm in the R channel is 33.2483%, which is not close enough to the theoretical value of 33.4635%.
Based on the above discussion, this paper proposes a hierarchical bilateral-diffusion architecture for color image encryption based on two hyperchaotic systems and DNA sequence operations. Primarily, several cipher matrixes with high sensitivity are generated to resist exhaustive attacks by compounding and weighting the hyper-chaotic Lorenz system and Chen's hyper-chaotic system. Then, a pixel-permutation algorithm disorders the plain image and provides favorable conditions for the subsequent diffusion procedures. Next, a forward pixel-diffusion algorithm alters each pixel value of the image matrix. Afterwards, a DNA-based bilateral block-diffusion algorithm performs block-level diffusion with ultra-low power consumption, using DNA encoding, DNA decoding, the DNA complementary operation, and DNA algebraic operations. Finally, backward pixel-diffusion is executed to further enhance system security and obtain the cipher image. The experimental results in Sections 4 and 5 verify the practicability and security of the proposed image encryption mechanism, which shows a strong ability to resist statistical attacks, differential attacks, and brute-force attacks, together with low computational time.
The rest of the paper is organized as follows. The preliminary work, including chaotic systems and DNA sequence operations, is presented in Section 2. The proposed hierarchical bilateral-diffusion architecture for color image encryption is described in Section 3. The simulation results and computational speed analysis are exhibited in Section 4. The security analysis is given in Section 5. Finally, the conclusion is given in Section 6.
2. Preliminary Work
2.1 Chaotic System
Chaotic systems are widely applied in image encryption algorithms due to their aperiodicity, randomness, and great sensitivity to initial values. In the proposed system, two 4D hyperchaotic systems, the hyper-chaotic Lorenz system and Chen's hyper-chaotic system, are combined for even better randomness. The hyper-chaotic Lorenz system is described in Eq. (1), where a,b,c,r are four control parameters. When [TeX:] $$a=10, b=8 / 3, c=28 \text { and }-1.52 \leq r \leq-0.06,$$ the system is in a hyperchaotic state. Meanwhile, Chen's hyper-chaotic system is defined in Eq. (2). The five control parameters a,b,c,d,k are set as [TeX:] $$a=36, b=3, c=28, d=-16 \text { and }-0.7 \leq k \leq 0.7$$ to guarantee that the system is in a hyperchaotic state.
[TeX:] $$\left\{\begin{array}{c} \dot{x}=a(y-x)+w \\ \dot{y}=c x-y-x z \\ \dot{z}=x y-b z \\ \dot{w}=-y z+r w \end{array}\right.$$
[TeX:] $$\left\{\begin{array}{c} \dot{x}=a(y-x) \\ \dot{y}=-x z+d x+c y-q \\ \dot{z}=x y-b z \\ \dot{q}=x+k \end{array}\right.$$
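The paper later "iterates" these continuous-time systems a fixed number of times, which in practice means stepping a numerical integrator. As a minimal sketch (not the authors' code), the following applies forward-Euler steps to the hyper-chaotic Lorenz system of Eq. (1); the step size `h` is an illustrative assumption.

```python
# Sketch: forward-Euler integration of the hyper-chaotic Lorenz system, Eq. (1).
# The step size h = 0.001 is an assumption for illustration only.

def lorenz_hyper_step(x, y, z, w, a=10.0, b=8.0 / 3.0, c=28.0, r=-1.0, h=0.001):
    """One Euler step of Eq. (1)."""
    dx = a * (y - x) + w
    dy = c * x - y - x * z
    dz = x * y - b * z
    dw = -y * z + r * w
    return x + h * dx, y + h * dy, z + h * dz, w + h * dw


def lorenz_hyper_sequence(state, n, **kw):
    """Iterate Eq. (1) n times, returning the list of visited states."""
    seq = []
    for _ in range(n):
        state = lorenz_hyper_step(*state, **kw)
        seq.append(state)
    return seq
```

Starting from the initial values of Table 3, `lorenz_hyper_sequence((26.7837, -8.9013, 59.4985, -247.3752), 500)` would yield the kind of chaotic sequence consumed by the cipher-matrix generation of Section 3.1.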
2.2 DNA Sequence Operation
A DNA sequence consists of four nucleic-acid bases, namely A (adenine), C (cytosine), G (guanine), and T (thymine). DNA sequences satisfy the Watson-Crick complement rules, in which A and T are complementary, and C and G are complementary. On this basis, binary sequences and DNA sequences can be converted into each other under the DNA encoding/decoding rules defined in Table 1. As shown in Table 1, there are 8 encoding/decoding rules in total, and each DNA code is represented by a 2-bit binary sequence: 00, 01, 10, or 11.
The DNA encoding/decoding rules

Binary  Rule 1  Rule 2  Rule 3  Rule 4  Rule 5  Rule 6  Rule 7  Rule 8
00      A       A       C       G       C       G       T       T
01      C       G       A       A       T       T       C       G
10      G       C       T       T       A       A       G       C
11      T       T       G       C       G       C       A       A
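For illustration, the encoding/decoding of Table 1 can be sketched as follows. Each rule is written as a 4-letter string indexed by the 2-bit value; rules 7 and 8 are written so that every rule is a bijection consistent with the Watson-Crick pairing stated above. This is a hypothetical helper, not the authors' implementation.

```python
# Each rule string maps the 2-bit values 0..3 to a DNA base (Table 1).
RULES = ["ACGT", "AGCT", "CATG", "GATC", "CTAG", "GTAC", "TCGA", "TGCA"]


def encode_byte(value, rule):
    """Map an 8-bit pixel value to 4 DNA codes, most significant pair first."""
    table = RULES[rule - 1]
    return "".join(table[(value >> shift) & 0b11] for shift in (6, 4, 2, 0))


def decode_byte(dna, rule):
    """Inverse of encode_byte under the same rule."""
    table = RULES[rule - 1]
    value = 0
    for code in dna:
        value = (value << 2) | table.index(code)
    return value
```

For example, under rule 1 the byte 0b00011011 encodes to "ACGT", and decoding with the same rule recovers the byte.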
DNA complementary and algebraic operations are the core processes of DNA-based encryption algorithms. Since DNA complementary operation abides by the single mapping principle, the DNA complementary rules must satisfy the formulas defined in Eq. (3), where [TeX:] $$B\left(x_{i}\right)$$ is a base pair of [TeX:] $$x_{i}$$ and they are complementary.
[TeX:] $$\left\{\begin{array}{l} x_{i} \neq B\left(x_{i}\right) \neq B\left(B\left(x_{i}\right)\right) \neq B\left(B\left(B\left(x_{i}\right)\right)\right) \\ x_{i}=B\left(B\left(B\left(B\left(x_{i}\right)\right)\right)\right) \end{array}\right.$$
Based on Eq. (3), there are 6 available DNA complementary rules, as shown below:
[TeX:] $$\begin{aligned} &(A T)(T C)(C G)(G A),(A T)(T G)(G C)(C A),(A C)(C T)(T G)(G A), \\ &(A C)(C G)(G T)(T A),(A G)(G T)(T C)(C A),(A G)(G C)(C T)(T A) \end{aligned}$$
DNA algebraic operations are based on traditional binary algebraic operations, such as addition, subtraction, XOR, and XNOR. According to DNA encoding rule 1, the definitions of the DNA addition, subtraction, XOR, and XNOR operations on two DNA codes are given in Table 2.
The DNA algebraic rules of the addition (+), subtraction (-), XOR [TeX:] $$(\oplus)$$ and XNOR [TeX:] $$(\odot)$$ operations (rows give the first operand; within each operation, columns give the second operand in the order A, G, C, T)

    Addition      Subtraction   XOR           XNOR
    A  G  C  T    A  G  C  T    A  G  C  T    A  G  C  T
A   A  G  C  T    A  T  C  G    A  G  C  T    T  C  G  A
G   G  C  T  A    G  A  T  C    G  A  T  C    C  T  A  G
C   C  T  A  G    C  G  A  T    C  T  A  G    G  A  T  C
T   T  A  G  C    T  C  G  A    T  C  G  A    A  G  C  T
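The entries of Table 2 coincide with 2-bit modular arithmetic under the numeric assignment A=00, G=01, C=10, T=11; the following sketch (an illustration, not the authors' code) uses that assignment.

```python
# DNA algebraic operations of Table 2 as 2-bit arithmetic.
# Assumed numeric assignment: A=00, G=01, C=10, T=11 (reproduces Table 2).
VAL = {"A": 0, "G": 1, "C": 2, "T": 3}
SYM = "AGCT"


def dna_add(x, y):
    return SYM[(VAL[x] + VAL[y]) % 4]


def dna_sub(x, y):
    return SYM[(VAL[x] - VAL[y]) % 4]


def dna_xor(x, y):
    return SYM[VAL[x] ^ VAL[y]]


def dna_xnor(x, y):
    # XNOR is the bitwise complement of XOR on the 2-bit values.
    return SYM[(VAL[x] ^ VAL[y]) ^ 0b11]
```

One can check a few cells against the table, e.g. G + C = T and A - G = T.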
3. Proposed Methodology
In this paper, a hierarchical bilateral-diffusion architecture is proposed for color image encryption, utilizing the hyperchaotic system and DNA operation. The concrete design of the proposed mechanism consists of three main parts: (1) cipher matrixes generation, (2) pixel-permutation and pixel-diffusion algorithm, and (3) bilateral block-diffusion algorithm based on DNA sequence operation, which will be further discussed as follows.
3.1 Cipher Matrixes Generation
In all cryptosystems, the cipher keys must be long and sensitive enough to withstand exhaustive attacks. In this paper, since ten cipher matrixes are required for image encryption and decryption, two 4D hyperchaotic systems, the hyper-chaotic Lorenz system and Chen's hyper-chaotic system, are utilized to satisfy the requirements of high randomness and sensitivity. The design concept of the cipher-matrix generation is shown in Fig. 1: the chaotic sequences are iterated through the two hyperchaotic systems, and the intermediate variables are integrated into cipher matrixes by a specific generation algorithm. Suppose the plain image is of size [TeX:] $$M \times N$$ and the number of DNA blocks is t; the ten cipher matrixes are then generated by the following steps.
The design concept of cipher matrixes generation.
Step 1: Take four initial values [TeX:] $$x_{0}, y_{0}, z_{0}, w_{0}$$ as inputs of hyper-chaotic Lorenz system with control parameters [TeX:] $$r_{1}, r_{2}$$ and iterate for [TeX:] $$r_{1}+r_{2}$$ times to gain four chaotic values, denoted as [TeX:] $$x^{\prime}, y^{\prime}, z^{\prime}, w^{\prime}.$$
Step 2: Plus four initial values [TeX:] $$m_{0}, n_{0}, p_{0}, k_{0}$$ with chaotic values [TeX:] $$x^{\prime}, y^{\prime}, z^{\prime}, w^{\prime}$$ and take them as inputs of Chen's hyper-chaotic system with control parameters [TeX:] $$t_{1}, t_{2}, t_{3}.$$ Then, iterate them for [TeX:] $$t_{1}+t_{2}+t_{3}$$ times.
Step 3: Continually put the obtained chaotic values into hyper-chaotic Lorenz system and Chen's hyper-chaotic system in sequence, where the intermediate sequences of hyper-chaotic Lorenz system is denoted as [TeX:] $$x_{\mathrm{i}}, y_{\mathrm{i}}, z_{\mathrm{i}}, w_{\mathrm{i}}$$ and the intermediate sequences of Chen's hyper-chaotic system is denoted as [TeX:] $$m_{\mathrm{i}}, n_{\mathrm{i}}, p_{\mathrm{i}}, k_{\mathrm{i}}.$$
Step 4: Iterate the chaotic sequences for 500+8MN times and generate four cipher matrixes in size of [TeX:] $$M \times 4 N$$ under the definitions in Eqs. (5)–(8).
[TeX:] $$A_{i, j}=\left(\frac{1-r_{1}}{t_{1}+t_{2}} \times x_{(2 i+1) \times N-(i+1) \times j}+\frac{r_{1} r_{2}}{t_{2}+t_{3}} \times m_{(2 i+1) \times N-(i+1) \times j}\right) \times 10^{14}, \bmod 256$$
[TeX:] $$B_{i, j}=\left(\frac{r_{1} r_{2}}{t_{1}+t_{3}} \times y_{(2 i+1) \times N-(i+1) \times j}+\frac{1-r_{2}}{t_{1}+t_{2}} \times n_{(2 i+1) \times N-(i+1) \times j}\right) \times 10^{14}, \bmod 256$$
[TeX:] $$C_{i, j}=\left(\frac{1-r_{2}}{t_{1}+t_{2}} \times z_{(2 i+1) \times N-(i+1) \times j}+\frac{1-r_{1}}{t_{2}+t_{3}} \times p_{(2 i+1) \times N-(i+1) \times j}\right) \times 10^{14}, \bmod 256$$
[TeX:] $$D_{i, j}=\left(\frac{1-r_{2}}{t_{2}+t_{3}} \times w_{(2 i+1) \times N-(i+1) \times j}+\frac{r_{1} r_{2}}{t_{1}+t_{3}} \times k_{(2 i+1) \times N-(i+1) \times j}\right) \times 10^{14}, \bmod 8$$
Step 5: Divide [TeX:] $$A_{M \times 4 N}$$ into four pixel-permutation cipher matrixes, [TeX:] $$X_{M \times N}, Y_{M \times N}, Z_{M \times N}, H_{M \times N};$$ then reshape [TeX:] $$B_{M \times 4 N} \text { and } C_{M \times 4 N}$$ into two pixel-diffusion cipher matrixes of size [TeX:] $$2 M \times 2 N,$$ denoted [TeX:] $$U_{2 M \times 2 N} \text { and } V_{2 M \times 2 N} ; \text { divide } D_{M \times 4 N}$$ into the DNA encoding cipher matrix [TeX:] $$E_{1 \times t},$$ the DNA decoding cipher matrix [TeX:] $$F_{1 \times \mathrm{t}},$$ the DNA complementary cipher matrix [TeX:] $$S_{1 \times \mathrm{t}}$$ and the DNA algebraic cipher matrix [TeX:] $$T_{1 \times t}.$$
Step 6: Apply a modulus operation to [TeX:] $$S_{1 \times \mathrm{t}} \text { and } T_{1 \times \mathrm{t}}$$ so that the entries of [TeX:] $$S_{1 \times \mathrm{t}}$$ lie in the range [0,5] and those of [TeX:] $$T_{1 \times t}$$ lie in the range [0,3].
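The element rule of Eq. (5) can be sketched as follows (Eqs. (6)-(8) differ only in the weights). This hypothetical helper assumes the chaotic sequences are stored as 0-indexed Python lists long enough for the index expression.

```python
# Sketch of Eq. (5): one entry of cipher matrix A is a weighted mix of a
# Lorenz coordinate and a Chen coordinate, scaled by 10**14 and reduced
# modulo 256. x_seq and m_seq are assumed precomputed chaotic sequences.

def cipher_entry(i, j, x_seq, m_seq, r1, r2, t1, t2, t3, N, mod=256):
    idx = (2 * i + 1) * N - (i + 1) * j          # index rule from Eq. (5)
    mix = ((1 - r1) / (t1 + t2)) * x_seq[idx] \
        + ((r1 * r2) / (t2 + t3)) * m_seq[idx]
    return int(mix * 10**14) % mod
```

The scaling by 10^14 matches the stated key accuracy, so the low-order digits of the chaotic values dominate the resulting byte.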
3.2 Pixel-Permutation and Pixel-Diffusion Algorithm
Permutation and diffusion are two vital operations in an image encryption algorithm. In the designed system, a pixel-permutation algorithm confuses the order of the pixels in the plain image, and the pixel-diffusion algorithms alter the pixel values of the image matrix; these are further combined with the DNA-based block-diffusion algorithm for strong resistance to statistical and differential attacks.
The designed pixel-permutation algorithm acts as the primary confusion step for the plain image: it exchanges the pixels at coordinates (i,j) and (m,n) using the cipher matrixes [TeX:] $$X_{M \times N}, Y_{M \times N}, Z_{M \times N},H_{M \times N}.$$ The exchange rules for the two pixels are defined in Eqs. (9) and (10). All pixel values are thus entirely scrambled by executing the algorithm repeatedly from coordinate (1,1) to (M,N).
[TeX:] $$m=-\ln \frac{1}{\sqrt{X_{i, j}+2}} \times Y_{i, j}+e^{\sqrt{Z_{i, j}+100}}+H_{i, j}, \bmod M$$
[TeX:] $$n=-\ln \frac{1}{\sqrt{Z_{i, j}+2}} \times H_{i, j}+e^{\sqrt{X_{i, j}+100}}+Y_{i, j}, \bmod N$$
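A minimal sketch of this permutation step (using 0-based indices for convenience, whereas Eqs. (9)-(10) are written 1-based) might look as follows; it is an illustration, not the authors' implementation.

```python
# Sketch of the permutation rule of Eqs. (9)-(10): for each source position
# (i, j), a target (m, n) is derived from cipher matrices X, Y, Z, H and the
# two pixels are swapped. 0-based indexing is an assumption of this sketch.
import math


def permute_target(i, j, X, Y, Z, H, M, N):
    m = int(-math.log(1.0 / math.sqrt(X[i][j] + 2)) * Y[i][j]
            + math.exp(math.sqrt(Z[i][j] + 100)) + H[i][j]) % M
    n = int(-math.log(1.0 / math.sqrt(Z[i][j] + 2)) * H[i][j]
            + math.exp(math.sqrt(X[i][j] + 100)) + Y[i][j]) % N
    return m, n


def permute(image, X, Y, Z, H):
    """Scan (0,0)..(M-1,N-1) and swap each pixel with its derived target."""
    M, N = len(image), len(image[0])
    for i in range(M):
        for j in range(N):
            m, n = permute_target(i, j, X, Y, Z, H, M, N)
            image[i][j], image[m][n] = image[m][n], image[i][j]
    return image
```

Because every step is a swap, the multiset of pixel values is preserved, which is exactly what a permutation stage requires.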
On the other hand, pixel-diffusion algorithms play a vital role in resisting differential attacks, since a change in even a single pixel then influences many others. Therefore, two pixel-diffusion algorithms, pixel-diffusion I and pixel-diffusion II, are introduced; they scan the image matrix in opposite directions. Specifically, the pixel-diffusion I algorithm is a forward-diffusion algorithm that starts from the first element and alters the remaining pixel values in sequence. Suppose the input image matrix is [TeX:] $$P_{M \times N};$$ the image matrix obtained after the pixel-diffusion I algorithm, denoted [TeX:] $$I_{M \times N},$$ can be calculated by Eq. (11), where [TeX:] $$U_{2 M \times 2 N}$$ is the cipher matrix and a,b are two cipher keys. The diffusion operation is accomplished by executing the algorithm from coordinate (1,1) to (M,N).
[TeX:] $$\left\{\begin{array}{l} I_{1,1}=P_{1,1} \oplus U_{a, b} \oplus U_{b, a} \\ I_{1, j}=P_{1, j} \oplus U_{1, j+a} \oplus I_{1, j-1} \\ I_{i, 1}=P_{i, 1} \oplus U_{i+b, 1} \oplus I_{i-1,1} \\ I_{i, j}=P_{i, j} \oplus U_{i+a, j+b} \oplus I_{i-1, j} \oplus I_{i, j-1} \end{array}\right.$$
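The forward diffusion of Eq. (11) can be sketched directly (0-based indices here, versus the 1-based indices of the equation); this is an illustrative sketch, not the authors' code.

```python
# Sketch of the forward pixel-diffusion of Eq. (11).
# U is the 2M x 2N cipher matrix; a, b are the two integer cipher keys.

def pixel_diffusion_forward(P, U, a, b):
    M, N = len(P), len(P[0])
    I = [[0] * N for _ in range(M)]
    for i in range(M):
        for j in range(N):
            if i == 0 and j == 0:
                I[i][j] = P[i][j] ^ U[a][b] ^ U[b][a]
            elif i == 0:                      # first row: chain along j
                I[i][j] = P[i][j] ^ U[i][j + a] ^ I[i][j - 1]
            elif j == 0:                      # first column: chain along i
                I[i][j] = P[i][j] ^ U[i + b][j] ^ I[i - 1][j]
            else:                             # interior: chain in both directions
                I[i][j] = P[i][j] ^ U[i + a][j + b] ^ I[i - 1][j] ^ I[i][j - 1]
    return I
```

Since XOR is its own inverse and each output pixel depends only on already-computed outputs, decryption can recover P by traversing in the same order and XOR-ing the same terms.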
Conversely, the pixel-diffusion II algorithm is a backward-diffusion algorithm that starts from the last element and alters the remaining pixel values in the reverse direction. Similarly, suppose the input image matrix is [TeX:] $$P_{M \times N};$$ the obtained image matrix [TeX:] $$I_{M \times N}$$ can be calculated by Eq. (12), where [TeX:] $$V_{2 M \times 2 N}$$ is the cipher matrix and c,d are two cipher keys. The diffusion operation is accomplished by executing the algorithm from coordinate (M,N) to (1,1).
[TeX:] $$\left\{\begin{array}{l} I_{M, N}=P_{M, N} \oplus V_{c, d} \oplus V_{d, c} \\ I_{M, j}=P_{M, j} \oplus V_{M, j+c} \oplus I_{M, j+1} \\ I_{i, N}=P_{i, N} \oplus V_{i+d, N} \oplus I_{i+1, N} \\ I_{i, j}=P_{i, j} \oplus V_{i+c, j+d} \oplus I_{i+1, j} \oplus I_{i, j+1} \end{array}\right.$$
3.3 Bilateral Block-Diffusion Algorithm based on DNA Operation
In fact, confusing the original image with an XOR-based diffusion algorithm alone is insufficient against strong differential attacks. Therefore, this paper presents a bilateral block-diffusion algorithm for better sensitivity, utilizing DNA encoding, DNA decoding, the DNA complementary operation, and DNA algebraic operations. Under the layered diffusion structure of forward pixel-diffusion, DNA-based bilateral block-diffusion, and backward pixel-diffusion, the image encryption mechanism shows an impressive ability to overcome statistical and differential attacks.
The proposed block-diffusion algorithm consists of a forward DNA diffusion algorithm and a backward DNA diffusion algorithm. It diffuses the image in two directions: the forward diffusion proceeds from block 1 to block t, and the backward diffusion from block t to block 1. The DNA complementary operation and the DNA algebraic operation are the core operations of DNA diffusion, acting on each current DNA block together with the previous DNA block. For instance, if the current block is block i, then the previous block in forward DNA diffusion is block i-1, and the previous block in backward DNA diffusion is block i+1.
The adopted DNA complementary operation is an iterated substitution operation whose results are determined by the DNA complementary rules in Eq. (4). Suppose the current DNA block is denoted K, the previous DNA block is denoted J, and the block after the DNA complementary operation is denoted R. Then, the calculation principle of a complete DNA complementary process is as follows:
[TeX:] $$R_{i, j}=\left\{\begin{array}{l} K_{i, j}, \text { if } J_{i, j}=A \\ B\left(K_{i, j}\right), \text { if } J_{i, j}=T \\ B\left(B\left(K_{i, j}\right)\right), \text { if } J_{i, j}=C \\ B\left(B\left(B\left(K_{i, j}\right)\right)\right), \text { if } J_{i, j}=G \end{array}\right.$$
where [TeX:] $$K_{i, j}, J_{i, j}, R_{i, j}$$ are the elements in block K,J,R respectively, and [TeX:] $$B\left(K_{i, j}\right)$$ represents a one-time complementary operation to [TeX:] $$K_{i, j}.$$ In the designed algorithm, cipher matrix [TeX:] $$S_{1 \times \mathrm{t}}$$ is required to select the complementary rule, and the elements in block J are used to decide the iterations of complementary substitution.
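As a sketch of Eq. (13), the six rules of Eq. (4) can be stored as 4-cycles and the element of J used to choose how many times the complement is iterated. The rule index s (0..5, consistent with the range of S from Step 6) selecting the s-th rule of Eq. (4) in listed order is an assumption of this sketch.

```python
# The six legal complementary rules of Eq. (4), written as cycles:
# each string x0 x1 x2 x3 encodes B(x0)=x1, B(x1)=x2, B(x2)=x3, B(x3)=x0.
COMP_RULES = ["ATCG", "ATGC", "ACTG", "ACGT", "AGTC", "AGCT"]

# Number of complement iterations selected by the element of J, per Eq. (13).
ITERATIONS = {"A": 0, "T": 1, "C": 2, "G": 3}


def complement(code, s):
    """One application of B under complementary rule s."""
    cycle = COMP_RULES[s]
    return cycle[(cycle.index(code) + 1) % 4]


def comp_substitute(current, previous, s):
    """Apply Eq. (13) element-wise to two equal-length DNA strings K and J."""
    out = []
    for k, j in zip(current, previous):
        for _ in range(ITERATIONS[j]):
            k = complement(k, s)
        out.append(k)
    return "".join(out)
```

Because every rule is a 4-cycle, applying B four times returns the original code, matching the second constraint of Eq. (3).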
On the other hand, the adopted DNA algebraic operation is a subsequent process to DNA complementary operation, which alters the elements in block R with block J. The DNA algebraic rule is decided by the elements in the cipher matrix [TeX:] $$T_{1 \times \mathrm{t}}, \text { where } T_{i, j}=0$$ represents addition operation, [TeX:] $$T_{i, j}=1$$ represents subtraction operation, [TeX:] $$T_{i, j}=2$$ represents XOR operation and [TeX:] $$T_{i, j}=3$$ represents XNOR operation.
3.4 The Intact Steps of Proposed Image Encryption Mechanism
The proposed encryption mechanism for a color image is described in Fig. 2, involving one pixel-permutation algorithm, two pixel-diffusion algorithms and the DNA based bilateral block-diffusion algorithm. The complete image encryption steps are illustrated as follows.
The constitution of the proposed color image encryption mechanism.
Step 1: Input cipher keys [TeX:] $$x_{0}, y_{0}, z_{0}, w_{0}, m_{0}, n_{0}, p_{0}, k_{0}, r_{1}, r_{2}, t_{1}, t_{2}, t_{3}$$ to gain ten cipher matrixes.
Step 2: Separate the plain image into R, G, B channel matrixes in the size of [TeX:] $$M \times N.$$
Step 3: Execute pixel-permutation algorithm to the image matrix from coordinate (1,1) to (M,N), through cipher matrixes [TeX:] $$X_{M \times N}, Y_{M \times N}, Z_{M \times N}, H_{M \times N};$$ Then, run pixel-diffusion I algorithm to the obtained matrix from coordinate (1,1) to (M,N), through cipher matrix [TeX:] $$U_{2 M \times 2 N}$$ and cipher keys a,b.
Step 4: Proceed with the DNA-based forward block-diffusion algorithm as follows: partition the image matrix into t blocks and convert them into DNA blocks under the encoding rules defined in matrix [TeX:] $$E_{1 \times t};$$ keep the first DNA block unchanged; for each of the remaining t-1 blocks, apply the DNA complementary operation and the DNA algebraic operation with the previous block sequentially, based on cipher matrixes [TeX:] $$S_{1 \times \mathrm{t}} \text { and } T_{1 \times \mathrm{t}};$$ then convert all the DNA blocks into a decimal matrix with the decoding rules defined in matrix [TeX:] $$\mathrm{F}_{1 \times \mathrm{t}}.$$
Step 5: Similarly, proceed with the DNA-based backward block-diffusion algorithm as follows: partition and convert the matrix into t DNA blocks with matrix [TeX:] $$\mathrm{E}_{1 \times \mathrm{t}},$$ keeping block t unchanged; from block t-1 down to block 1, execute the DNA complementary operation and the DNA algebraic operation based on cipher matrixes [TeX:] $$S_{1 \times \mathrm{t}} \text { and } T_{1 \times \mathrm{t}};$$ then convert the DNA blocks into a decimal matrix with matrix [TeX:] $$\mathrm{F}_{1 \times \mathrm{t}}.$$
Step 6: From coordinate (M,N) to (1,1), apply the pixel-diffusion II algorithm to the matrix using cipher matrix [TeX:] $$V_{2 M \times 2 N}$$ and cipher keys c,d.
Step 7: For each component matrix of the R, G, B channels, repeat the transformations of steps 3 to 6. Then, merge the three channel matrixes into the final cipher image.
4. Experimental Results and Computational Speed Analysis
4.1 Experimental Results of Image Encryption and Decryption
The performance of the proposed image encryption mechanism is tested in this part. Four color images are used to test encryption and decryption: "Lena" of size [TeX:] $$512 \times 512 \times 3,$$ "street" of size [TeX:] $$1280 \times 720 \times 3,$$ "waterfall" of size [TeX:] $$800 \times 600 \times 3,$$ and "mountain" of size [TeX:] $$1024 \times 680 \times 3,$$ shown in Fig. 3(a)–3(d), respectively. The experiments are implemented in MATLAB R2018b on a computer with an Intel Core i5 processor and 8 GB of RAM running Windows 10.
The initial values and other control parameters are described in Table 3, where [TeX:] $$x_{0}, y_{0}, z_{0}, w_{0}$$ are the four initial values of the hyper-chaotic Lorenz system, [TeX:] $$m_{0}, n_{0}, p_{0}, k_{0}$$ are the four initial values of Chen's hyper-chaotic system, [TeX:] $$r_{1}, r_{2}, t_{1}, t_{2}, t_{3}$$ are the control parameters of the two chaotic systems, and a,b,c,d are the parameters of the two pixel-diffusion algorithms. The encryption results for the four plain images are shown in Fig. 3. It can be seen that the cipher images are unrecognizable and show no correlation with the plain images. Moreover, all cipher images are decrypted successfully. Hence, the feasibility of the proposed image encryption algorithm is verified.
Specific parameters of the simulation
Chaotic systems [TeX:] $$\begin{gathered} x_{0}=26.7837, y_{0}=-8.9013, z_{0}=59.4985, w_{0}=-247.3752, \\ m_{0}=37.5082, n_{0}=-16.3594, p_{0}=69.4987, k_{0}=137.4891, \\ r_{1}=58, r_{2}=231, t_{1}=48, t_{2}=9, t_{3}=168 \end{gathered}$$
Pixel-diffusion algorithms a=78,b=215,c=97,d=38
Experimental results of image encryption for different color images. (a–d) Four plain images. (e–h) The corresponding cipher images. (i–l) The corresponding decrypted images.
4.2 Computational Complexity Analysis
The computational complexity greatly influences the computational speed of both the encryption and decryption processes. Therefore, to verify that the proposed mechanism is efficient enough for color image encryption, this section analyzes the time complexity of the encryption process. First, the input color image of size [TeX:] $$M \times N$$ is separated into three pixel matrixes, so the time complexity is [TeX:] $$O(3 \times M \times N).$$ The pixel-permutation process exchanges the pixel values in each of the three channel matrixes, so its time complexity is also [TeX:] $$O(3 \times M \times N).$$ The pixel-diffusion process alters the pixel values in the three matrixes, so the time complexity of both the pixel-diffusion I and pixel-diffusion II algorithms is [TeX:] $$O(3 \times M \times N).$$ For the DNA-based block-diffusion process, the time complexity of DNA encoding/decoding is [TeX:] $$O(12 \times M \times N),$$ since 4 DNA codes are needed to represent one pixel value. Similarly, the time complexity of the DNA complementary and algebraic operations is also [TeX:] $$O(12 \times M \times N),$$ since these operations act on the DNA codes produced by DNA encoding. Thus, the overall time complexity of the proposed image encryption mechanism is [TeX:] $$O(12 \times M \times N).$$
4.3 Computational Speed Performance
Computational time is a significant indicator in the image encryption area. For a well-designed image encryption algorithm, the computational time should be as low as possible while guaranteeing system security. In this part, the image encryption algorithm was executed 100 times on an image of size [TeX:] $$512 \times 512 \times 3,$$ yielding an average encryption time of 3.30 seconds. Since the bilateral-diffusion architecture only involves low-cost operations such as addition, subtraction, XOR, and XNOR, the proposed algorithm shows excellent speed performance. Besides, the design of the DNA-based block diffusion also saves considerable computing time and achieves better encryption efficiency. Table 4 gives the experimental results and compares them with other algorithms [2,4,13,22] in computational time, indicating that the proposed algorithm achieves better speed performance.
Execution time under the proposed method and the comparison with other algorithms
Execution time (s)
Proposed method 3.30
Kang and Guo [2] 9.0016
Shakiba [4] 13.9
Rehman et al. [13] 5.47
Wu et al. [22] 3.76
5. Security Analysis
5.1 Histogram Analysis
The image histogram directly reveals the distribution of the pixel values. The histograms of the plain image and the cipher image in the R, G, B channels are given in Fig. 4. The histograms of the plain images tend toward specific shapes, with pixel values concentrated in several fixed intervals, while the histograms of the cipher images show excellent uniformity. Hence, it is exceedingly difficult to extract related information and recover the plain image. Therefore, the proposed algorithm can overcome statistical attacks based on histogram analysis.
The image histograms of (a–c) plain image and (d–f) cipher image in R, G, B channel, respectively.
5.2 Correlation Analysis
Correlation analysis is another statistical attack, which exploits the relation between adjacent pixels in the cipher image to recover the plain image. Since the correlations of adjacent pixels in plain images are usually very large in the horizontal, vertical, and diagonal directions, reducing this correlation is vital for an image encryption algorithm. In this part, 10,000 pairs of adjacent pixels are selected randomly to evaluate the correlations of the plain and cipher images. Fig. 5 gives the distribution diagrams of adjacent pixels in the plain image and the cipher image. The correlations of adjacent pixels in the plain image are very strong, with distributions close to a straight line, whereas the distribution diagrams of the cipher image are highly disordered, indicating that the correlation in the cipher image is very small.
Furthermore, the correlation coefficient is a vital criterion for quantifying the correlation of images. Eqs. (14)–(17) give the formulas of the correlation coefficient, where x,y are adjacent pixel values and N is the sample size.
[TeX:] $$r_{x y}=\frac{\operatorname{cov}(x, y)}{\sqrt{D(x)} \sqrt{D(y)}}$$
[TeX:] $$\operatorname{cov}(x, y)=\frac{1}{N} \sum_{i=1}^{N}\left(x_{i}-E(x)\right)\left(y_{i}-E(y)\right)$$
[TeX:] $$E(x)=\frac{1}{N} \sum_{i=1}^{N} x_{i}$$
[TeX:] $$D(x)=\frac{1}{N} \sum_{i=1}^{N}\left(x_{i}-E(x)\right)^{2}$$
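Eqs. (14)-(17) translate directly into code; the following is a minimal sketch for a sample of adjacent-pixel pairs, not the authors' implementation.

```python
# Correlation coefficient of Eqs. (14)-(17): r = cov(x, y) / (sqrt(D(x)) * sqrt(D(y))).

def correlation(xs, ys):
    n = len(xs)
    ex = sum(xs) / n                                   # E(x), Eq. (16)
    ey = sum(ys) / n
    cov = sum((x - ex) * (y - ey) for x, y in zip(xs, ys)) / n   # Eq. (15)
    dx = sum((x - ex) ** 2 for x in xs) / n            # D(x), Eq. (17)
    dy = sum((y - ey) ** 2 for y in ys) / n
    return cov / (dx ** 0.5 * dy ** 0.5)               # Eq. (14)
```

Perfectly linearly related samples give r = 1 (or -1), while a well-encrypted image should give values near 0, as in Table 5.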
The distribution of adjacent pixels in plain image and cipher image. (a–c) Horizontal, vertical, and diagonal distribution of plain image. (d–f) Horizontal, vertical, and diagonal distribution of cipher image.
The experimental results for the correlation coefficients of the plain image and cipher image are shown in Table 5. The correlation coefficients of a cipher image should be very close to the theoretical value of zero, and our results satisfy this requirement. Table 5 also lists the comparison with other image encryption algorithms [5,13,14,20]. All the experimental results are very close to zero, which shows that the proposed algorithm has a strong ability to resist statistical attacks.
Correlation coefficient under the proposed method and the comparison with other algorithms
Correlation coefficient

                 Horizontal  Vertical  Diagonal
Lena (plain)     0.9900      0.9807    0.9719
Proposed method 0.0036 0.0068 -0.0029
Wu et al. [5] -0.0080 0.0098 -0.0058
Rehman et al. [13] -0.0238 -0.0013 0.0006
Liu et al. [14] 0.0011 -0.0013 -0.0019
Chai et al. [20] -0.0027 0.0033 -0.0035
5.3 Information Entropy Analysis
The information entropy can measure the uncertainty of an information source, which can be calculated by Eq. (18).
[TeX:] $$H(x)=-\sum_{i=0}^{L-1} p_{i} \log _{2} p_{i}$$
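Eq. (18) can be sketched as follows for one 8-bit channel (256 gray levels); this is an illustrative helper, not the authors' code.

```python
# Information entropy of Eq. (18) over the gray levels of one channel.
import math


def entropy(pixels, levels=256):
    n = len(pixels)
    counts = [0] * levels
    for v in pixels:
        counts[v] += 1
    # Sum -p_i * log2(p_i) over the levels that actually occur.
    return -sum((c / n) * math.log2(c / n) for c in counts if c)
```

A perfectly uniform 8-bit channel reaches the maximum entropy of 8 bits, which is why the values in Table 6 are compared against the theoretical value of 8.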
Hence, the uncertainty of the cipher image can be evaluated by computing the image information entropy. The comparison with other algorithms [1,2,14,21] is given in Table 6. All the experimental results are very close to the theoretical value of 8 in each channel component, which indicates that the cipher image has sufficient randomness to resist entropy attacks.
Information entropy under the proposed method and comparison with other algorithms
Method          R channel   G channel   B channel
Proposed method 7.9993 7.9992 7.9993
Arpaci et al. [1] 7.9949 7.9945 7.9941
Kang and Guo [2] 7.9980 7.9979 7.9978
Liu et al. [14] 7.9992 7.9993 7.9994
Li et al. [21] 7.9991 7.9973 7.9967
5.4 Differential Attack Analysis
A differential attack is a strong attack in which the attacker slightly changes pixel values in the plain image and examines the differences between the resulting cipher images to crack the cipher keys. The number of pixel change rate (NPCR) and the UACI are two indexes used to evaluate whether an encryption algorithm can resist differential attacks. They are defined in Eqs. (19)–(21), where P(i,j) and C(i,j) are the pixel values of the two cipher images of size [TeX:] $$M \times N.$$
$$NPCR=\frac{\sum_{i=0}^{M-1} \sum_{j=0}^{N-1} D(i, j)}{M \times N} \times 100 \%$$

$$D(i, j)= \begin{cases}1 & \text{if } P(i, j) \neq C(i, j) \\ 0 & \text{else}\end{cases}$$

$$UACI=\frac{\sum_{i=0}^{M-1} \sum_{j=0}^{N-1} \frac{|P(i, j)-C(i, j)|}{255}}{M \times N} \times 100 \%$$
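Eqs. (19)–(21) translate directly into NumPy; this sketch (not the authors' code) assumes two equally sized 8-bit cipher images:

```python
import numpy as np

def npcr_uaci(c1, c2):
    """NPCR and UACI in percent between two cipher images, Eqs. (19)-(21)."""
    c1 = np.asarray(c1, dtype=np.int64)
    c2 = np.asarray(c2, dtype=np.int64)
    npcr = 100.0 * (c1 != c2).mean()                  # Eqs. (19)-(20)
    uaci = 100.0 * (np.abs(c1 - c2) / 255.0).mean()   # Eq. (21)
    return npcr, uaci
```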
In this part, two plain images with a one-pixel value difference are selected for the differential attack test, and the experimental results are given in Table 7. The theoretical values of NPCR and UACI are 99.6094% and 33.4635%, respectively. The comparisons with other algorithms are also given in Table 7 [2,14,21]. It can be noticed that our algorithm has the best performance in both the NPCR and UACI tests compared to other image encryption algorithms. Therefore, our algorithm has a strong ability to resist differential attacks.
Table 7. Experimental results (%) and the comparison with other algorithms of differential attack

                  | R channel    | G channel    | B channel
Method            | NPCR   UACI  | NPCR   UACI  | NPCR   UACI
Proposed method   | 99.61  33.49 | 99.62  33.43 | 99.60  33.47
Kang and Guo [2]  | 99.65  33.46 | 99.65  33.47 | 99.65  33.44
Liu et al. [14]   | 99.60  33.06 | 99.59  30.59 | 99.64  27.60
Li et al. [21]    | 99.60  33.25 | 99.62  33.50 | 99.61  33.39
5.5 Cipher Key Analysis
5.5.1 Key space analysis
For a well-designed image encryption algorithm, the key space should be larger than $2^{100}$ to resist brute-force attacks. The cipher key of the proposed system is constituted by four initial values of the hyper-chaotic Lorenz system, $x_{0}, y_{0}, z_{0}, w_{0}$; four initial values of Chen's hyper-chaotic system, $m_{0}, n_{0}, p_{0}, k_{0}$; five control parameters of the chaotic systems, $r_{1}, r_{2}, t_{1}, t_{2}, t_{3}$; and four parameters of the pixel-diffusion algorithms, a, b, c, d. The initial values $x_{0}, y_{0}, m_{0}, n_{0}$ are in the range (-40, 40), $z_{0}, p_{0}$ are in the range (1, 81), and $w_{0}, k_{0}$ are in the range (-250, 250). All the initial values are floating-point numbers with an accuracy of $10^{-14}$. The control parameters $r_{1}, r_{2}, t_{1}, t_{2}, t_{3}$ and a, b, c, d are integers in the range [0, 255]. Hence, the key space of the proposed algorithm can be calculated by Eq. (22).
$$(80)^{6} \times(500)^{2} \times\left(10^{14}\right)^{8} \times(256)^{9} \cong 3.0948501 \times 10^{150} \cong 2^{500}$$
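The arithmetic of Eq. (22) is easy to verify directly:

```python
import math

# Six initial values over ranges of width 80, two over width 500, all eight
# represented with 10^-14 precision, and nine integer parameters in [0, 255].
key_space = 80**6 * 500**2 * (10**14)**8 * 256**9
print(f"{key_space:.4e}")           # → 3.0949e+150
print(round(math.log2(key_space)))  # → 500
```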
The comparisons with other existing algorithms [1,5,21,23,24] are shown in Table 8. It can be noticed that only the key space in [21] is larger than ours, and the key space of our scheme is much larger than the theoretical threshold of $2^{100}$. Hence, our algorithm has a strong ability to resist brute-force attacks.
Table 8. The key space of the proposed method and the comparison with other algorithms

Method             | Key space                  | Theoretical value
Proposed method    | $2^{500} \approx 10^{150}$ | $>2^{100}$
Arpaci et al. [1]  | $2^{232}$                  | $>2^{100}$
Wu et al. [5]      | $2^{349}$                  | $>2^{100}$
Li et al. [21]     | $2^{576}$                  | $>2^{100}$
Hraoui et al. [23] | $>2^{100}$                 | $>2^{100}$
Wu et al. [24]     | $10^{117}$                 | $>2^{100}$
5.5.2 Key sensitivity analysis
In a well-designed cryptosystem, the cipher key is supposed to be very sensitive to slight alterations. In this part, the key sensitivity is tested by slightly changing the parameters during decryption and examining the difference between the correctly decrypted image and the images decrypted with changed parameters. The experimental results are shown in Fig. 6, where Fig. 6(a) is the decrypted image with the correct cipher key, while Fig. 6(b)–6(e) are decrypted images with slightly changed cipher keys. It can be noticed from Fig. 6 that all the changed cipher keys fail to recover the image and show no correlation with the plain image. Therefore, the proposed system has a high key sensitivity against blind decryption.
Experimental results of the key sensitivity test: (a) decrypted image with the correct key, (b) decrypted image with $x_{0}=26.7836$, (c) decrypted image with $m_{0}=37.5081$, (d) decrypted image with $r_{1}=59$, and (e) decrypted image with a = 77.
6. Conclusion

This paper proposes a hierarchical bilateral diffusion architecture for color image encryption based on the hyper-chaotic system and DNA sequence operation. To resist exhaustive attacks, the hyper-chaotic Lorenz system and Chen's hyper-chaotic system are compounded and weighted to generate cipher matrixes, which results in high key sensitivity with a key space of $2^{500}$. On the other hand, to guarantee system security without sacrificing too much computational time, this paper introduces an encryption mechanism with a hierarchical bilateral diffusion architecture to resist various potential attacks.

Under the specific diffusion architecture of forward pixel-diffusion, DNA-based block-diffusion, and backward pixel-diffusion, the cipher image shows an excellent ability to resist statistical attacks and differential attacks. Specifically, our scheme reduces the correlation coefficient of the plain image from 0.9809 to 0.0044 and increases the information entropy of the plain image from 7.2736 to 7.9993 to overcome statistical attacks. Also, the experimental results of differential attacks indicate that the NPCR and UACI reach 99.61% and 33.46% on average, which is very close to the theoretical values of 99.6094% and 33.4635%. Meanwhile, the computational time consumption under the proposed image encryption scheme is relatively lower than that of some current image encryption algorithms, costing only 3.30 seconds to encrypt one image.

All the experimental results demonstrate that the proposed mechanism is highly efficient in encrypting images, with prominent characteristics such as a large key space, strong key sensitivity, low time consumption, and a superior ability to resist a variety of typical attacks. Thus, the proposed diffusion architecture provides an effective method for color image encryption, with expansive application prospects including wireless communication, network data transmission, and big data storage.
In the future, this work may achieve even better performance and broader application scenarios under the combination of various image processing techniques, such as image steganography and image compression.
This work was funded by the Beijing Natural Science Foundation - Haidian Original Innovation Joint Fund Project (No. L182039).
Menglong Wu
He received his Ph.D. in communications and information systems from Beijing University of Posts and Telecommunications. He is currently an associate professor of Information Science and Technology at the North China University of Technology. His research mainly concerns optical communication.
Yan Li
She received her B.S. degree in Electronic and Information Engineering from the North China University of Technology in 2018 and has been a graduate student in Electronics and Communication Engineering at the same university since 2018.
Wenkai Liu
He received his Ph.D. from the Institute of Semiconductors, Chinese Academy of Sciences, in 2002. He is currently a professor of Information Science and Technology at the North China University of Technology. His research mainly concerns optical communication, photonic devices, and photonic integration.
1 B. Arpaci, E. Kurt, K. Celik, "A new algorithm for the colored image encryption via the modified Chua's circuit," Engineering Science and Technology: An International Journal, vol. 23, no. 3, pp. 595-604, 2020.
2 X. Kang, Z. Guo, "A new color image encryption scheme based on DNA encoding and spatiotemporal chaotic system," Signal Processing: Image Communication, vol. 80, article no. 115670, 2020.
3 A. Ur Rehman, D. Xiao, A. Kulsoom, M. A. Hashmi, S. A. Abbas, "Block mode image encryption technique using two-fold operations based on chaos, MD5 and DNA rules," Multimedia Tools and Applications, vol. 78, no. 7, pp. 9355-9382, 2019.
4 A. Shakiba, "A randomized CPA-secure asymmetric-key chaotic color image encryption scheme based on the Chebyshev mappings and one-time pad," Journal of King Saud University - Computer and Information Sciences, vol. 33, no. 5, pp. 562-571, 2021.
5 X. Wu, J. Kurths, H. Kan, "A robust and lossless DNA encryption scheme for color images," Multimedia Tools and Applications, vol. 77, no. 10, pp. 12349-12376, 2018. doi: 10.1007/s11042-017-4885-5
6 S. M. Seyedzadeh, B. Norouzi, M. R. Mosavi, S. Mirzakuchaki, "A novel color image encryption algorithm based on spatial permutation and quantum chaotic map," Nonlinear Dynamics, vol. 81, no. 1, pp. 511-529, 2015.
7 X. Hu, L. Wei, W. Chen, Q. Chen, Y. Guo, "Color image encryption algorithm based on dynamic chaos and matrix convolution," IEEE Access, vol. 8, pp. 12452-12466, 2020.
8 S. S. Askar, A. A. Karawia, A. Alshamrani, "Image encryption algorithm based on chaotic economic model," Mathematical Problems in Engineering, vol. 2015, article no. 341729, 2015.
9 X. Zhang, X. Wang, "Digital image encryption algorithm based on elliptic curve public cryptosystem," IEEE Access, vol. 6, pp. 70025-70034, 2018.
10 Z. Tang, J. Song, X. Zhang, R. Sun, "Multiple-image encryption with bit-plane decomposition and chaotic maps," Optics and Lasers in Engineering, vol. 80, pp. 1-11, 2016.
11 L. M. Adleman, "Molecular computation of solutions to combinatorial problems," Science, vol. 266, no. 5187, pp. 1021-1024, 1994.
12 S. K. Pujari, G. Bhattacharjee, S. Bhoi, "A hybridized model for image encryption through genetic algorithm and DNA sequence," Procedia Computer Science, vol. 125, pp. 165-171, 2018.
13 A. U. Rehman, H. Wang, M. M. A. Shahid, S. Iqbal, Z. Abbas, A. Firdous, "A selective cross-substitution technique for encrypting color images using chaos, DNA rules and SHA-512," IEEE Access, vol. 7, pp. 162786-162802, 2019.
14 Z. Liu, C. Wu, J. Wang, Y. Hu, "A color image encryption using dynamic DNA and 4-D memristive hyper-chaos," IEEE Access, vol. 7, pp. 78367-78378, 2019.
15 J. Kalpana, P. Murali, "An improved color image encryption based on multiple DNA sequence operations with DNA synthetic image and chaos," Optik, vol. 126, no. 24, pp. 5703-5709, 2015.
16 X. Li, L. Wang, Y. Yan, P. Liu, "An improvement color image encryption algorithm based on DNA operations and real and complex chaotic systems," Optik, vol. 127, no. 5, pp. 2558-2565, 2016.
17 X. Y. Wang, H. L. Zhang, X. M. Bao, "Color image encryption scheme using CML and DNA sequence operations," Biosystems, vol. 144, pp. 18-26, 2016. doi: 10.1016/j.biosystems.2016.03.011
18 X. Chai, Y. Chen, L. Broyde, "A novel chaos-based image encryption algorithm using DNA sequence operations," Optics and Lasers in Engineering, vol. 88, pp. 197-213, 2017.
19 B. Mondal, T. Mandal, "A light weight secure image encryption scheme based on chaos & DNA computing," Journal of King Saud University - Computer and Information Sciences, vol. 29, no. 4, pp. 499-504, 2017.
20 X. Chai, X. Fu, Z. Gan, Y. Lu, Y. Chen, "A color image cryptosystem based on dynamic DNA encryption and chaos," Signal Processing, vol. 155, pp. 44-62, 2019. doi: 10.1016/j.sigpro.2018.09.029
21 P. Li, J. Xu, J. Mou, F. Yang, "Fractional-order 4D hyperchaotic memristive system and application in color image encryption," EURASIP Journal on Image and Video Processing, vol. 2019, article no. 22, 2019. doi: 10.1186/s13640-018-0402-7
22 X. Wu, B. Zhu, Y. Hu, Y. Ran, "A novel color image encryption scheme using rectangular transform-enhanced chaotic tent maps," IEEE Access, vol. 5, pp. 6429-6436, 2017. doi: 10.1109/ACCESS.2017.2692043
23 S. Hraoui, F. Gmira, M. F. Abbou, A. J. Oulidi, A. Jarjar, "A new cryptosystem of color image using a dynamic-chaos hill cipher algorithm," Procedia Computer Science, vol. 148, pp. 399-408, 2019.
24 J. Wu, X. Liao, B. Yang, "Color image encryption based on chaotic systems and elliptic curve ElGamal scheme," Signal Processing, vol. 141, pp. 109-124, 2017. doi: 10.1016/j.sigpro.2017.04.006
Published (Print): February 28 2022
Published (Electronic): February 28 2022
Corresponding Author: Menglong Wu* , [email protected]
Menglong Wu*, School of Information Science and Technology, North China University of Technology, Beijing, China, [email protected]
Yan Li, School of Information Science and Technology, North China University of Technology, Beijing, China, [email protected]
Wenkai Liu, School of Information Science and Technology, North China University of Technology, Beijing, China, [email protected]
Izabella Łaba
Izabella Łaba (born 1966)[1] is a Polish-Canadian mathematician, a professor of mathematics at the University of British Columbia. Her main research specialties are harmonic analysis, geometric measure theory, and additive combinatorics.
Izabella Łaba
Born: 1966
Nationality: Polish
Alma mater: University of Toronto
Known for: Harmonic analysis, geometric measure theory, additive combinatorics
Awards: Coxeter–James Prize (2004); Krieger–Nelson Prize (2008)
Scientific career
Fields: Mathematics
Institutions: University of British Columbia
Professional career
Łaba earned a master's degree in 1986 from the University of Wrocław.[2] She received her PhD from the University of Toronto in 1994, under the supervision of Israel Michael Sigal,[2][3] after which she was a postdoctoral scholar at University of California, Los Angeles and then an assistant professor at Princeton University before moving to UBC in 2000.[2]
She is one of three founding editors of the Online Journal of Analytic Combinatorics.[4]
Contributions
Łaba's thesis research proved the asymptotic completeness of many n-body systems in the presence of a constant magnetic field.[2][5] While at UCLA, with Nets Katz and Terence Tao, she made important contributions to the theory of Kakeya sets, including the best known lower bound on these sets in three-dimensional Euclidean spaces.[2][5] Her more recent work concerns harmonic analysis, periodic tilings, and Falconer's conjecture on sets of distances of points.[5]
Awards and honours
Łaba was the 2004 winner of the Coxeter–James Prize, an annual prize of the Canadian Mathematical Society for outstanding young mathematicians.[2] In 2008, the CMS honoured her again with their Krieger–Nelson Prize, given to an outstanding woman in mathematics.[5]
In 2012 she became a fellow of the American Mathematical Society.[6]
References
1. Birth date from Wolff, Thomas H. (2003), Łaba, Izabella; Shubin, Carol (eds.), Lectures on harmonic analysis, University Lecture Series, vol. 29, Providence, RI: American Mathematical Society, ISBN 0-8218-3449-5, MR 2003254, Library of Congress Cataloging-in-Publication Data, p. iv.
2. CMS 2004 Coxeter-James Prize: Dr. Izabella Łaba (University of British Columbia), Canadian Mathematical Society, retrieved 2013-01-28.
3. Izabella Laba at the Mathematics Genealogy Project
4. Editorial board, Online Journal of Analytic Combinatorics, retrieved 2013-01-28.
5. CMS 2008 Krieger-Nelson Prize: Dr. Izabella Łaba (University of British Columbia), retrieved 2013-01-28.
6. List of Fellows of the American Mathematical Society, retrieved 2013-01-27.
External links
• Home page
• The Accidental Mathematician, Łaba's blog
\begin{definition}[Definition:Bounded Metric Space]
Let $M = \struct {A, d}$ be a metric space.
Let $M' = \struct {B, d_B}$ be a subspace of $M$.

Then $M'$ is '''bounded (in $M$)''' {{iff}}:
:$\exists a \in A, K \in \R: \forall x \in B: \map d {x, a} \le K$
\end{definition}
Biologically encoded magnonics
Benjamin W. Zingsem [1,2] (ORCID: 0000-0002-9899-2700),
Thomas Feggeler [1] (ORCID: 0000-0003-1817-2276),
Alexandra Terwey [1],
Sara Ghaisari [3],
Detlef Spoddig [1],
Damien Faivre [3,4] (ORCID: 0000-0001-6191-3389),
Ralf Meckenstock [1],
Michael Farle [1] &
Michael Winklhofer [1,5,6] (ORCID: 0000-0003-1352-9723)
Nature Communications, volume 10, Article number: 4345 (2019)
Spin wave logic circuits using quantum oscillations of spins (magnons) as carriers of information have been proposed for next generation computing with reduced energy demands and the benefit of easy parallelization. Current realizations of magnonic devices have micrometer sized patterns. Here we demonstrate the feasibility of biogenic nanoparticle chains as the first step to truly nanoscale magnonics at room temperature. Our measurements on magnetosome chains (ca 12 magnetite crystals with 35 nm particle size each), combined with micromagnetic simulations, show that the topology of the magnon bands, namely anisotropy, band deformation, and band gaps are determined by local arrangement and orientation of particles, which in turn depends on the genotype of the bacteria. Our biomagnonic approach offers the exciting prospect of genetically engineering magnonic quantum states in nanoconfined geometries. By connecting mutants of magnetotactic bacteria with different arrangements of magnetite crystals, novel architectures for magnonic computing may be (self-) assembled.
Future demands for computational power cannot be met by the current electrically powered silicon-based technology due to fundamental physical and economical limits in power consumption, generation of heat, and electromigration [1,2]. While the computing capacity of the human brain is equivalent to exaFLOPS (10^18 floating point operations per second) [3], operating at a moderate 37 °C, the most advanced super computers today are limited to petaFLOPS (10^15) whilst being immensely power-hungry in order to perform such computations and even more so to dissipate the heat which is generated in the process. This problem has led to the investigation of alternative approaches to integrated micro processing like quantum or magnonic computing [4,5,6,7,8,9,10,11,12]. Other future concepts envision parallel computation networks inspired by macromolecular chemistry and cell biology, exploiting the possibilities of chemically controlled self-assembly—for example cytoskeletal filaments propelled by motor proteins through a hierarchical mesh of nanofabricated channels [13].
Here we suggest the combination of biology and solid state magnetism to control magnetic quantum excitations—called magnons—in genetically engineered networks of magnetotactic bacteria for resource efficient computing. We demonstrate that spin wave dispersions can be modified through bacterial genetics, paving the way towards bio-magnonic computing. Although such magnetic nano-assemblies can be constructed in various ways, this biogenic approach yields the potential to drastically reduce the environmental footprint [14] of electronic devices, as biological cells are used instead, and the active material (magnetite, Fe3O4) is sustainably recyclable. Furthermore, the use of self-reproducing biological cells allows for a low cost natural exponential upscaling. Additionally, magnons as carriers of information drastically reduce energy demands as compared to conventional electronics because less energy is needed to excite a magnon and negligible amounts of energy are dissipated into heat [8,12]. Another advantage of magnonic logic gates is the prospect of using operation frequencies in the terahertz regime [15], three orders of magnitude above what is achievable in current semiconductor electronics. Thus, a magnon processor the size of a conventional CPU chip would not only beat current silicon technology (Supplementary Fig. 1), but has the potential to outperform the human brain, scaling the ambitious computing facility envisioned by the Human Brain Project [3] down to a single chip—as is demonstrated in the supplementary text.
Magnonic devices based on lithography have been considered for many innovative, energy efficient computer technologies such as information transport and logic circuits operating at high efficiency [8,16], but thus far have been realized on the micrometer scale only [4,5,6,7,10,11]. We demonstrate the potential of nano-sized biogenic magnetite crystals as truly nanoscale magnonic devices using bacteria with genetically encoded arrangements of nanoscale, dipolar-coupled magnets. This biomagnonic approach offers the perspective of tailoring future computing devices, employing biological tools such as directed evolution [17], genetic engineering, or synthetic biology [18].
Spin waves—or their quanta called magnons—are collective oscillations of dipolar or exchange-coupled magnetic moments in a magnetic solid. They can be excited thermally or via microwaves at energies as low as 10 µeV. Their wavelength can be substantially shorter than the corresponding wavelength of electromagnetic waves in vacuum and controlled by geometry and size (finite size effects and nanoscale geometry) as well as by magnetostatic coupling between magnetic moments. Magnonic information exchange is mediated by Joule-heat-free transfer of spin information over long distances, which makes spin waves excellent candidates for so-called magnon spintronics and computing [4,5,6,7,8,9,10,11,12].
In the following, we experimentally show and computationally confirm that the magnonic fine structure of nanoparticle chains, and thereby the spatial amplitude and phase profile, can be altered by means of genetic mutations in magnetotactic bacteria. As an experimental realization of a nano-magnonic system, we analyzed magnetosome chains in the magnetotactic bacterium Magnetospirillum gryphiswaldense (strain MSR-1, wildtype), whose well-defined magnetic properties have been comprehensively studied before [19]. These chains are composed of magnetite single crystals, which are magnetic single domains and oriented with one of their magnetic easy <111> axes along the chain axis [20,21,22,23,24]. We selected cells containing a single-strand chain each, consisting of up to 12 magnetite crystals (35 nm particle size) arranged in a linear configuration within the cell body. Each crystal is enclosed in a vesicle composed of a lipid-bilayer membrane (ca. 4 nm thickness) [25], and different membrane proteins controlling the magnetosome chain formation [26]. The organic, non-magnetic, spacer material separates adjacent magnetic crystals by at least 8 nm [25] and thus completely suppresses magnetic exchange interactions between them. Hence, collective magnetic phenomena in the magnetosome chain, such as spin waves, are purely mediated by magnetostatic coupling between the particles.
FMR spectra of a single magnetosome chain exhibit magnonic band gaps
To study magnetotactic bacteria at the single cell level we used a resonant microcavity (ref. 27, see Fig. 1 and Supplementary Fig. 2) at X-band microwave frequency (9.1 GHz) to excite and detect dipolar coupled (magnetostatic) spin waves in the magnetosome chain as a function of the strength and in-plane angle of a homogeneous magnetic field at room temperature. In the resulting ferromagnetic resonance (FMR) absorption spectra (Fig. 2a), one can recognize several FMR lines characterized by a strong angular-dependent anisotropy. The spectra exhibit a 180° periodicity, and the two most prominent lines (dashed) reveal the prevailing uniaxial shape anisotropy of the two linear chain segments, as confirmed with a micromagnetic model (Fig. 2b). For both lines, the anisotropy splitting ranges from 240 mT (the so-called low-field resonance) to 355 mT (high-field resonance), which indicate magnetostatically easy and hard axes, parallel and perpendicular to a chain axis, respectively. As can be seen from individual spectra in (Fig. 2a 0°) and (Fig. 2a 90°), the typical resonance line-width is only about 1 mT, much narrower than the ca 40 mT broad lines observed in a magnetically pre-aligned sample containing billions of magnetotactic bacteria cells [28]. Previous integral FMR absorption spectra of such bulk samples of cells [19,28,29,30] were explained by a theoretical model assuming that each chain responds like a single particle (ellipsoid) with an effective uniaxial anisotropy along a cubic <111> axis [31]. Here we show that the experimental FMR spectrum of a single cell (Fig. 2a) exhibits a diversity of features that are far beyond the simplified single particle model. Most notably, we observe a formation of spectral splitting of resonance modes in their angular dependence. Due to the strong exchange coupling inside each particle, continuous spatial variations of the phase within a given particle are forbidden (i.e., nonzero k-vectors cannot be accommodated within a particle).
Thus, spectral gaps between different mode patterns are, in fact, band gaps in the k-dependent spin-wave spectrum. This peculiarity can, for example, be observed at 90° (between 270 mT and 300 mT), which lies between the hard directions of the two major chain segments. Further examples of band-gaps can be seen throughout the spectrum. Another interesting aspect are bands that have unusual curvature, with the most pronounced example being the flat band ranging from 45° to 90° at 290 mT.
Measurement principle: The magnetotactic bacterium cell (left) is positioned inside the omega-shaped cavity of a microresonator (see also Supplementary Fig. S2) and is exposed to an RF-magnetic field (blue field lines) produced by the microwave electric current. The microwave energy, which is coupled into the micro stripline, is tuned to the eigenfrequency of the microresonator, geometrically defined by stubs A and B. A static external magnetic field is applied in the plane of the resonator setup and alters the resonance frequency of the particles as the field strength is decreased (Eq. 1). If one or multiple particles resonantly absorb at the fixed microwave frequency, the eigenfrequency of the system is perturbed, resulting in an increase of the reflected microwave power (a spectral peak). Each spectrum is acquired by sweeping the magnitude of the external magnetic field at a different angle.
Micro-FMR spectra of a single magnetic bacterium containing a straight chain of about a dozen magnetic nanocrystals. a Angular dependence of FMR spectra for the cell shown in the inset with magnetite particles appearing as bright dots; for each of the 180 angles, an individual FMR spectrum was recorded from 150 mT to 400 mT magnetic field strength. Two single spectra recorded at 0° and 90° are shown to the right (a 0°) & a 90°)). The amplitude is color-coded according to the color scale next to a), which is used throughout. The high-amplitude band at about 320 mT (g = 2.0 at 9.0 GHz) corresponds to an EPR (electron paramagnetic resonance) background. The two main FMR lines (marked with dashes) have maxima at 70° (355 mT) and 110° (355 mT), respectively, and are caused by the two main segments of the chain. (Inset) SEM micrograph of the magnetotactic bacterium (MSR-1, wildtype) in the resonator. The left, short segment of the particle chain is offset from the main chain segment. The bright contrast band on the top left is a section of the resonator loop. b FMR spectra obtained from a micromagnetic model of the magnetosome chain, computed for an applied field of 360 mT in the frequency domain (which is reciprocal to the field domain according to Eq. 1). As a guide to the eye, the two prominent lines corresponding to the anisotropy of chain segments are highlighted in accordance with a. The black dashed line indicates the position where the EPR line would be. c Each resonance line is marked in the color of the particle (inset) that gives the major contribution to that line, to help understand where a resonance mode is localized in the chain.
Micromagnetic understanding of FMR spectra
To gain insight into the origin of the evolution of these resonances, we performed GPU-based micromagnetic calculations using MuMax3 [32]. We adopted the geometry of the model structure from the scanning electron microscopy (SEM) in-plane projected images of the magnetosome chains and used the material parameters for magnetite (see "Methods"). To model the microwave excitation, we designed a spatially uniform, time-dependent field pulse containing all frequencies between 1 and 29 GHz with equal amplitude. The dynamic magnetic response was obtained in the frequency domain by Fourier transformation. This procedure was performed for two applied magnetic field strengths in saturation to confirm that all frequency-dependent features project linearly under variation of the applied field, i.e.,
$$f = \frac{{g\mu _{\mathrm{B}}}}{h}\left( {B_{{\mathrm{app}}} + B_{{\mathrm{anis}}}} \right)$$
where g is the g-factor (2.1 for magnetite [33]), μB is the Bohr magneton, h is the Planck constant, Bapp is the applied field, and Banis is the anisotropy field, which here, i.e., for sufficiently large applied fields, does not depend on the applied field [34]. We observe a good agreement between experimental (Fig. 2a) and simulated spectrum (Fig. 2b), in terms of the resonance field range, band gaps, and band deformations. Based on the micromagnetic analysis, we can identify the origin of the two main modes in the experimental spectrum (Fig. 2). The majority criterion applied to the data in Fig. 2c allows a more detailed view on which mode is dominated by which particle. For example, modes dominated by the light green and orange particle at either chain-end exhibit much smaller anisotropy than modes dominated by the pink particle. Their anisotropy span (Δf) is about 1.5 GHz, while the pink particle dominates modes spanning ca. 3 GHz. This reduction in effective anisotropy occurs because these end particles have only one neighbor each, and therefore experience smaller local effective dipolar-induced anisotropy, hence a lower Banis. Remarkably, continuous lines are not always dominated by the same particle. For example, the pink particle dominates the intensive mode at low field (260 mT, 0°) until about 30°, where the dark green particle takes over. In the angular range between 30° and 55°, the pink particle jumps across a spectral gap and dominates a different mode at high field from 50° onward; from 140° to 160° the pink particle has a large part in two different resonance modes. One between 10 and 11 GHz, the other one between 11 and 12 GHz. Similar features can be observed throughout the spectrum.
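For orientation, Eq. (1) can be evaluated numerically; in the sketch below the field values are hypothetical examples, while g = 2.1 is the magnetite value quoted above:

```python
MU_B = 9.2740100783e-24    # Bohr magneton (J/T)
H_PLANCK = 6.62607015e-34  # Planck constant (J s)

def resonance_frequency(b_app, b_anis, g=2.1):
    """Eq. (1): f = (g * mu_B / h) * (B_app + B_anis); fields in tesla, f in Hz."""
    return g * MU_B / H_PLANCK * (b_app + b_anis)

# A ~310 mT applied field with negligible anisotropy field lands near the
# 9.1 GHz X-band excitation frequency used in the experiment:
print(resonance_frequency(0.310, 0.0) / 1e9)  # ≈ 9.1 (GHz)
```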
To further understand the emergence of spectral features, we have separately modeled the two segments of the chain consisting of eight and four particles, respectively (see Fig. 3). The spectra shown in Fig. 3a, b confirm the assignment of the experimental spectrum (Fig. 2a) to the chain segments. More importantly, the simulated spectrum of the complete chain (Fig. 3d) has fewer lines compared to the superposition (Fig. 3c) of spectra a and b. This reduction is a direct result of coupling between the two segments, which promotes collective oscillations. Another consequence of the inter-segment coupling is an increase in the anisotropy in the short chain segment by about 0.5 GHz.
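The broadband excitation described above can be sketched in a few lines: a sinc-shaped pulse has a flat spectrum up to its cutoff frequency, so the Fourier transform of the response yields the full spectrum from a single run. This illustrates the general technique only (it is not the authors' MuMax3 script), and the pulse amplitude and sampling parameters are assumptions:

```python
import numpy as np

f_max = 29e9             # upper edge of the excitation band (Hz)
dt = 1 / (4 * f_max)     # time step, well below the Nyquist limit
t = (np.arange(4096) - 2048) * dt
pulse = 1e-3 * np.sinc(2 * f_max * t)   # sinc pulse, assumed 1 mT amplitude

# Spectrum of the excitation: nearly flat up to f_max, negligible above.
spectrum = np.abs(np.fft.rfft(pulse))
freqs = np.fft.rfftfreq(len(pulse), dt)
```

In the same way, Fourier transforming the simulated magnetization response gives the resonance spectrum in the frequency domain.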
Micromagnetic dissection of a magnetosome chain. Simulated FMR spectra for different segments of the magnetosome chain shown in inset of Fig. 2a. a Long segment with eight particles in a roughly linear configuration. b Short segment with the four particles which are slightly offset from the main segment. c The superposition of the spectra (a) and (b). d Complete chain. The difference between the spectra (c) and (d) is due to magnetostatic interactions between the two chain segments present in the twelve-particle model (d). e Left, enlarged SEM micrograph of the particle chain from the inset in Fig. 2a. Right, geometric model of the particle chain, showing the <100> crystallographic axis system (arrows) at the center positions of each particle (as determined from the SEM micrograph). The crystallographic axis system of each particle is assumed to be oriented such that one of its four <111> magnetic easy axes, shown as black rod, is aligned towards the nearest neighbor particle. The spatial orientation of the other three <111> axes of the particle varies randomly from one particle to the next. In magnetite, at temperatures above the isotropic point of 135 K, the <100> axes represent magnetic hard axes and the <111> axes represent magnetic easy axes
Geometry of magnetosome chain determines magnonic band structure
Since the nonlinearity of the chain structure in Fig. 2 appears to be responsible for a multitude of features, we next selected cells containing strongly curved magnetosome chains of the so-called ΔmamK mutant of the bacterium (Fig. 4b and Supplementary Fig. 3). The mutant lacks MamK, an actin-like cytoskeleton protein, which in the wildtype has a role in positioning the magnetosome chain in a midcell location and in equipartitioning it among daughter cells (refs. 35–37). As a result of the lack of this protein, we observe an increased amount of strongly curved and dendritic particle assemblies (Supplementary Fig. 4). Because of the chain curvature, the experimental spectrum (Supplementary Fig. 3c) of just two cells of the ΔmamK mutant exhibits a much broader variety of resonance lines, each following a different angular dependence. To gain microscopic insight into the correlation of spectral fine structure and geometric arrangement, we fed the SEM image (Fig. 4b) into our micromagnetic model. As we learned from the simulations of the straight chain (Figs. 2, 3), the dominant anisotropy is due to particle configuration (local effective shape anisotropy). Therefore, we now use the simplest model capable of explaining the essential features of the measured spectrum (i.e., band gaps and band deflection). That is, we here neglect higher order energy density terms such as cubic magnetocrystalline anisotropy, which is one order of magnitude smaller than the magnetostatic energy in the particles with nearly cubic shape.
Correlation of magnonic fine structure and geometric configuration of particles. a Simulated FMR spectra of seven magnetic nanocrystals in a coiled arrangement (see inset) reproduced as silhouettes from an electron micrograph (b) of a cell of the ∆mamK-mutant of MSR-1 (for experimental spectra, see Supplementary Fig. 4b). In (a), each resonance line is marked in the color of the particle that gives the major contribution to the amplitude of that line. The two end particles in (a) (blue and pink) are weakly coupled to the chain and their resonances are unperturbed over most of the angular range. Their uniaxial sin(2ϕ)-dependence, however, becomes distorted or even interrupted where a resonance line of the nearest neighbor is encountered. E.g., at about 100°, the pink particle most strongly interacts with the gray particle because the applied field is coaxial with the connecting line between the two particles. In consequence, the pink mode is deflected toward the gray mode, with which it merges. In contrast, at 120°, the external field destabilizes the interaction between the orange particle and the green particle, so that the green mode can interact with the blue mode, causing a band gap in the blue mode. The apparent sin(4ϕ) dependence of the pronounced resonance line on the top is a sequential merger of the five in-chain particles resonating one after another as the field angle is varied around the chain bend. Thus, each particle's resonance field/frequency mostly follows its local effective anisotropy. c Analysis of the spatial contributions to the upper-envelope line of the spectrum. Each letter corresponds to a position on the envelope and the phase and amplitude of the magnonic response is mapped according to the color scale (d) onto each point of the simulation grid (2.6 nm mesh size).
Phase and amplitude are uniform within a given particle, with red hue indicating resonant response (90° phase relative to excitation) and cyan hue indicating opposite-to-resonant response. The standing spin wave associated with this continuous resonance line (envelope) shifts through the discrete elements of the chain as the magnetic field angle is varied
Analogous to Fig. 2c, we applied a majority criterion to the simulated spectra to understand the microscopic origin of the band structure. From this representation (Fig. 4a), one can see that the continuous resonance line at the top (upper envelope) originates from chain segments that are approximately aligned with the external field, in the sense that the line is composed of easy axis resonances; similarly, the lower boundary of the magnon diagram is composed of hard axis resonances only. For example, the upper envelope is dominated by particle 5 between 40 and 100°, because this chain segment experiences the largest projection of the external magnetic field compared to the other segments. At 100°, the amplitude shifts around the bend of the chain, and particle 3 dominates from 100° to 150°, until particle 2 takes over. Thus, the absorption amplitude moves through the chain from particle to particle as a function of the field angle and causes magnon band deformation.
At the same time, each continuous line segment tracks the local effective anisotropy of one or multiple particles; that is, particles inside shorter chain segments have smaller anisotropies than particles in longer chain segments (compare, for example, the pink (1) and yellow (5) lines in Fig. 4a). However, each particle can only react to its local dipolar-induced anisotropy where band deformation is not favored, thus creating band gaps. In other words, band gaps are induced by dipolar coupling between neighboring particles, i.e., closely spaced particles induce larger band gaps where the resonance lines cross, whilst particles that are further apart cause smaller or no band gaps. This principle can be generalized to correlate the experimentally observed spectral gaps with spatial gaps, as schematically illustrated in Fig. 5.
Relation between energy gap and spatial gap. The energy gap (solid lines) between magnonic eigenstates (modes) of coupled particles decreases with the particle distance and depends on the number of particles involved in the modes. The arrows indicate the phase of the oscillating magnetic moment in each particle (squares). The red curve shows the band gap between the antiparallel oscillation mode and the parallel oscillation mode, i.e., same phase versus opposite phase resonance. The analog in a continuous system would be the transition between a uniform mode and a mode where the wavelength is equal to the length of the system. The blue curve shows the band gap between an antisymmetric short-wavelength mode (lower blue squares) and a symmetric mode with twice the wavelength (upper blue squares). Closely spaced particles (which have stronger dipolar coupling) exhibit larger spectral gaps between such modes than distant particles do. The discontinuous nature of the particle chain discretizes the modes in k-space, such that these spectral gaps in the collective eigenmode spectrum are also band gaps
Furthermore, band gaps can be observed for more complex modes involving multiple particles, which can accommodate different standing spin-wave patterns. These are dipolar coupled modes in which each particle resonates uniformly, albeit with different phase relationships between the particles. Examples of simple band gaps can be seen in Fig. 4a near 30°, where the resonances of particles 2 (gray) and 3 (light blue) show repulsive behavior, resulting in a small band gap. Similarly, at about 100°, a splitting of resonances between particles 1 (pink) and 2 (gray) occurs. As expected, the splitting is smaller for resonances involving next-nearest neighbors than for nearest neighbors. Consequently, no band gap is observable near 65°, where the end particles 1 (pink) and 7 (dark blue) resonate.
Spatial localization of spin-wave amplitude
Building on the knowledge that the geometric chain features have a direct influence on the magnonic structure, we next analyzed the spatial distribution of magnon amplitude to understand the microscopic origin of the angular-dependent band structure. For this purpose, Fig. 4c shows the spatial distribution of spin-wave amplitude for the continuous resonance line forming the upper envelope of the FMR spectrum simulated for the curved chain. It now becomes apparent that the envelope line does not represent the anisotropy of a single chain segment, but, surprisingly, represents a standing spin wave that shifts through the chain as the magnetic field angle is changed. Thereby, the spin-wave amplitude is always localized in those chain segments that are approximately aligned with the external field, in the sense that the line is composed of easy axis resonances, as was already inferred from the majority criterion (Fig. 4a). At 150° (position V, W in Fig. 4c), the amplitude is localized in particle pair 2 and 3, then at 100° (O-Q) in pair 3 and 4, and at 60° (H-J) in the triplet 4, 5, and 6.
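The per-cell phase and amplitude maps of Fig. 4c come from Fourier-transforming the time series of each simulation cell. The authors' actual tool is the Java program listed under "Code availability"; the following is a minimal, hedged Python analogue, with an illustrative synthetic signal (array shapes and the 10 GHz-range test frequency are assumptions, not study values).

```python
import numpy as np

def cell_response(mz_t, dt, f_res):
    """mz_t: array of shape (n_time, ny, nx) with the magnetization of each
    simulation cell over time. Returns amplitude and phase maps taken at the
    frequency bin closest to f_res."""
    spec = np.fft.rfft(mz_t, axis=0)          # per-cell FFT along the time axis
    freqs = np.fft.rfftfreq(mz_t.shape[0], dt)
    k = int(np.argmin(np.abs(freqs - f_res)))  # nearest frequency bin
    return np.abs(spec[k]), np.angle(spec[k])

# Synthetic example: a 1x2 "chain" where only the first cell oscillates,
# at a frequency chosen to sit exactly on an FFT bin (no leakage).
n, dt = 2048, 1e-12
f0 = 20 / (n * dt)                       # ~9.77 GHz
t = np.arange(n) * dt
mz = np.zeros((n, 1, 2))
mz[:, 0, 0] = np.sin(2 * np.pi * f0 * t)
amp, phase = cell_response(mz, dt, f0)   # amplitude localized in cell (0, 0)
```

A majority criterion as used in Fig. 4a would then simply take, for each mode, the particle whose summed cell amplitudes are largest.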
As a general rule, we observe that coupled systems avoid line crossings and instead exhibit band deflection at points where the resonances of decoupled systems would otherwise cross, which can be most prominently observed at O-R in Fig. 4c. This effect leads to unusual line deformations, which deviate from the symmetry of chain segments and are mediated by a spatially shifting spin-wave amplitude. On the other hand, when looking at any short enough line segment, one finds the local effective anisotropy of one or multiple particles to be represented. For example, the triplet 4, 5, 6 in Fig. 4c has a resonance line tracking its anisotropy along the path from F to M. This tracking is further illustrated in Fig. 4a, where the local effective anisotropy of individual particles becomes apparent although the lines are intersected by band gaps.
In summary, our results suggest the first step to a novel nanomagnonic concept based on dipolar coupled assemblies of nanoparticles. We further illustrate that there is leverage to genetically engineer and biologically grow such nanoparticle ensembles, suggesting a new research direction toward biomagnonics. In our room temperature nanoscale magnonic device, i.e., a magnetosome chain, each particle performs spatially uniform oscillations because the particle size is well below the wavelength of so-called exchange spin waves (ref. 38). As opposed to a diluted magnetic nanoparticle system, where each particle's resonance mode represents a continuous function of the applied field angle, the proximity of the particles in a dipolar coupled assembly promotes complex phase relationships and thus produces distinct features such as band gaps and band deformations. These features can be harnessed for device applications as they are tunable via the geometric arrangement and morphology of the particles. The underlying biological machinery used in this work may in future be exploited by genetic manipulation and directed evolution toward magnonic devices with desired transport properties.
Cultivation of magnetotactic bacteria
M. gryphiswaldense strain MSR-1 (DSM6361) as well as ΔmamK mutants were used in this study. The ΔmamK strain (ref. 35) was a gift from Dirk Schüler at the University of Bayreuth. The cultivation medium was prepared as described earlier (ref. 39). Briefly, 20 mL of cells from a subculture were transferred to 200 mL of medium in 0.5 L rubber-sealed flasks under microaerobic conditions (1% O2 in the headspace). Then, the flasks were incubated under gentle shaking (100 rpm) at 28 °C for 24 h (72 h for ΔmamK).
Bacterial cells were mixed with HEPES (2-[4-(2-hydroxyethyl)piperazin-1-yl]ethanesulfonic acid) buffer at pH = 7 to prevent them from osmotic bursting. The mixture of bacteria, nutrient medium, and HEPES was then centrifuged at 9000 rpm for 5 min at a temperature of 4 °C. Four centrifuge runs were performed and, after each run, the supernatant was substituted with HEPES buffer. The resulting pellet containing the bacterial cells was then re-dispersed in buffer solution, diluted, and transferred into a microcapillary mounted on a micromanipulator unit to deposit single cells of bacteria in a microresonator (Fig. S2). To obtain the single-cell sample shown in Figs. 2a and 3e, we used a focused Ga-ion beam (FEI Helios NanoLab 600 Dual Beam FIB/SEM system) to ablate extra cells of magnetic bacteria in the microresonator; electron micrographs of the samples (Figs. 2a, 3e and 4b) were obtained with the secondary electron detector in the FIB/SEM system at 10 kV acceleration voltage. After SEM inspection, the microresonator was connected via a low-loss semirigid coaxial cable to a conventional microwave bridge (Varian E102). The spectra were acquired with a modulation field of 0.5 mT at a modulation frequency of 123.45 Hz and recorded with a Stanford Research SR 830 DSP lock-in amplifier (original recordings available as SI material).
For transmission electron microscopy (Supplementary Fig. 4), cells from a pellet were dropped on a grid, air dried, and imaged in bright-field mode at 120 kV under an EM 912 Omega (Carl Zeiss Oberkochen).
Micromagnetic simulations
Simulations were set up in mumax3 (ref. 32), version 3.9.1, for the straight chain shown in Figs. 2 and 3 and for each of the curved chains shown in Fig. 4 and Supplementary Fig. 3 individually. Rather than using idealized geometries with perfectly aligned particle faces, which may produce symmetry-induced artifacts, we approximated the lateral morphology of a chain by binarizing SEM images. The binarized images were rescaled such that each pixel in the image corresponds to a pixel in the simulation grid. The height of each particle was fixed to a value of 36 nm. The cell size was set to 2.6 nm by 2.6 nm by 8 nm with a grid size of 64 by 128 by 6 elements. The simulation parameters, chosen to approximate magnetite, were set to Aex = 1.32e−11 J/m (ref. 40), Msat = 4.8e5 A/m, Kc1 = −1.10e4 J/m3 (ref. 33), and gammaLL = 1.85556e11 rad/(T s), corresponding to a g-factor of 2.11 (ref. 33). For the simulations of the linear chain with kink (Figs. 2 and 3) we included the cubic magnetocrystalline anisotropy. For the simulations of the crooked and ring-shaped chains (e.g., Fig. 4 and Supplementary Fig. 3), we use the simplest model capable of explaining the essential features of the measured spectrum (i.e., band gaps and band deflection). That is, we here neglect cubic magnetocrystalline anisotropy, which is one order of magnitude smaller than the magnetostatic energy in the particles with nearly cubic shape.
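As a consistency check of the listed parameters: the gyromagnetic ratio gammaLL follows from the g-factor via γ = gμB/ħ. The short Python check below (using standard CODATA constants; it is not part of the simulation input) reproduces the quoted value.

```python
# Sanity check: mumax3's gammaLL (in rad/(T*s)) from the g-factor,
# gamma = g * muB / hbar, using standard physical constants.
MU_B = 9.2740100783e-24   # Bohr magneton [J/T]
HBAR = 1.054571817e-34    # reduced Planck constant [J*s]

def gamma_ll(g):
    return g * MU_B / HBAR

gamma = gamma_ll(2.11)    # ~1.8556e11 rad/(T*s), matching the value in the text
```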
The following procedure was performed for each chain separately, for static field magnitudes of 280 and 360 mT. The applied field angle was iterated from 0° to 180° in steps of 1° in the plane of the chains. For each applied field configuration, the magnetization of the system was allowed to relax into energetic equilibrium. Then a time-dependent field pulse was modeled as a sinc function containing all frequencies from 0 to 30 GHz, and the simulation was run such that each frequency contributes with approximately the same amplitude. An example mumax input file for a chain consisting of eight particles (whose full angular dependence is shown in Fig. 3a) is available as Supplementary Information.
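Why a sinc pulse works as a broadband excitation can be illustrated with a small, self-contained sketch (Python with NumPy, not the actual mumax input; the sampling parameters are illustrative): a sinc-shaped pulse with a 30 GHz cutoff carries all frequencies below the cutoff with roughly equal amplitude and almost nothing above it, so one simulation run probes the whole 0–30 GHz band at once.

```python
import numpy as np

F_CUT = 30e9                 # pulse bandwidth [Hz], as in the simulations
DT = 1.0 / (4 * F_CUT)       # sampling interval, comfortably above Nyquist
N = 4096

# Band-limited excitation: np.sinc(x) = sin(pi*x)/(pi*x)
t = (np.arange(N) - N // 2) * DT
pulse = np.sinc(2 * F_CUT * t)

spectrum = np.abs(np.fft.rfft(pulse))
freqs = np.fft.rfftfreq(N, DT)

# Excitation amplitude: roughly flat below the cutoff, negligible well above it.
in_band = spectrum[freqs < F_CUT].mean()
out_band = spectrum[freqs > 1.2 * F_CUT].mean()
```

Fourier-transforming the simulated magnetization response to such a pulse then yields the full spectrum for each field configuration.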
The Supplementary Information contains the measured FMR data from Fig. 2a in tab-separated text format as well as an exemplary MuMax input script for the micromagnetic simulation of a magnetosome chain exposed to a broadband pulsed magnetic field. All relevant simulation output data are available from the authors upon request.
Code availability
Our custom Java program for extracting the dynamic response of each cell of the simulated micromagnetic structure is available at https://github.com/BenjaminWZingsem/Spatial_FFT_MumaxData.git. A series of MuMax output files that we simulated and processed for the paper (Fig. 3a) is available as Supplementary Information.
Denning, P. J. & Lewis, T. G. Exponential laws of computing growth. Commun. ACM 60, 54–65 (2017).
Stahlmecke, B. et al. Electromigration in self-organized single-crystalline nanowires. Appl. Phys. Lett. 88, 053122 (2006).
Markram, H. The human brain project. Sci. Am. 306, 50–55 (2012).
Khitun, A., Bao, M. & Wang, K. L. Magnonic logic circuits. J. Phys. D: Appl. Phys. 43, 264005 (2010).
Sato, N., Sekiguchi, K. & Nozaki, Y. Electrical demonstration of spin-wave logic operation. Appl. Phys. Express 6, 063001 (2013).
Urazhdin, S. et al. Nanomagnonic devices based on the spin-transfer torque. Nat. Nanotech. 9, 509–513 (2014).
Vogt, K. et al. Realization of a spin-wave multiplexer. Nat. Commun. 5, 3727 (2014).
Chumak, A. V., Vasyuchka, V. I. & Hillebrands, B. Magnon spintronics. Nat. Phys. 11, 453–461 (2015).
Wagner, K. et al. Magnetic domain walls as reconfigurable spin-wave nanochannels. Nat. Nanotech. 11, 432–436 (2016).
Brächer, T. et al. Phase-to-intensity conversion of magnonic spin currents and application to the design of a majority gate. Sci. Rep. 6, 38235 (2016).
Fischer, T. et al. Experimental prototype of a spin-wave majority gate. Appl. Phys. Lett. 110, 152401 (2017).
Chumak, A. V., Serga, A. A. & Hillebrands, B. Magnonic crystals for data processing. J. Phys. D: Appl. Phys. 50, 244001 (2017).
Nicolau, D. V. et al. Parallel computation with molecular-motor-propelled agents in nanofabricated networks. Proc. Natl Acad. Sci. USA 113, 2591–2596 (2016).
Villard, A., Lelah, A. & Brissaud, D. Drawing a chip environmental profile: environmental indicators for the semiconductor industry. J. Clean. Prod. 86, 98–109 (2015).
Chuang, T.-H. et al. Magnetic properties and magnon excitations in Fe(001) films grown on Ir(001). Phys. Rev. B 89, 174404 (2014).
Wang, Q. et al. Reconfigurable nanoscale spin-wave directional coupler. Sci. Adv. 4, e1701517 (2018).
Shapiro, M. G. et al. Directed evolution of a magnetic resonance imaging contrast agent for noninvasive imaging of dopamine. Nat. Biotechnol. 28, 264–270 (2010).
Kolinko, I. et al. Biosynthesis of magnetic nanostructures in a foreign organism by transfer of bacterial magnetosome gene clusters. Nat. Nanotechnol. 9, 193–197 (2014).
Fischer, H. et al. Ferromagnetic resonance and magnetic characteristics of intact magnetosome chains in Magnetospirillum gryphiswaldense. Earth Planet. Sci. Lett. 270, 200–208 (2008).
Frankel, R. B., Blakemore, R. P. & Wolfe, R. S. Magnetite in freshwater magnetotactic bacteria. Science 203, 1355–1356 (1979).
Dunin-Borkowski, R. E. et al. Magnetic microstructure of magnetotactic bacteria by electron holography. Science 282, 1868–1870 (1998).
Matsuda, T. et al. Morphology and structure of biogenic magnetite particles. Nature 302, 411–412 (1983).
Körnig, A. et al. Magnetite crystal orientation in magnetosome chains. Adv. Funct. Mater. 24, 3926–3932 (2014).
Fischer, A., Schmitz, M., Aichmayer, B., Fratzl, P. & Faivre, D. Structural purity of magnetite nanoparticles in magnetotactic bacteria. J. R. Soc. Interface 8, 1011–1018 (2011).
Gorby, Y. A., Beveridge, T. J. & Blakemore, R. P. Characterization of the bacterial magnetosome membrane. J. Bact. 170, 834–841 (1988).
Faivre, D. & Schüler, D. Magnetotactic bacteria and magnetosomes. Chem. Rev. 108, 4875 (2008).
Narkowicz, R., Suter, D. & Niemeyer, I. Scaling of sensitivity and efficiency in planar microresonators for electron spin resonance. Rev. Sci. Instr. 79, 084702 (2008).
Charilaou, M., Kind, J., Garcia-Rubio, I., Schüler, D. & Gehring, A. U. Appl. Phys. Lett. 104, 112406 (2014).
Weiss, B. P. et al. Ferromagnetic resonance and low-temperature magnetic tests for biogenic magnetite. Earth. Planet. Sci. Lett. 224, 73–89 (2004).
Ghaisari, S. et al. Magnetosome organization in magnetotactic bacteria unravelled by ferromagnetic resonance spectroscopy. Biophys. J. 113, 637–644 (2017).
Charilaou, M., Winklhofer, M. & Gehring, A. U. Simulation of ferromagnetic resonance spectra of linear chains of magnetite nanocrystals. J. Appl. Phys. 109, 093903 (2011).
Vansteenkiste, A. et al. The design and verification of MuMax3. AIP Adv. 4, 107133 (2014).
Bickford, L. R. Jr. Ferromagnetic resonance absorption in magnetite single crystals. Phys. Rev. 78, 449–457 (1950).
Zingsem, B. W., Winklhofer, M., Meckenstock, R. & Farle, M. A unified description of collective magnetic excitations. Phys. Rev. B 96, 224407 (2017).
Katzmann, E. et al. Loss of the actin-like protein MamK has pleiotropic effects on magnetosome formation and chain assembly in Magnetospirillum gryphiswaldense. Mol. Microbiol. 77, 208–224 (2010).
Katzmann, E. et al. Magnetosome chains are recruited to cellular division sites and split by asymmetric septation. Mol. Microbiol. 82, 1316–1329 (2011).
Toro-Nahuelpan, M. et al. Segregation of prokaryotic magnetosomes organelles is driven by treadmilling of a dynamic actin-like MamK filament. BMC Biol. 14, 88 (2016).
Rado, G. T. & Weertman, J. R. Spin-wave resonance in a ferromagnetic metal. J. Phys. Chem. Solids 11, 315–333 (1959).
Faivre, D., Menguy, N., Posfai, M. & Schüler, D. Environmental parameters affect the physical properties of fast-growing magnetosomes. Am. Mineral. 93, 463–469 (2008).
Heider, F. & Williams, W. Note on temperature dependence of exchange constant in magnetite. Geophys. Res. Lett. 15, 184–187 (1988).
We would like to thank Rafal E. Dunin-Borkowski for critical reading of the manuscript and helpful discussions. The research leading to these results has received funding from the Deutsche Forschungsgemeinschaft (DFG Wi1828/4-2 to M.W., DFG Ol513/1-1 to R.M. and T.F.), the European Research Council under the European Union's Seventh Framework Programme (FP7/2007-2013)/ERC grant agreement number 320832, the Max Planck Society and the European Research Council (ERC starting grant to D.F., no. 256915 MB2). S.G. was supported by the International Max Planck Research School (IMPRS) on Multiscale Biosystems. We acknowledge support by the Open Access Publication Fund of the University of Oldenburg.
Faculty of Physics and Center for Nanointegration (CENIDE), University Duisburg-Essen, 47057, Duisburg, Germany
Benjamin W. Zingsem
, Thomas Feggeler
, Alexandra Terwey
, Detlef Spoddig
, Ralf Meckenstock
, Michael Farle
& Michael Winklhofer
Ernst Ruska Centre for Microscopy and Spectroscopy with Electrons and Peter Grünberg Institute, Forschungszentrum Jülich GmbH, 52425, Jülich, Germany
Department of Biomaterials, Max-Planck Institute of Colloids and Interface Science, Golm, 14476, Potsdam, Germany
Sara Ghaisari
& Damien Faivre
CEA/CNRS/Aix-Marseille Université, UMR7265 Institut de biosciences et biotechnologies, Laboratoire de Bioénergétique Cellulaire, 13108, Saint Paul lez Durance, France
Damien Faivre
Institut für Biologie und Umweltwissenschaften, Carl von Ossietzky Universität Oldenburg, 26129, Oldenburg, Germany
Michael Winklhofer
Research Center Neurosensory Science, University of Oldenburg, D-26111, Oldenburg, Germany
B.Z., R.M., D.F., M.F. and M.W. conceived the project. A.T. and S.G. prepared samples. D.S. performed FIB and SEM. A.T. and R.M. performed measurements. B.Z., T.F. and M.W. designed and performed micromagnetic simulations. B.Z. processed the data. B.Z., R.M. and M.W. analyzed the data. B.Z. and M.W. wrote the paper. M.F. provided detailed feedback on the manuscript. All authors discussed the results and commented on the manuscript.
Correspondence to Benjamin W. Zingsem or Michael Winklhofer.
Peer review information Nature Communications thanks Jose Dominguez-Vera, Andrii Chumak and other, anonymous, reviewers for their contribution to the peer review of this work. Peer reviewer reports are available.
Publisher's note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Zingsem, B.W., Feggeler, T., Terwey, A. et al. Biologically encoded magnonics. Nat Commun 10, 4345 (2019) doi:10.1038/s41467-019-12219-0
Treatment policy change to dihydroartemisinin–piperaquine contributes to the reduction of adverse maternal and pregnancy outcomes
Jeanne Rini Poespoprodjo1,2,3,
Wendelina Fobia2,
Enny Kenangalem1,2,
Daniel A Lampah2,
Paulus Sugiarto4,
Emiliana Tjitra5,
Nicholas M Anstey6,7 &
Richard N Price6,8
In Papua, Indonesia, maternal malaria is prevalent, multidrug resistant and associated with adverse outcomes for mother and baby. In March 2006, anti-malarial policy was revised for the second and third trimester of pregnancy to dihydroartemisinin–piperaquine (DHP) for all species of malaria. This study presents the temporal analysis of adverse outcomes in pregnancy and early life following this policy change.
From April 2004 to May 2010, a standardized questionnaire was used to collect information from all pregnant women admitted to the maternity ward. A physical examination was performed on all live-born newborns. The relative risks (RR) of adverse outcomes in women with a history of malaria treatment compared to those without a history of malaria during the current pregnancy, and the associated population attributable risks (PAR), were examined to evaluate the temporal trends before and after DHP deployment.
Of 6,556 women enrolled with known pregnancy outcome, 1,018 (16%) reported prior anti-malarial treatment during their pregnancy. The proportion of women with malaria reporting treatment with DHP rose from 0% in 2004 to 64% (121/189) in 2010. In those with a history of malaria during pregnancy, the increasing use of DHP was associated with a 54% fall in the proportion of maternal malaria at delivery and a 98% decrease in congenital malaria (from 7.1% before to 0.1% after policy change). Overall, policy change to more effective treatment was associated with an absolute 2% reduction in maternal severe anaemia and an absolute 4.5% decrease in low birth weight babies.
Introduction of highly effective treatment in pregnancy was associated with a reduction of maternal malaria at delivery and improved neonatal outcomes. Ensuring universal access to artemisinin combination therapy (ACT) in pregnancy in an area of multidrug resistance has the potential to impact significantly on maternal and infant health.
In Papua Province, Indonesia, Plasmodium falciparum and Plasmodium vivax infections in pregnancy are associated with adverse maternal and neonatal outcomes [1]. Malaria control in pregnancy in this region includes the use of long-lasting insecticide-treated nets (LLIN) and provision of early detection and prompt treatment with an effective anti-malarial drug. Prior to 2006 local guidelines recommended the use of sulfadoxine-pyrimethamine (SP) and chloroquine (CQ) for uncomplicated malaria. However clinical trials highlighted high levels of resistance to both compounds [2]. Whilst quinine is an alternative treatment in pregnancy, prolonged treatment regimens are required and these result in poor adherence. Unsupervised seven-day courses of quinine in non-pregnant women are associated with more than 50% recurrent infections within 28 days [2]. Recurrent febrile illness in pregnancy has major adverse effects on both mother and foetus including maternal anaemia, low birth weight, miscarriage and perinatal mortality [3–5].
In view of the extremely poor efficacy of standard anti-malarial treatment, the first-line treatment policy for uncomplicated malaria in Timika was changed in March 2006 to dihydroartemisinin–piperaquine (DHP) for all species of malaria and in the second and third trimesters of pregnancy [2, 6]. Patients with severe malaria were treated with intravenous artesunate.
In this population a review of more than 1,000 pregnant women, treated with DHP highlighted its safety profile and ability to reduce recurrent infections and malaria at delivery [3]. After more than 5 years of DHP deployment, the current study evaluates the temporal trends of maternal health and adverse pregnancy outcomes following policy change.
Study site
Until November 2008 the Rumah Sakit Mitra Masyarakat (RSMM) was the only hospital in the Mimika district, serving a population of approximately 200,000. This lowland region is largely forested, with a climate that varies little throughout the year. Rainfall in the lowlands is approximately 5,000 mm/year and varied little during the study period. Mean lowland temperatures are between 22 and 32°C. The vector control programme in this region covers approximately 40% of the population.
The area has unstable malaria transmission, with an estimated annual incidence of 876 per 1,000 person-years, divided 60:40 between P. falciparum and P. vivax infections [7]. Treatment failures in this area are extremely high, with 65% of patients having recurrent infection by day 28 after CQ monotherapy for P. vivax and 48% by day 42 after CQ plus SP for P. falciparum [2].
Each year there are approximately 3,000 pregnant women in Timika, of whom less than 40% attend an antenatal care clinic [8]. Household surveys suggest that half of these women deliver at the RSMM hospital and that the same proportion sleep under insecticide-treated nets (ITN) (Mr. Usman, District Health Malaria Officer, personal communication, 2009). Intermittent presumptive treatment (IPT) for malaria is not available and HIV testing was not routinely carried out during the study period.
The ethnicity of the population includes lowland and highland Papuans, as well as non-Papuan ethnic groups attracted to the region by the local mine. In view of the high number of infections in non-immune patients, local protocols recommend that patients with parasitaemia detected by blood film examination irrespective of symptoms should be treated with anti-malarial drugs; this policy extends to pregnant women.
Data were derived from a hospital-based malaria surveillance programme carried out between April 2004 and May 2010. Following informed consent all pregnant women and newborn infants admitted to the maternity ward at RSMM hospital were enrolled into the study. The attending physician or research clinician prospectively reviewed all admissions. A systematic interview and review of the clinical notes was used to obtain data on maternal characteristics, clinical condition (fever and signs of severity), history of possible malaria during the current pregnancy and its treatment, malaria treatment during hospitalization, and birth outcome (Additional file 1). All newborn infants were weighed and a full physical examination was conducted by a trained research nurse. Gestational age was estimated using the New Ballard Score [9].
Maternal thick and thin blood films were obtained to assess peripheral parasitaemia. Parasite counts were determined from the number of parasites per 200 white blood cells (WBC) on Giemsa-stained thick films and considered negative after review of 200 high-power fields. A thin smear was also examined to confirm parasite species and quantify parasitaemia if greater than 200 parasites were visible per 200 WBC. Haemoglobin concentration was determined by electronic coulter counter (Coulter JT™, USA). Malaria was defined as the presence of peripheral asexual parasitaemia irrespective of clinical signs or species. Maternal anaemia was defined as severe if the haemoglobin was less than 7 g/dl [10]. Neonatal adverse outcomes (low birth weight, preterm delivery and perinatal deaths) were defined according to WHO criteria [11]. On admission to hospital, fever was diagnosed if women gave a history of fever within the preceding 24 h or had an axillary temperature greater than 37.5°C.
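For illustration, a thick-film count expressed per 200 WBC can be converted into a parasite density per microlitre. Note that the paper does not state its conversion factor; the helper below rests on the common convention of assuming 8,000 WBC/µL, which is an assumption, not a study value.

```python
# Hedged helper: thick-film count (parasites per 200 WBC) -> density per uL.
# The assumed white-cell concentration is a common convention, not from the paper.
ASSUMED_WBC_PER_UL = 8000  # assumption

def parasite_density(parasites_per_200_wbc):
    """Parasites per microlitre, given a count against 200 WBC."""
    return parasites_per_200_wbc * ASSUMED_WBC_PER_UL / 200

# e.g. 50 parasites counted against 200 WBC corresponds to 2,000 parasites/uL
density = parasite_density(50)
```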
Malaria was treated according to local protocols. Until March 2006 this was SP plus CQ for falciparum malaria and CQ alone for vivax malaria for the second and third trimesters of pregnancy. Quinine plus clindamycin was used for all species of malaria in the first trimester or when CQ and SP were not available. In March 2006 DHP became the recommended first-line treatment for uncomplicated malaria from any species of infection in the second and third trimester of pregnancy. Quinine and clindamycin were given in the first trimester. DHP (Artekin®, a fixed dose combination of 40 mg dihydroartemisinin and 320 mg piperaquine; Holley Pharmaceutical Co, PRC) was given according to the body weight with a target dose of 2–4 mg/kg of dihydroartemisinin and 16–32 mg/kg of piperaquine, once a day for 3 days. Following the publication of a multicentre severe malaria treatment trial in 2005, which included patients recruited from Timika, intravenous artesunate became the first-line treatment for severe malaria in pregnancy [12].
Data were entered into EpiData 3.02 software (EpiData Association, Odense, Denmark) and statistical analysis was done using SPSS version 17.0 (SPSS Inc, Chicago, USA). Categorical data were compared by χ2 with Yates' correction or by Fisher's exact test. To control for potential temporal changes in the patient demographics, the relative risk (RR) of adverse pregnancy outcomes was derived for women with a history of malaria treatment compared to those without prior malaria during the same period. The period of analysis was divided into six blocks of 12 months starting from April 2004 to May 2010, with March 2006 as the time when treatment policy changed.
The proportion of women with malaria during pregnancy and its associated RR were used to calculate the population attributable risk (PAR) of prior malaria during pregnancy for each adverse outcome before and after policy change by using the following formula:
$$\text{PAR} = \frac{\text{proportion of population exposed} \times (\text{RR} - 1)}{1 + \left[\text{proportion of population exposed} \times (\text{RR} - 1)\right]}$$
The ratio of two relative risks (RRR) and its significance were calculated according to the method of Altman and Bland [13]. The 95% confidence interval for PAR was calculated according to the delta method [14].
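As a rough illustration (the function name and sample values below are not from the paper), the PAR formula above can be computed directly:

```python
def population_attributable_risk(p_exposed, rr):
    """Population attributable risk (PAR) for a binary exposure.

    p_exposed -- proportion of the population exposed (here, women with a
                 history of malaria treatment during pregnancy)
    rr        -- relative risk of the adverse outcome in the exposed group
    """
    excess = p_exposed * (rr - 1.0)
    return excess / (1.0 + excess)

# Illustrative values: 20% exposed and an RR of 3 give PAR = 0.4 / 1.4, about 0.29.
print(round(population_attributable_risk(0.2, 3.0), 2))
```

With roughly 16% of delivering women exposed and the pre-change RR of 4.6 for malaria at delivery reported in the Results, this formula yields a PAR in the mid-30% range, of the same order as the 34% reported.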
Previous studies have demonstrated that pregnant women with a history of malaria in this region are at significantly greater risk of having malaria at delivery and other adverse pregnancy outcomes compared to women without any history of malaria [1]. The impact of anti-malarial policy change will be most apparent in women treated for malaria during pregnancy, with the magnitude of this impact dependent upon the proportion of women with malaria receiving the new treatment regimen. Therefore, overall temporal trends of the adverse outcomes were presented after stratifying by prior malaria illness.
In order to control for confounding factors in measuring the impact of the malaria treatment policy change on maternal and pregnancy outcomes, this study examined the seasonal variation in Mimika as well as the annual proportion of primigravidae and pregnant women aged ≤16 years old [1].
Ethical approval for this study was obtained from the ethics committees of the National Institute of Health Research and Development, Ministry of Health, Indonesia (KS.02.01.2.1.3431) and Menzies School of Health Research, Darwin, Australia (04/47).
In total, 7,744 pregnant women were enrolled in the study. Of the 6,556 (85%) women with known pregnancy outcomes, 286 were excluded (283 with miscarriage and three women with missing data). Of the remaining 6,270 pregnant women, 1,018 (16%) had a history of malaria treatment during the current pregnancy (Figure 1).
Study and analysis profile.
Over the duration of the study, the proportion of women delivering with a history of malaria and malaria treatment during pregnancy increased initially from 10% (94/931) in 2004 to 21% (218/1,057) in year 4 (April 2007–March 2008), p < 0.001, before decreasing to 14.6% (189/1,288) in year 6 (Figure 2). In women with a history of malaria treatment, the proportion receiving DHP rose from zero prior to policy change to 64% by the end of the study (Figure 2). The RR and the associated PAR of the adverse outcomes stratified by history of antenatal malaria treatment prior to and following treatment policy change are presented in Table 1.
Prior anti-malarial treatment during pregnancy in women delivering. Numbers above the pie charts are proportion of pregnant women with history of malaria treatment in that year.
Table 1 Adverse maternal and pregnancy outcomes before and after treatment policy change
There was no difference in the proportion of pregnant women who were primigravidae or aged ≤16 years during the observation period (Figure 3). Women with a history of malaria treatment were more likely to be younger (age ≤16 years) and first-time mothers compared to those without a history of malaria illness: odds ratios of 1.5 (95% CI 1.02–2.2, p = 0.039) and 1.2 (95% CI 1.1–1.4, p = 0.008), respectively.
The yearly proportion of primigravidae (red line) and pregnant women age ≤16 years old (black line). The pie chart above the figures denote the proportions of women with prior anti-malarial exposure: Quinine: red, DHP: light green, CQ ± SP: light blue, AAQ: purple, others: dark blue.
Maternal adverse outcomes
There was a significant reduction in the prevalence of malaria at delivery following the introduction of DHP. In women with a history of prior malaria treatment, the proportion with malaria at delivery declined from 57% (142/248) before DHP deployment to 26% (201/770) after policy change, p < 0.001 (Figure 4).
Maternal adverse outcomes in women with history of malaria treatment during pregnancy (blue line) and in those without history of malaria treatment (orange line). The pie chart above the figures denote the proportions of women with prior anti-malarial exposure: Quinine: red, DHP: light green, CQ ± SP: light blue, AAQ: purple, Others: dark blue. Numbers above the lines are RR (95% CI). Black arrow marks the start of policy change.
A similar pattern was noted in the prevalence of severe maternal anaemia (Figure 4), with the risk of severe anaemia declining steadily over the 5 years of the study in both women with and without a history of malaria treatment. The proportion of women with severe anaemia declined from 10.5% (192/1,829) before DHP deployment to 7.2% (307/4,248) after policy change, p < 0.001 (Figure 4). The RR associated with a history of malaria during pregnancy changed from 1.7 (95% CI 1.3–2.4) prior to policy change to 1.4 (95% CI 1.1–1.8) after policy change [RRR: 2.6 (95% CI 2.1–3.3), p < 0.001]. This equated to a change in PAR from 9% (95% CI 4–13%) pre- to 7% (95% CI 3–10%) post-policy change.
Adverse outcomes of the newborn
The proportion of low birth weight (LBW) babies born of mothers without a history of malaria was 14% (736/5,223) and did not change significantly over the study period. However in mothers with prior malaria treatment the prevalence of LBW fell significantly from 20% (50/244) in the first 2 years of the study, to 12% (23/188) in year 6, p = 0.03 (Figure 5). Prior to DHP deployment, the risk of LBW in women with a history of malaria treatment was 1.5 (95% CI 1.2–2.0) compared to 1.1 (95% CI 1–1.3) after policy change [RRR: 1.4 (95% CI 1–1.9), p = 0.05]. The PAR of LBW associated with prior malaria treatment decreased from 6.6% (95% CI 3–10%) to 2% (95% CI 0.1–4%) following treatment policy change.
Adverse pregnancy outcomes in pregnant women with history of malaria treatment (blue line) and in those without history of malaria treatment (orange line). The pie chart above the figures denote the proportions of women with prior anti-malarial exposure: Quinine: red, DHP: light green, CQ ± SP: light blue, AAQ: purple, Others: dark blue. Numbers above the lines are RR (95% CI). Black arrow marks the start of policy change.
The prevalence of preterm delivery rose significantly prior to DHP deployment, but this was followed by a decline after year 3 (Figure 5). Although these trends were apparent in both women with and without a history of malaria, the change was greater in women with a history of malaria. The RR of prematurity associated with a prior history of malaria fell from 1.6 (95% CI 1.1–2.4) to 1.2 (95% CI 1–1.5) after policy change [RRR: 1.3 (95% CI 0.9–2.1), p = 0.199], the PAR decreasing from 7.8% (95% CI 3–13%) to 3.8% (95% CI 2–6%).
Over the study period perinatal death was recorded in 250 (4%) of the 6,273 women delivering. There was no significant change in the risk of perinatal death associated with prior anti-malarial treatment; overall approximately 4.3% (44/1,018) in women with history of malaria treatment versus 4% (206/5,252) in those women without prior anti-malarial exposure, p = 0.54.
After March 2006, the relative risk of maternal to foetal malaria transmission associated with history of malaria treatment declined from 3.8 (95% CI 1.8–8.2) to 0.8 (95% CI 0.1–6.6), p = 0.17.
The local Ministry of Health revised the anti-malarial policy in Timika in March 2006, and since this time the public health care sector has knowingly treated more than 2,900 pregnant women with DHP. At the sentinel site at the RSMM the clinical research team have documented DHP administration to 765 women, and recorded deliveries in 847 women known to have been exposed to DHP during pregnancy [3]. In the current analysis this surveillance network is used to evaluate the impact of the policy change on adverse pregnancy and foetal outcomes at the hospital maternity ward.
This study documented an initial rise in the number of women delivering with history of malaria and malaria treatment during pregnancy from 11% (79/693) in 2004 to 20% (208/1,026) in 2007. During this period community and entomology unpublished studies conducted in parallel also demonstrated a paradoxical transient increase in the number of malaria cases in the general population. This increase in malaria coincided with an El Nino year, with a rise in vector numbers and malaria cases noted across Indonesia in the national statistics. Unpublished evidence also suggests additional effects from a shift in treatment-seeking behaviour resulting in improved surveillance following DHP introduction. Despite the increased number of pregnant women presenting with malaria after the introduction of DHP, no corresponding increased risk in adverse outcomes was observed.
The interpretation of temporal trends of malaria-related morbidity is complex. Treatment, patient demographics and health-seeking behaviour of women presenting to the antenatal clinics vary over time. The preliminary analyses of the temporal data in this study were based on simple before-and-after comparisons and did not take into account the trajectory of trends nor the likely autocorrelation of the data. For these reasons, interpretation of the study findings requires caution.
In women with antenatal malaria, the introduction of DHP was correlated with a 50% reduction in the risk of peripheral parasitaemia at delivery, from 57 to 26%. The RR of malaria at delivery associated with prior treatment was 4.6 (95% CI 3.9–5.5) before policy change and 1.9 (95% CI 1.6–2.2) after DHP deployment, p < 0.001. The PAR of malaria at delivery was 34% (95% CI 29–40%) before March 2006 and 13% (95% CI 10–16%) thereafter, suggesting that 21% of maternal malaria at delivery could be prevented by the treatment policy change. The reduction in maternal malaria prevalence associated with policy change was similar between primigravidae and multigravidae.
The fall in the proportion of women with peripheral parasitaemia at delivery was associated with a dramatic reduction in the risk of congenital malaria from 3 to 0.2% [15]. Indeed, there have been no more congenital cases of malaria since October 2008.
There was a significant reduction in the RR of LBW and a non-significant fall in severe maternal anaemia associated with antenatal malaria, the corresponding PARs falling from 7 to 2% and from 9 to 7%, respectively. LBW and severe maternal anaemia are associated with frequent malaria episodes and the timing of infections [16], and it is likely that the reduction in these outcomes is due, at least in part, to a reduction in the risk of recurrent malaria. The effect, however, was modest and may reflect the limited access of mothers in this region to effective anti-malarial treatment. Inadequate quality of antenatal care would also compromise the pregnancy and health outcomes of pregnant women with relapsing P. vivax infections, which account for a third of maternal malaria in this area [1]. The introduction of DHP was associated with a slight reduction in the risk of preterm delivery; however, an unexplained increase in the proportion of this adverse outcome before policy change confounds the interpretation of this observation.
This study has several limitations. Firstly, it was not possible to evaluate the overall programme impact in the community since this study used hospital-based surveillance. Although 76% of women came to the hospital for delivery, selection bias and treatment-seeking behaviour remain potential confounders of the analysis. However, this was controlled for by comparing the RR of adverse outcomes in women with and without a history of malaria treatment.
Secondly, a history of symptomatic malaria and anti-malarial treatment during pregnancy was obtained by interview, a method that is subject to recall bias. However, pregnant women in this region are very familiar with malaria symptoms and alternative anti-malarial drugs. A systematic interview method and review of the clinical notes allowed the history of malaria treatment to be cross-checked in almost 50% of women, with excellent concordance between the reported drug history and that documented in the records. Unfortunately, the interview method did not gather information on possible asymptomatic and untreated malaria in pregnant women, and this is likely to have underestimated the effectiveness of the treatment policy change.
The study highlights the potential benefits of improving maternal and pregnancy outcomes in women through the introduction of more efficacious anti-malarial treatment regimens. However, the wider impact on health targets will require improvements in case management and prevention programmes of malaria in pregnancy as well as implementation of strategies to ensure the delivery of universal coverage [17, 18]. In Papua, even though malaria control programmes in pregnancy focus on providing early diagnosis and prompt treatment, in reality this capacity is available in less than half of the local health facilities with limited public health impact [7]. The challenges in the prevention of malaria are similar, highlighted by an ITN distribution programme to pregnant women in Timika district, initiated in 2007, which has achieved only 50% coverage.
The deployment of DHP for malaria in the second and third trimesters of pregnancy was associated with a significant decrease in adverse maternal and infant outcomes. This study paves the way for scaling up both treatment and prevention programmes using artemisinin combination therapy. The latter will require identifying the most effective method of service delivery and translating research into practice (Additional file 1).
Poespoprodjo JR, Fobia W, Kenangalem E, Lampah DA, Warikar N, Seal A et al (2008) Adverse pregnancy outcomes in an area where multidrug-resistant Plasmodium vivax and Plasmodium falciparum infections are endemic. Clin Infect Dis 46:1374–1381
Ratcliff A, Siswantoro H, Kenangalem E, Wuwung M, Brockman A, Edstein MD et al (2007) Therapeutic response of multidrug-resistant Plasmodium falciparum and P. vivax to chloroquine and sulfadoxine-pyrimethamine in southern Papua, Indonesia. Trans R Soc Trop Med Hyg 101:351–359
Poespoprodjo JR, Fobia W, Kenangalem E, Lampah DA, Sugiarto P, Tjitra E et al (2014) Dihydroartemisinin–piperaquine treatment of multidrug resistant falciparum and vivax malaria in pregnancy. PLoS One 9:e84976
Nosten F, ter Kuile F, Maelankirri L, Decludt B, White NJ (1991) Malaria during pregnancy in an area of unstable endemicity. Trans R Soc Trop Med Hyg 85:424–429
Desai M, ter Kuile FO, Nosten F, McGready R, Asamoa K, Brabin B et al (2007) Epidemiology and burden of malaria in pregnancy. Lancet Infect Dis 7:93–104
Ratcliff A, Siswantoro H, Kenangalem E, Maristela R, Wuwung RM, Laihad F et al (2007) Two fixed-dose artemisinin combinations for drug-resistant falciparum and vivax malaria in Papua, Indonesia: an open-label randomised comparison. Lancet 369:757–765
Karyana M, Burdarm L, Yeung S, Kenangalem E, Wariker N, Maristela R et al (2008) Malaria morbidity in Papua Indonesia, an area with multidrug resistant Plasmodium vivax and Plasmodium falciparum. Malar J 7:148
Mimika DHO (2005) Mimika District Health Office Annual Statistics
Ballard JL, Khoury JC, Wedig K, Wang L, Eilers-Walsman BL, Lipp R (1991) New Ballard Score, expanded to include extremely premature infants. J Pediatr 119:417–423
Shulman CE, Graham WJ, Jilo H, Lowe BS, New L, Obiero J et al (1996) Malaria is an important cause of anaemia in primigravidae: evidence from a district hospital in coastal Kenya. Trans R Soc Trop Med Hyg 90:535–539
Catalogue of health indicators: a selection of important health indicators recommended by WHO programmes (http://www.who.int/hac/techguidance/tools/en/SelectedHealthIndicators.pdf). Accessed 3 March 2006
Dondorp A, Nosten F, Stepniewska K, Day N, White N, Group S (2005) Artesunate versus quinine for treatment of severe falciparum malaria: a randomised trial. Lancet 366:717–725
Altman DG, Bland JM (2003) Interaction revisited: the difference between two estimates. BMJ 326:219
Hildebrandt M, Bender R, Gehrmann U, Blettner M (2006) Calculating confidence intervals for impact numbers. BMC Med Res Methodol 6:32
Poespoprodjo JR, Fobia W, Kenangalem E, Hasanuddin A, Sugiarto P, Tjitra E et al (2011) Highly effective therapy for maternal malaria associated with a lower risk of vertical transmission. J Infect Dis 204:1613–1619
Kalilani L, Mofolo I, Chaponda M, Rogerson SJ, Meshnick SR (2010) The effect of timing and frequency of Plasmodium falciparum infection during pregnancy on the risk of low birth weight and maternal anemia. Trans R Soc Trop Med Hyg 104:416–422
WHO/AFRO (2004) A strategic framework for malaria prevention and control during pregnancy in the African region. World Health Organization Regional Office for Africa
Brabin BJ, Warsame M, Uddenfeldt-Wort U, Dellicour S, Hill J, Gies S (2008) Monitoring and evaluation of malaria in pregnancy—developing a rational basis for control. Malar J. 7(Suppl 1):S6
JRP, RNP and ET conceived the study; JRP wrote the first draft of the manuscript; JRP and RNP analysed the data; RNP, NMA, WF, DAL, and PS reviewed the manuscript. All authors read and approved the final manuscript.
We are grateful to Mimika District Government, Lembaga Pengembangan Masyarakat Amungme Kamoro, the staff of Timika research facility, Dr Maurits J Okoseray and Pak Erens Meokbun, Heads of the District Health Office and Professor Yati Soenarto from Department of Child Health, Faculty of Medicine Gadjah Mada University. We also thank Dr Rose McGready for her support and advice in carrying out the study and analysis. The study was funded by the Wellcome Trust through a Senior Fellowship in Clinical Science awarded to RNP— 091625, Training Fellowship in Tropical Medicine awarded to JRP—099875 and National Health and Medical Research Council of Australia (Practitioner Fellowship to NMA, 1042072). The Timika Research Facility and Papuan Health and Community Development Foundation were supported by AusAID (Australian Agency for International Development, Department of Foreign Affairs and Trade) and the National Health and Medical Research Council of Australia (Program Grant 1037304).
Compliance with ethical guidelines
Competing interests The authors declare that they have no competing interests.
Mimika District Health Authority, District Government Building, Jl. Cendrawasih, Timika, 99910, Papua, Indonesia
Jeanne Rini Poespoprodjo & Enny Kenangalem
Timika Malaria Research Programme, Papuan Health and Community Development Foundation, Jl. SP2-SP5, RSMM Area, Timika, 99910, Papua, Indonesia
Jeanne Rini Poespoprodjo, Wendelina Fobia, Enny Kenangalem & Daniel A Lampah
Department of Child Health, Faculty of Medicine, University Gadjah Mada, Jl. Kesehatan no 1, Sekip, Yogyakarta, 55284, Indonesia
Jeanne Rini Poespoprodjo
Mitra Masyarakat Hospital, Jl. SP2-SP5-Charitas, Timika, 99910, Indonesia
Paulus Sugiarto
National Institute of Health Research and Development, Ministry of Health, Jl. Percetakan Negara, Jakarta, 10560, Indonesia
Emiliana Tjitra
Global and Tropical Health Division, Menzies School of Health Research, Charles Darwin University, Darwin, PO Box 41096, Casuarina, NT, 0811, Australia
Nicholas M Anstey & Richard N Price
Division of Medicine, Royal Darwin Hospital, Darwin, NT, 0810, Australia
Nicholas M Anstey
Nuffield Department of Clinical Medicine, Centre for Tropical Medicine and Global Health, University of Oxford, Oxford, OX37LJ, UK
Richard N Price
Correspondence to Jeanne Rini Poespoprodjo.
Pregnant women form.
Poespoprodjo, J.R., Fobia, W., Kenangalem, E. et al. Treatment policy change to dihydroartemisinin–piperaquine contributes to the reduction of adverse maternal and pregnancy outcomes. Malar J 14, 272 (2015). https://doi.org/10.1186/s12936-015-0794-0
Dihydroartemisinin–piperaquine
Maternal malaria
Pregnancy outcome | CommonCrawl |
\begin{document}
\begin{abstract} Theories of classification distinguish classes with some good structure theorem from those for which none is possible. Some classes (dense linear orders, for instance) are non-classifiable in general, but are classifiable when we consider only countable members. This paper explores such a notion for classes of computable structures by working out a sequence of examples.
We follow recent work by Goncharov and Knight in using the degree of the isomorphism problem for a class to distinguish classifiable classes from non-classifiable. In this paper, we calculate the degree of the isomorphism problem for Abelian $p$-groups of bounded Ulm length. The result is a sequence of classes whose isomorphism problems are cofinal in the hyperarithmetical hierarchy. In the process, new back-and-forth relations on such groups are calculated. \end{abstract}
\maketitle
\section{Introduction}
In an earlier paper \cite{ecl1}, we began to consider a notion of ``classification" for classes of computable structures. For some classes, there is a ``classification," or ``structure theorem" of some kind. For instance, the classification of algebraically closed fields states that a single cardinal (the transcendence degree) completely determines the structure up to isomorphism. For other classes (graphs, for example, or arbitrary groups) such a result would be surprising, and when we introduce the necessary rigor we can prove that there is none to be found. They simply have more diversity than any structure theorem could describe.
We assume all structures have for a universe some computable subset of $\omega$ and identify a structure with its atomic diagram. Thus, for instance, a structure is computable if and only if its atomic diagram is computable, as a set of G\"{o}del numbers of sentences. Alternatively, we could use the quantifier-free diagram instead of the atomic diagram. Similarly, a structure is associated with the index of a Turing machine which enumerates its atomic diagram (assuming its universe is computable). In this paper, we will write $\mathcal{A}_a$ for the computable structure with atomic diagram $W_a$ and will always assume that a class $K$ of structures has only computable members. The following definition was recently proposed by Goncharov and Knight \cite{gk}.
\begin{dfn}The isomorphism problem, denoted $E(K)$, is the set \[\{ (a,b) | \mathcal{A}_a , \mathcal{A}_b \in K\mbox{, and }\mathcal{A}_a \simeq \mathcal{A}_b\}\]\end{dfn}
If the set of indices for computable members of $K$, denoted $I(K)$, is hyperarithmetical, then $E(K)$ is $\Sigma^1_1$. Intuitively, in the worst case, where $E(K)$ is properly $\Sigma^1_1$, the easiest way to say that two members of $K$ are isomorphic is to say, ``There exists a function which is an isomorphism between them." Often there are easier ways to check isomorphism, such as counting basis elements of vector spaces. Such a ``shortcut" is a classification. There is also a natural ``floor" to the complexity of $E(K)$, since to say that $a \in I(K)$ requires saying that $a$ is an index for some structure, which is already $\Pi^0_2$.
This notion is closely related to work in descriptive set theory, originating in the work of Friedman and Stanley \cite{fs}. In that context, the set of countable models of a theory is viewed as a topological space, and we would calculate the topological complexity of the isomorphism relation as a subset of the Cartesian product of two copies of the space (for a more complete description of the topological situation, see \cite{hjbook}). Many of the proofs that a class has maximal complexity, like the Friedman -- Stanley proof of the Borel completeness of fields \cite{fs}, require only minor modification.
Several classes are well-known to have maximally complicated isomorphism problems. The following theorem summarizes several classical results. Proofs may be found in articles by Rabin and Scott \cite{rabsco}, Goncharov and Knight \cite{gk}, Morozov \cite{moro}, and Nies \cite{nies}. \begin{thm} \label{cla} If $K$ is the set of computable members of any of the following classes, then $E(K)$ is $\Sigma^1_1$ complete: \begin{enumerate} \item Undirected graphs \item Linear orders \item Trees \item Boolean algebras \item Abelian $p$-groups \end{enumerate} \end{thm}
The following additions to the list follow easily from recent work by Hirschfeldt, Khoussainov, Shore, and Slinko \cite{hkss}. \begin{thm} [Hirschfeldt -- Khoussainov -- Shore -- Slinko] \label{hksst}If $K$ is the set of computable members of any of the following classes, then $E(K)$ is $\Sigma^1_1$ complete: \begin{enumerate} \item Rings \item Distributive lattices \item Nilpotent groups \item Semigroups \end{enumerate} \end{thm}
In an earlier paper \cite{ecl1}, the following were added: \begin{thm} \label{ecf} \rule{0in}{0in} \newline
\begin{enumerate} \item If $K$ is the set of computable members of any of the following classes, then $E(K)$ is $\Sigma^1_1$ complete:
\begin{enumerate}
\item Fields of any fixed characteristic
\item Real Closed Fields
\end{enumerate} \item If $K$ is the set of computable members of any of the following classes, then $E(K)$ is $\Pi^0_3$ complete:
\begin{enumerate}
\item Vector spaces over a fixed computable field
\item Algebraically closed fields of fixed characteristic
\item Archimedean real closed fields
\end{enumerate} \end{enumerate} \end{thm}
In this paper, the complexity of the isomorphism problem will be calculated for other classes. Two major goals which are partially achieved here are the answers to the following questions: \begin{Q} What are the possible complexities of the isomorphism problem for classes of structures? \end{Q} \begin{Q} Do classes with high complexity acquire it all at once?\end{Q} Considering Abelian $p$-groups of bounded Ulm length will give us a sequence of isomorphism problems whose degrees are cofinal in the hyperarithmetical degrees. In some sense, this also shows a smooth transition from very low complexity (say, $\Pi^0_3$ complete) to the ``non-classifiable" (that is, properly $\Sigma^1_1$).
\section{Notation and Terminology for Abelian $p$-groups}
Let $p$ be an arbitrary prime number. Abelian $p$-groups are Abelian groups in which each element has some power of $p$ for its order. We will consider only countable Abelian $p$-groups. These groups are of particular interest because of their classification up to isomorphism by Ulm. For a classical discussion of this theorem and a more detailed discussion of this class of groups, consult Kaplansky's book \cite{kaplansky}. Generally, notation here will be similar to Kaplansky's.
It is often helpful to follow L.\ Rogers \cite{lrogers} in representing these groups by trees. Consider a tree $T$. The Abelian $p$-group $G(T)$ is the group generated by the nodes in $T$ (among which the root is $0$), subject to the relations stating that the group is Abelian and that $px$ is the predecessor of $x$ in the tree. Reduced Abelian $p$-groups, from this perspective, are represented by trees with no infinite paths.
The idea of Ulm's theorem is that it generalizes the notion that to determine a finitely generated torsion Abelian group it is only necessary to determine how many cyclic components of each order are included in a direct sum decomposition. Let $G$ be an Abelian $p$-group. We will produce an ordinal sequence (usually transfinite) of cardinals $u_{\beta}(G)$ (each at most countable), which is constant after some ordinal (called the ``length" of $G$). If $H$ is also an Abelian $p$-group and for all $\beta$ we have $u_{\beta} (G) = u_{\beta} (H)$, then $H \simeq G$ (this is still subject to another condition we have yet to define).
First set $G_0 = G$. Now we inductively define $G_{\beta + 1} = pG_{\beta} = \{ px | x \in G_{\beta}\}$, where $px$ denotes the sum of $x$ with itself $p$ times. We also define, for limit $\beta$, the subgroup $G_{\beta} = \bigcap\limits_{\gamma < \beta} G_{\gamma}$. Further, let $P(G)$ denote the subgroup of elements $x$ for which $px = 0$, and let $P_{\beta} (G) = P \cap G_{\beta}$. Now the quotient $P_{\beta} (G)/ P_{\beta + 1}(G)$ is a $\mathbb{Z}_p$ vector space, and we call its dimension $u_{\beta} (G)$. Where no confusion is likely, we will omit the argument $G$ and simply write $P_{\beta}$, and so forth.
For any Abelian $p$-group $G$, there will be some least ordinal $\lambda (G)$ such that $G_{\lambda (G)} = G_{\lambda (G) + 1}$. This is called the \textit{length} of $G$. If $G_{\lambda (G)} = \{0\}$, then we say that $G$ is \textit{reduced}. Equivalently, $G$ is reduced if and only if it has no divisible subgroup. The \textit{height} of an element $x$ is the unique $\beta$ such that $x \in G_{\beta}$, but $x \notin G_{\beta + 1}$. It is conventional to write $h(0) = \infty$, where $\infty$ is greater than any ordinal. Similarly, if our group contains a divisible element $x$, we write $h(x) = \infty$. In the course of this paper, we will only consider reduced groups. When $G$ is a direct sum of cyclic groups, $u_n (G)$ is exactly equal to the number of direct summands of order $p^{n+1}$. We can now state Ulm's theorem, but we will not prove it here.
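As an illustrative computation (not from the original text), the Ulm invariants of a small direct sum of cyclic groups can be read off from the definitions above:

```latex
\[
G = \mathbb{Z}_{p} \oplus \mathbb{Z}_{p^{3}}: \qquad
G_{1} = pG \simeq \mathbb{Z}_{p^{2}}, \quad
G_{2} = p^{2}G \simeq \mathbb{Z}_{p}, \quad
G_{3} = \{0\},
\]
\[
\dim P_{0}/P_{1} = 1, \qquad \dim P_{1}/P_{2} = 0, \qquad \dim P_{2}/P_{3} = 1,
\]
\[
\mbox{so } u_{0}(G) = 1, \quad u_{1}(G) = 0, \quad u_{2}(G) = 1.
\]
```

This agrees with the remark above: $u_{n}(G)$ counts the cyclic summands of order $p^{n+1}$, and here the summands have orders $p^{1}$ and $p^{3}$.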
\begin{thm} [Ulm] Let $G$ and $H$ be reduced countable Abelian $p$-groups. Then $G \simeq H$ if and only if for every countable ordinal $\beta$ we have $u_{\beta}(G)~=~u_{\beta}(H)$. \end{thm}
It is interesting to note that this theorem is not ``recursively true." Lin showed that two computable groups satisfying the hypotheses of this theorem and having identical Ulm invariants need not be computably isomorphic \cite{clin}. However, it is known that (depending heavily on the particular statement of the theorem) Ulm's theorem is equivalent to the formal system $\mbox{ATR}_0$ \cite{frsimsm, simpson}. Related work from a constructivist perspective may be found in a paper by Richman \cite{richman}.
A calculation of the complexity of the isomorphism problem for special classes of computable reduced Abelian $p$-groups is essentially a computation of the complexity of checking the equality of Ulm invariants. Given some computable ordinal $\alpha$, we will consider the class of reduced Abelian $p$-groups of length at most $\alpha$.
\section{Bounds on Isomorphism Problems} When we begin to consider special classes of Abelian $p$-groups from the perspective described in section 1, it quickly becomes apparent that all examples in Theorems \ref{cla}, \ref{hksst}, and \ref{ecf} were especially nice ones. In all of these cases, $I(K)$ was $\Pi^0_2$ and $E(K)$ was something worse. It is easy to see that $I(K) \leq_T E(K)$, since
$$I(K) = \{a | (a, a) \in E(K)\}$$
For instance, if $K$ is the class of reduced Abelian $p$-groups of length at most $\omega$, $I(K)$ is $\Pi^0_3$ complete. Then to show that $E(K)$ is $\Pi^0_3$ complete, it is enough to show that $E(K)$ is $\Pi^0_3$, and this is not difficult (the reader interested in the details of this may wish to glance ahead to Proposition \ref{komega}).
However, this does not tell us whether $E(K)$ has high complexity ``on its own,'' or only by virtue of its being hard to tell whether we have something in $K$. In a talk in Almaty in the summer of 2002, J.\ Knight proposed the following definition to clear up the distinction: \begin{dfn} Suppose $A \subseteq B$. Let $\Gamma$ be some complexity class (e.g.\ $\Pi^0_3$), and $K$ a class of computable structures. Then $A$ is $\Gamma$ \emph{within} $B$ if and only if there is some $R \in \Gamma$ such that $A = R \cap B$. \end{dfn}
In the example above, saying that $E(K)$ is $\Pi^0_3$ within $I(K) \times I(K)$ means that there is a $\Pi^0_3$ relation $R(a,b)$ such that if $a$ and $b$ are indices for computable reduced Abelian $p$-groups, then $R(a,b)$ defines the relation ``$\mathcal{A}_a$ has the same Ulm invariants as $\mathcal{A}_b$." In general, it is possible that $A$ is not $\Gamma$ but that $A$ is $\Gamma$ \textit{within} $B$. Consider for instance the case of a theory which is $\aleph_0$-categorical. If $K$ is the class of models of such a theory, then $E(K)$ is not computable, but $E(K)$ is computable within $I(K) \times I(K)$.
We can also define a reducibility ``within $B$", which will, in turn, give us a notion of completeness.
\begin{dfn} Let $A, B$, and $\Gamma$ be as in the previous definition. \begin{enumerate} \item $S \leq_m A$ \emph{within} $B$ if there is a computable $f: \omega \to B$ such that for all $n$, $n \in S \iff f(n) \in A$. \item $A$ is $\Gamma$ \emph{complete within} $B$ if $A$ is $\Gamma$ within $B$ and for any $S \in \Gamma$ we have $S \leq_m A$ within $B$. \end{enumerate} \end{dfn}
Essentially, this definition says that $A$ is $\Gamma$ complete within $B$ if it is $\Gamma$ within $B$ and there is a function witnessing that it is $\Gamma$ complete which only calls for questions about things in $B$. In fact, the questions are only about members of a c.e.\ subset of $B$. We will usually write ``within $K$" for ``within $I(K) \times I(K)$." All results stated in section 1 remain true when we add ``within $K$" to their statements, and the original proofs still work. In fact, this is intuitively the ``right" way to say that the structure of a class is complicated: we say that if we look at some members, it is difficult to tell whether they are isomorphic. It would be unconvincing to argue that the structure of a class is complicated simply because it is difficult to tell whether things are in the class or not.
For any computable ordinal $\alpha$, it is somewhat straightforward to write a computable infinitary sentence stating that $G$ is a reduced Abelian $p$-group of length at most $\alpha$ and that $G$ and $H$ have the same Ulm invariants up to $\alpha$. In particular, Barker \cite{barker} verified the following.
\begin{lem} Let $G$ be a computable Abelian $p$-group. \begin{enumerate} \item $G_{\omega \cdot \alpha}$ is $\Pi^0_{2 \alpha}$. \item $G_{\omega \cdot \alpha + m}$ is $\Sigma^0_{2 \alpha + 1}$. \item $P_{\omega \cdot \alpha}$ is $\Pi^0_{2 \alpha}$. \item $P_{\omega \cdot \alpha + m}$ is $\Sigma^0_{2 \alpha + 1}$. \end{enumerate} \end{lem}
\begin{proof} It is easy to see that 3 and 4 follow from 1 and 2 respectively. Toward 1 and 2, note the following: \begin{eqnarray*} x \in G_m & \iff & \exists y (p^m y = x)\\ x \in G_{\omega} & \iff & \bigwedge\limits_{m \in \omega} \hspace{-0.15in}\bigwedge \exists y (p^m y = x)\\ x \in G_{\omega \cdot \alpha + m} & \iff & \exists y [p^m y = x \wedge G_{\omega \cdot \alpha}(y)]\\ x \in G_{\omega \cdot \alpha + \omega} & \iff & \bigwedge\limits_{m \in \omega} \hspace{-0.15in}\bigwedge \exists y [p^m y = x \wedge G_{\omega \cdot \alpha}(y)]\\ x \in G_{\omega \cdot \alpha} & \iff & \bigwedge\limits_{\gamma < \alpha} \hspace{-0.15in}\bigwedge G_{\omega \cdot \gamma} (x) \mbox{ for limit $\alpha$}\\ \end{eqnarray*} \end{proof}
Work by Lin \cite{linjsl}, when viewed from our perspective, shows that for any $m \in \omega$, there is a group $G$ in which $G_m$ is $\Sigma^0_1$ complete. Given this lemma, we can place bounds on the complexity of $I(K)$ and $E(K)$.
\begin{lem} If $K_{\alpha}$ is the class of reduced Abelian $p$-groups of length at most $\alpha$, then $I(K_{\omega \cdot \beta + m})$ is $\Pi^0_{2 \beta + 1}$.\end{lem}
\begin{proof} The class $K_{\omega \cdot \beta + m}$ may be characterized by the axioms of an Abelian $p$-group (which are $\Pi^0_2$), together with the condition $$\forall x [x \in G_{\omega \cdot \beta +m} \rightarrow x = 0]$$ Since the previous lemma guarantees that this condition is $\Pi^0_{2 \beta +1}$, we know that $I(K_{\omega \cdot \beta +m})$ is also $\Pi^0_{2 \beta +1}$. \end{proof}
\begin{lem} \label{ebnd} If $K_\alpha$ is as in the previous lemma, we use $\hat{\alpha}$ to denote $\mbox{$\sup\limits_{\omega \cdot \gamma < \alpha} (2 \gamma + 3)$}$. Then $E(K_{\alpha})$ is $\Pi^0_{\hat{\alpha}}$ within $K$.\end{lem}
\begin{proof} Note that the relation ``there are at least $n$ elements of height $\beta$ which are $\mathbb{Z}_p$-independent over $G_{\beta + 1}$'' is defined in the following way. To say that $x_1, \dots, x_n$ are $\mathbb{Z}_p$-independent over $G_{\beta+1}$, we write the computable $\Pi^0_{2 \beta + 1}$ formula \[D_{n, \beta}(x_1, \dots, x_n) = \bigwedge\limits_{(b_1, \dots, b_n) \in \mathbb{Z}_p^n \setminus \{\overline{0}\}} (\sum\limits_{i = 1}^n b_i x_i \notin G_{\beta+1})\] Now to write ``there are at least $n$ independent elements of height $\beta$ and order $p$,'' we use the sentence \[B_{n, \beta} = \exists x_1, \dots, x_n [(\bigwedge\limits_{i = 1}^n G_\beta (x_i)) \wedge (\bigwedge\limits_{i=1}^n px_i = 0) \wedge D_{n, \beta} (\overline{x})]\] which is a computable $\Sigma^0_{2 \beta + 2}$ sentence. Now we can define isomorphism by \[\bigwedge\limits_{\rule{0.05in}{0in} n\in\omega \atop \rule{0.05in}{0in}\beta<\alpha}\hspace{-0.18in}\bigwedge \mathcal{A}_a \models B_{n,\beta} \Leftrightarrow \mathcal{A}_b \models B_{n,\beta}\] We write each $\beta < \alpha$ as $\beta = \omega \cdot \gamma + m$, where $m \in \omega$. If $\hat{\alpha}$ is as defined in the statement of the lemma, then this can be expressed by a computable $\Pi^0_{\hat{\alpha}}$ sentence. \end{proof}
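The device used in this proof, recovering an Ulm invariant from the number of independent order-$p$ elements of a given height, can be checked by brute force on small finite groups. The following Python sketch is ours and not part of the proof; for a finite group the ordinals involved are natural numbers, and $u_n$ is computed as $\dim P_n - \dim P_{n+1}$, where $P_n$ is the $\mathbb{F}_p$-vector space of order-$p$ elements of height at least $n$.

```python
from itertools import product

def ulm_by_counting(p, exponents, levels):
    """Recover u_n(G) for the finite group G, a direct sum of Z/p^(k_i),
    by counting: P_n = {x in G : p*x = 0 and height(x) >= n} is an
    F_p-vector space, and u_n(G) = dim P_n - dim P_(n+1)."""
    def height(x):
        # Largest m with x in p^m G; None stands for infinity (x = 0).
        hs = []
        for xi, k in zip(x, exponents):
            if xi == 0:
                continue
            v = 0
            while xi % p == 0:
                xi //= p
                v += 1
            hs.append(v)
        return min(hs) if hs else None

    def dim_P(n):
        # |P_n| is a power of p; its exponent is the F_p-dimension.
        size = sum(
            1
            for x in product(*(range(p ** k) for k in exponents))
            if all(xi * p % p ** k == 0 for xi, k in zip(x, exponents))
            and (height(x) is None or height(x) >= n)
        )
        d = 0
        while p ** d < size:
            d += 1
        return d

    return [dim_P(n) - dim_P(n + 1) for n in range(levels)]

# G = Z/2 + Z/8 + Z/8: one summand of order 2, two of order 8.
print(ulm_by_counting(2, [1, 3, 3], 4))  # [1, 0, 2, 0]
```

The output agrees with counting summands directly: one summand of order $2^1$ gives $u_0 = 1$, and two summands of order $2^3$ give $u_2 = 2$.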
\section{Completeness for Length $\omega \cdot m$}
\begin{prop} \label{komega} If $K_\omega$ is the class of computable Abelian $p$-groups of length at most $\omega$, then $E(K_{\omega})$ is $\Pi^0_3$ complete within $K_\omega$.\end{prop}
\begin{proof} We first observe that $E(K_\omega)$ is $\Pi^0_3$ within $K_\omega$, by applying the previous lemma. Now let $S$ be an arbitrary $\Pi^0_3$ set, say $n \in S$ if and only if $\forall e \exists y \forall z \overline{R}(n,e,y,z)$ for a computable relation $\overline{R}$. We can represent $S$ as the set defined by \[\forall e \exists^{<\infty} y \hspace{0.05in}R(n,e,y)\] where $\exists^{<\infty}$ is read ``there exist at most finitely many'' and $R$ is computable. Consider the Abelian $p$-group $G^{\omega}$ with Ulm sequence \[u_\alpha = \left\{\begin{array}{ll}
\omega & \mbox{if $\alpha < \omega$}\\
0 & \mbox{otherwise}\\ \end{array} \right. \]
We will build a uniformly computable sequence $H^n$ of reduced Abelian $p$-groups of length at most $\omega$ such that $H^n \simeq G^{\omega}$ if and only if $n \in S$. Let $G^{\omega,\infty}$ denote the direct sum of countably many copies of the smallest divisible Abelian $p$-group $\mathbb{Z}(p^\infty)$, and note that $G^{\omega, \infty}$ has a computable copy, as a direct sum of copies of a subgroup of $\mathbb{Q} / \mathbb{Z}$. We will denote the element where $x$ occurs in the $i$th place with zeros elsewhere by $(x)_i$. For instance, $G^{\omega, \infty}$ set-wise is the collection of all sequences of proper fractions whose denominators are powers of $p$ (all but finitely many of them zero), and the element $(\frac{1}{p})_2$ denotes the element $(0, \frac{1}{p}, 0, 0, \dots)$.
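This representation of $G^{\omega,\infty}$ as finitely supported sequences of fractions can be mimicked directly. The following Python sketch is ours (the helper names `embed` and `add` are not from the construction, and we fix a finite support for simplicity); it implements the notation $(x)_i$ and the group operation, namely addition mod 1 in each coordinate of $\mathbb{Q}/\mathbb{Z}$.

```python
from fractions import Fraction

def embed(x, i, length):
    """(x)_i : the sequence with x in the i-th place (1-based), zeros elsewhere."""
    return tuple(x if j == i - 1 else Fraction(0) for j in range(length))

def add(a, b):
    """Componentwise addition mod 1: the group operation of a direct sum of
    copies of Z(p^infinity), realized inside Q/Z as fractions in [0, 1)."""
    return tuple((s + t) % 1 for s, t in zip(a, b))

p = 3
a = embed(Fraction(1, p), 2, 4)   # (0, 1/3, 0, 0)
b = embed(Fraction(2, p), 2, 4)   # (0, 2/3, 0, 0)
print(add(a, b) == embed(Fraction(0), 1, 4))   # True: a + b = 0, so a has order p
```

In particular, $(\frac{1}{p})_k$ has order $p$, which is why elements of this form serve as the order-$p$ witnesses in the construction below.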
List the atomic sentences by $\phi_e$, the pairs of elements in $G^{\omega, \infty}$ by $\xi_e$, and set $D_{-1} = C_{-1} = Y_{e, -1} = X_{e, -1} = \tilde{X}_{e,-1} = T_{e, -1} = \emptyset$. We will build groups to meet the following requirements:
\begin{tabular}{rl} $P_e$ : & There are infinitely many independent elements $x \in H^n$ of order $p$ and\\
& height exactly $e$ if and only if there are at most finitely many $y$\\
& such that $R(n,e,y)$.\\ $Q_e$ : & If $\xi_e = (a,b)$ and $a, b \in H^n$, then $a+b \in H^n$.\\ $Z_e$ : & If all parameters occurring in $\phi_e$ are in $H^n$, then exactly one of\\
& $\phi_e \in D$ or $\lnot \phi_e \in D$.\\ \end{tabular}
Roughly speaking, $D_s$ will be the diagram of $H^n$, and $C_s$ will be its domain. For each $e$, the set $Y_{e,s}$ will keep track of the $y$ already seen, $X_{e,s}$ the $x$ created of height at least $e$, and $\tilde{X}_{e,s}$ the $x$ which are given greater height, as in $P_e$. The set $T_{e,s}$ will keep track of the heights greater than $e$ already used to put elements from $X_e$ in $\tilde{X}_e$, so that we do not accidentally make infinitely many elements of height $e+1$.
We say that $P_e$ requires attention at stage $s$ if there is some $y<s$ such that $y \notin Y_{e, s-1}$ and $R(n,e,y)$ and there is also some $x \in X_{e,s-1} \setminus \tilde{X}_{e,s-1}$, or if for all $y <s$ we have either $y \in Y_{e,s-1}$ or $\lnot R(n,e,y)$. We say that $Q_e$ requires attention at stage $s$ if $\xi_e = (a,b)$ and $a, b \in C_{s-1}$ but $a+b \notin C_{s-1}$. We say that $Z_e$ requires attention at stage $s$ if all parameters that occur in $\phi_e$ are in $C_{s-1}$ and $D_{s-1}$ does not include either $\phi_e$ or $\lnot \phi_e$.
At stage $s$, to satisfy $P_e$, we will act by first looking for some $y<s$ such that $y \notin Y_{e, s-1}$ and $R(n,e,y)$. If none is found, the action will be to enumerate a new independent $x$ of height at least $e$. To do this, find the first $k$ such that $(\frac{1}{p})_{k}$ does not occur in $C_{s-1}$ or in any element of $D_{s-1}$. Let
\[C_{s} = C_{s-1} \cup \{(\frac{1}{p^j})_{k} | j = 1, \dots, (e-1) \}\] and set $X_{e,s} = X_{e, s-1} \cup \{(\frac{1}{p})_k\}$, $\tilde{X}_{e, s} = \tilde{X}_{e,s-1}$, $T_{e,s} = T_{e,s-1}$, and $Y_{e,s} = Y_{e,s-1}$. If such a $y$ is found, on the other hand, the action will be to give all existing elements of $X_{e,s-1}$ height greater than
$e$. To do this, collect \[K = \{k | (\frac{1}{p})_k \in X_{e, s-1} \setminus \tilde{X}_{e,s-1}\}\] and the least positive $r \notin T_{e,s-1}$. Note that $K$ is finite. Set \[C_s =
C_{s-1} \cup \bigcup\limits_{k \in K} \{(\frac{1}{p^j})_{k} | j = (e, \dots, e+r+1)\}\] and set $T_{e, s} = T_{e, s-1}
\cup \{r\}$, $\tilde{X}_{e, s} = \tilde{X}_{e, s-1} \cup \{(\frac{1}{p})_k | k \in K \}$, $X_{e, s} = X_{e, s-1}$, and $Y_{e,s} = Y_{e,s-1} \cup \{y\}$.
To satisfy $Q_e$ at stage $s$ we will look to see whether the elements of $\xi_e = (a, b)$ are in $C_{s-1}$. If they are both there, set $C_{s} = C_{s-1} \cup \{a+b\}$. Otherwise, set $C_s = C_{s-1}$.
To satisfy $Z_e$, we will act at stage $s$ by first looking for the parameters in $\phi_e$ in $C_{s-1}$. If all of them are there and $G^{\omega, \infty} \models \phi_e$, then set $D_s = D_{s-1} \cup \{\phi_e\}$. If all of them are there and $G^{\omega, \infty} \models \lnot \phi_e$, then set $D_s = D_{s-1} \cup \{\lnot \phi_e\}$. If some of the parameters are not in $C_{s-1}$, we set $D_s = D_{s-1}$.
Now if $n \in S$, for each $e$ we have $P_e$ to guarantee that $u_e(H^n)$ will be infinite, so $H^n \simeq G^\omega$. If $n \notin S$, there is some $e$ such that $P_e$ guarantees that $u_e(H^n)$ is finite, so $H^n \not\simeq G^\omega$. \end{proof}
Since this result is perfectly uniform, we can use it for induction. What we actually have established is the following: \begin{prop} If $S$ is a set which is $\Pi^0_3$ relative to $X$, then there is a uniformly $X$-computable sequence of reduced Abelian $p$-groups $(H^n)_{n \in \omega}$, each of length at most $\omega$, such that $H^n \simeq G^{\omega}$ if and only if $n \in S$.\end{prop} There is a result of Khisamiev \cite{khis} which allows us to transfer these $X$-computable groups down to the computable level. \begin{prop}[Khisamiev] If $G$ is an $X^{\prime \prime}$-computable reduced Abelian $p$-group, then there is an $X$-computable reduced Abelian $p$-group $H$ such that $H_{\omega} \simeq G$ and $u_n(H) = \omega$ for all $n \in \omega$. Moreover, from an index for $G$, we can effectively compute an index for $H$.\end{prop} These two results together can be used to establish
\begin{prop} \label{finite} If $K_{\omega \cdot m}$ is the class of computable reduced Abelian $p$-groups of length at most $\omega \cdot m$, then $E(K_{\omega \cdot m})$ is $\Pi^0_{2m+1}$ complete within $K$.\end{prop}
\begin{proof} Let $S$ be an arbitrary $\Pi^0_{2m+1}$ set. Since $S$ is $\Pi^0_3$ relative to $\emptyset^{(2m-2)}$, we have a uniformly $\emptyset^{(2m-2)}$-computable sequence of reduced Abelian $p$-groups $(H^n)_{n \in \omega}$, each of length at most $\omega$, such that $H^n \simeq G^{\omega}$ if and only if $n \in S$. Now we can step each $H^n$ down two jumps using Khisamiev's result, so that we have a uniformly $\emptyset^{(2m-4)} = \emptyset^{(2(m-2))}$-computable sequence $(H^{2,n})_{n \in \omega}$ of reduced Abelian $p$-groups, each of length at most $\omega \cdot 2$, which again have the property that $H^{2,n}$ has a constantly infinite Ulm sequence if and only if $n \in S$. By induction, we define $(H^{i,n})_{n \in \omega}$, uniformly $\emptyset^{(2(m-i))}$-computable and of length at most $\omega \cdot i$, and when we get to $(H^{m,n})_{n \in \omega}$, it will be a uniformly computable sequence of groups of length at most $\omega \cdot m$ such that $H^{m,n}$ has a constantly infinite Ulm sequence if and only if $n \in S$. \end{proof}
\section{Completeness for Higher Bounds on Length} Giving completeness results for higher levels requires more elaborate machinery. We will prove a more general result using an $\alpha$-system, in the sense of Ash. These systems are explained in detail, along with several other variants, in the book of Ash and Knight \cite{akbook}. The ``metatheorem" for $\alpha$-systems was proved in a paper by Ash \cite{ashlab}.
Roughly speaking, an $\alpha$-system describes all possible priority constructions of a given kind, and the metatheorem states that given an ``instruction function" which is $\Delta^0_\alpha$, the system will produce a c.e.\ set (in our case, the diagram of a group) which incorporates the information given in the instruction function. More formally, we make the following definition:
\begin{dfn} [Ash] Let $\alpha$ be a computable ordinal. An $\alpha$-system is a structure $$(L, U, P, \hat{\ell}, E, (\leq_\beta)_{\beta < \alpha})$$ where $L$ and $U$ are c.e.\ sets, $E$ is a partial computable function on $L$ (it will eventually enumerate the diagram of the structure we are building), $P$ is a c.e.\ alternating tree on $L$ and $U$ (that is, a set of strings with letters alternating between $L$ and $U$) in which all members start with $\hat{\ell} \in L$, and $\leq_\beta$ are uniformly c.e.\ binary relations on $L$, where the following properties are satisfied: \begin{enumerate} \item $\leq_\beta$ is reflexive and transitive for all $\beta < \alpha$ \item $a \leq_{\gamma} b \Rightarrow a \leq_{\beta} b$ for all $\beta < \gamma < \alpha$ \item If $a \leq_0 b$, then $E(a) \subseteq E(b)$ \item If $\sigma u \in P$, where $\sigma$ ends in $\ell^0$, and $$\ell^0 \leq_{\beta_0} \ell^1 \leq_{\beta_1} \dots \leq_{\beta_{k-1}} \ell^k$$ where $\beta_0 > \beta_1 > \dots > \beta_k$, then there exists some $\ell^*$ such that $\sigma u \ell^* \in P$ and for all $i \leq k$, we have $\ell^i \leq_{\beta_i} \ell^*$. \end{enumerate}\end{dfn}
If we have such a system, we say that an \textit{instruction function} for $P$ is a function $q$ from the set of sequences in $P$ of odd length (i.e.\ those with a last term in $L$) to $U$, so that for any $\sigma$ in the domain of $q$, $\sigma q(\sigma) \in P$. The following theorem, due to Ash \cite{ashlab}, guarantees that if we have such a function, there is a string which represents ``carrying out" the instructions while enumerating a c.e.\ set. We call an infinite string $\pi = \hat{\ell}u_1 \ell_1 u_2 \ell_2 \dots$ a ``run" of $(P, q)$ if it is a path through $P$ with the property that for any initial segment $\sigma u$ we have $u = q(\sigma)$. The metatheorem also guarantees that there is a run with the property that $\bigcup\limits_{i \in \omega} E(\ell_i)$ is computably enumerable.
\begin{prop} [Ash Metatheorem] If we have an $\alpha$-system $$(L, U, P, \hat{\ell}, E, (\leq_{\beta})_{\beta < \alpha})$$ and if $q$ is a $\Delta^0_\alpha$ instruction function for $P$, then there is a run $\pi: \omega \to (L \cup U)$ of $(P,q)$ such that $\bigcup\limits_{i \in \omega} E(\pi(2i))$ is c.e. Further, from computable indices for the components of the system and a $\Delta^0_\alpha$ index for $q$, we can effectively determine a c.e.\ index for $\bigcup\limits_{i \in \omega} E(\pi(2i))$.\end{prop}
What this means is that if we can set up an appropriate system, then given some highly undecidable requirements, we can build a computable group to satisfy them. The difficulty (aside from digesting the metatheorem itself) mainly consists of defining the right system. Afterwards, it is no trouble to write out the high-level requirements we want to meet. Using such a system, we will prove the following generalization of Proposition \ref{finite}.
\begin{thm} Let $\alpha$ be a computable limit ordinal, and let $\hat{\alpha} = \sup\limits_{\omega \cdot \gamma < \alpha} (2 \gamma + 3)$, as in Proposition \ref{ebnd}. If $K_{\alpha}$ is the class of reduced Abelian $p$-groups of length at most $\alpha$ then $E(K_{\alpha})$ is $\Pi^0_{\hat{\alpha}}$ complete within $K_\alpha$.\end{thm}
\begin{proof} Let $(\alpha_i)_{i \in \omega \setminus \{0\}}$ be a sequence cofinal in $\alpha$ (for instance, if $\alpha = \omega \cdot \omega$, then $\alpha_i = \omega \cdot i$ would do, or if $\alpha = \omega \cdot (\beta +1)$, we could use $\alpha_i = \omega \cdot \beta + i$). Consider the family of groups $(\hat{G}^i)_{i \in \omega}$, each of length $\alpha$, where $\hat{G}^0$ has uniformly infinite Ulm sequence and $$u_\beta (\hat{G}^i) = \left\{\begin{array}{ll} \omega &
\mbox{if $\beta < \alpha_i$ or if $\beta$ is even}\\ 0 & \mbox{otherwise}\\
\end{array} \right.$$ Since the Ulm sequences of these groups are uniformly computable, there is a uniformly computable sequence $(G^i)_{i \in \omega}$ such that $G^i \simeq \hat{G}^i$ for all $i$, and such that in each of these groups, for any $\beta$, the predicate ``$x$ has height $\beta$" is computable. The proof of this, which is due to Oates, is a modification of an argument of L.\ Rogers \cite{lrogers}, and may be found in Barker's paper \cite{barker}.
For any set $S \in \Pi^0_{\hat{\alpha}}$, we will construct a sequence of groups $(H^n)_{n \in \omega}$ such that if $n \in S$ then $H^n \simeq G^0$, and otherwise, $H^n
\simeq G^i$ for some $i \neq 0$. To do this, we will define an $\hat{\alpha}$-system. Let $L$ be the set of pairs $(j, p)$, where $j \in \omega$ and $p$ is a finite injective partial function from $\omega$ to $G^j$. Let $U$ be the set $\{0,1\}$. By $E(j, p)$, we will mean the first $|\mathrm{dom}(p)|$ atomic or negated atomic sentences with parameters from the image of $p$ which are true in $G^j$. Let $\hat{\ell} = (0, \emptyset)$, and $P$ be the set of strings of the form $\hat{\ell} u_1 \ell_1 u_2 \ell_2 \dots$ which satisfy the following properties: \begin{enumerate} \item $u_i \in U$ and $\ell_i \in L$ \item If $u_i = 1$ then $u_{i+1} = 1$ \item If $\ell_i = (j_i , p_i)$, then both the domain and range of $p_i$ contain at least the first $i$ members of $\omega$ \item If $\ell_i = (j, p)$ and $u_i = 1$, then $j \neq 0$. Otherwise, $j=0$. Further, if $u_{i-1} = 1$ and $\ell_{i-1} = (j_{i-1}, q)$, then $j = j_{i-1}$. \end{enumerate}
For the $\leq_{\beta}$ we will modify the standard back-and-forth relations on Abelian $p$-groups. In general, the standard back-and-forth relations on a class $K$ are characterized as relations on pairs $(\mathcal{A}, \overline{a})$ where $\mathcal{A} \in K$ and $\overline{a}$ is a finite tuple of $\mathcal{A}$.
\begin{dfn} If $\overline{a} \subseteq \mathcal{A}$ and $\overline{b} \subseteq \mathcal{B}$ are finite tuples of equal length, then we define the \emph{standard back-and-forth relations} $\leq_\beta$ as follows: \begin{enumerate} \item $(\mathcal{A}, \overline{a}) \leq_1 (\mathcal{B}, \overline{b})$ if and only if all finitary $\Sigma^0_1$ formulas true of $\overline{b}$ in $\mathcal{B}$ are true of $\overline{a}$ in $\mathcal{A}$. \item $(\mathcal{A}, \overline{a}) \leq_\beta (\mathcal{B}, \overline{b})$ if and only if for any finite $\overline{d} \subset \mathcal{B}$ and any $\gamma$ with $1 \leq \gamma < \beta$ there is some $\overline{c} \subset \mathcal{A}$ of equal length such that $(\mathcal{B}, \overline{b}, \overline{d}) \leq_\gamma (\mathcal{A}, \overline{a}, \overline{c})$. \end{enumerate} \end{dfn}
This definition extends naturally to tuples of different length as follows: we say that $(\mathcal{A},\overline{a}) \leq_\beta (\mathcal{B}, \overline{b})$ if and only if $\overline{a}$ is no longer than $\overline{b}$ and that for the initial segment $\overline{b}^\prime \subset \overline{b}$ of length equal to that of $\overline{a}$, we have $(\mathcal{A}, \overline{a}) \leq_\beta (\mathcal{B}, \overline{b}^\prime)$. Barker \cite{barker} gave a useful characterization of these relations in the case of Abelian $p$-groups $\mathcal{A}$ and $\mathcal{B}$, where $\mathcal{A} = \mathcal{B}$.
\begin{prop}[Barker] Let $\leq_{\beta}$ be the standard back-and-forth relations on reduced Abelian $p$-groups, let $\overline{a}$ and $\overline{b}$ be finite tuples of equal length in an Abelian $p$-group with the height of elements given by $h$, and let $f$ be the function mapping elements of $\bar{b}$ to the corresponding elements of $\bar{a}$. Then the following hold: \begin{enumerate} \item $\overline{a} \leq_{2 \cdot \delta} \overline{b}$ if and only if the two generate isomorphic subgroups and for every $b \in \overline{b}$ and $a = f(b)$ we have $$h(a) = h(b) < \omega \cdot \delta \mbox{ or } h(b), h(a) \geq \omega \cdot \delta$$ \item $\overline{a} \leq_{2 \cdot \delta + 1} \overline{b}$ if and only if the two generate isomorphic subgroups and for every $b \in \overline{b}$ and $a = f(b)$ we have \begin{enumerate} \item In the case that $P_{\omega \cdot \delta + k}$ is infinite for every $k \in \omega$, $$h(a) = h(b) < \omega \cdot \delta$$ or $$h (b) \geq \omega \cdot \delta \mbox{ and } h (a) \geq \min \{h (b), \omega \cdot \delta + \omega\}$$ \item In the case that $P_{\omega \cdot \delta + k}$ is infinite and $P_{\omega \cdot \delta + k+1}$ is finite, $$h(a) = h(b) < \omega \cdot \delta$$ or $$\omega \cdot \delta \leq h(b) \leq h (a) \leq \omega \cdot \delta + k$$ or $$h(a) = h(b) > \omega \cdot \delta + k$$ \item In the case that $P_{\omega \cdot \delta}$ is finite, $$h(a) = h(b)$$ \end{enumerate} \end{enumerate} \end{prop}
Since in all groups with which we are concerned, $P_{\omega \cdot \delta + k}$ will be infinite for all $\delta < \alpha$, we will have no need for the more complicated cases. Also, it is helpful to deal with groups which satisfy the stronger condition that they have infinite Ulm invariants at each limit level.
\begin{dfn} Let $\mathcal{A}, \mathcal{B}$ be countable reduced Abelian $p$-groups of length at most
$\alpha$ such that for any limit ordinal $\nu < \alpha$ we have $u_{\nu}(\mathcal{A}) = u_{\nu}(\mathcal{B}) = \omega$. Let the height of an element in its respective group be given by $h$. Let $\overline{a}, \overline{b}$ be finite sequences of equal length from $\mathcal{A}$ and $\mathcal{B}$, respectively. Then define $(\leq_{\delta})_{\delta < \omega_1}$ by the following: \begin{enumerate} \item $(\mathcal{A}, \overline{a}) \leq_{2 \cdot \delta} (\mathcal{B}, \overline{b})$ if and only if \begin{enumerate} \item The function matching elements of $\overline{b}$ to the corresponding elements of $\overline{a}$ extends to an isomorphism $f: <\overline{b}> \to <\overline{a}>$, \item for every $b \in \overline{b}$ and $a = f(b)$ we have $$h(a) = h(b) < \omega \cdot \delta \mbox{ or } h(b), h(a) \geq \omega \cdot \delta$$ and \item for all $\beta < \omega \cdot \delta$ we have $u_\beta (\mathcal{A}) = u_{\beta} (\mathcal{B})$. \end{enumerate} \item $(\mathcal{A}, \overline{a}) \leq_{2 \cdot \delta + 1} (\mathcal{B}, \overline{b})$ if and only if \begin{enumerate} \item The function matching respective elements in $\overline{a}$ and $\overline{b}$ extends to an isomorphism $f: <\overline{b}> \to <\overline{a}>$, \item for every $b \in \overline{b}$ and $a = f(b)$ we have $$h(a) = h(b) < \omega \cdot \delta$$ or $$h (b) \geq \omega \cdot \delta \mbox{ and } h (a) \geq \min \{h (b), \omega \cdot \delta + \omega\}$$ \item for all $\beta < \omega \cdot \delta$ we have $u_\beta (\mathcal{A}) = u_{\beta} (\mathcal{B})$. \item for all $\beta \in [\omega \cdot \delta, \omega \cdot \delta + \omega)$ we have $u_\beta (\mathcal{A}) \geq u_{\beta} (\mathcal{B})$. \end{enumerate} \end{enumerate} \end{dfn}
In order to verify that we have an $\hat{\alpha}$-system, the following lemma will be important.
\begin{lem} \label{baf} Suppose $(\mathcal{A}, \overline{a}) \leq_{\beta} (\mathcal{B}, \overline{b})$. Then for any $\eta < \beta$ and for any finite sequence $\overline{d} \subseteq \mathcal{B}$ there exists a sequence $\overline{c} \subseteq \mathcal{A}$ of equal length such that $(\mathcal{B}, \overline{b}, \overline{d}) \leq_{\eta} (\mathcal{A}, \overline{a}, \overline{c})$.\end{lem}
\begin{proof} Suppose that the conditions stated for $\leq_{2 \cdot \delta}$ hold. Now suppose $\delta = \gamma + 1$. It suffices to show that for all finite sequences $\overline{d} \subseteq \mathcal{B}$ there exists a sequence $\overline{c} \subseteq \mathcal{A}$ of equal length such that $(\mathcal{B}, \overline{b}, \overline{d}) \leq_{2 \cdot \gamma + 1} (\mathcal{A}, \overline{a}, \overline{c})$. We will extend $f$ to $\overline{d}$ one element at a time. Let $d \in \overline{d}$, and suppose that $d \notin <\overline{b}>$ (since if it were in that subgroup, we could simply map it to the corresponding element of $<\overline{c}>$). Further suppose, without loss of generality, that $pd \in <\overline{b}>$ and that $h(d) \geq h(d + s)$ for any $s \in <\overline{b}>$. This last condition is often stated ``$d$ is proper with respect to $<\overline{b}>$.'' We may also assume that, among the proper representatives of the coset $d + <\overline{b}>$, $d$ is chosen so that $h(pd)$ is maximal. These assumptions are reasonable, since if we need to extend $f$ to an element farther afield, we can go one element at a time and work down to it. From this point, we essentially follow Kaplansky's proof of Ulm's theorem \cite{kaplansky} to find the appropriate match for $d$. Use $z$ to denote $f(pd)$. It now suffices to find some $c$ of height $h(d)$ which is proper with respect to $<\overline{a}>$ and such that $pc = z$.
First suppose that $h(z) = h(d) + 1$. Now both $z$ and $pd$ must be nonzero. For $c$ we may choose any element of $(\mathcal{A})_{h(d)}$ with $pc = z$. The height of $z$ tells us that there must exist such an element. We first check that $h(c) \leq h(d)$, which is easy, since if $h(c) > h(d)$, we would have \[h(z) = h(pc) \geq h(c)+1 \gneq h(d) + 1\] Finally, it is necessary to show that $c$ is proper with respect to $<\overline{a}>$. Suppose that $c \in <\overline{a}>$. Then $c = f(y)$ for some $y \in <\overline{b}>$. Then $pd = py$ and $d-y \notin <\overline{b}>$ to avoid $d \in <\overline{b}>$. Further, $h(d-y) = h(d)$, since $h(y) = h(d)$ and $d$ is proper with respect to $<\overline{b}>$. However, \[h(p(d-y)) = h(0) = \infty \gneq h(d) + 1\] contradicting the maximality of $h(pd)$. Thus $c \notin <\overline{a}>$. Now suppose we have $h(c+r) \geq h(d) + 1$ for some $r \in <\overline{a}>$ with $r = f(s)$. Since $c+r \neq 0$ (to avoid the case that $c = -r \in <\overline{a}>$), we know that $h(p(c+r)) \geq h(d) + 2$, so that $h(p(d+s)) \geq h(d) + 2$. Since $h(r) \geq h(d)$, we also have $h(s) \geq h(d)$, so $h(d+s) = h(d)$, contradicting the maximality of $h(pd)$.
Suppose that $h(z) > h(d) + 1$. Now there is some $v \in (\mathcal{B})_{h(d) + 1}$ such that $pd = pv$. Then the element $d-v$ is in $P_{h(d)}(\mathcal{B})$, has height $h(d)$, and is thus proper with respect to $<\overline{b}>$. I make the following claim.
\begin{claim} [Lemma 13 of \cite{kaplansky}] Let the function $$Y: (<\overline{b}>_{h(d)} \cap p^{-1} (\mathcal{B})_{h(d)+2}) \to P_{h(d)}(\mathcal{B})$$
be defined as follows: For any $x \in (<\overline{b}>_{h(d)} \cap p^{-1} (\mathcal{B})_{h(d)+2})$ there exists some $y \in (\mathcal{B})_{h(d)+1}$ such that $py=px$. Define $Y$ by $Y: x \mapsto x-y$ and let $\hat{Y}$ be the composition of this map with the projection onto $P_{h(d)}(\mathcal{B}) / P_{h(d)+1}(\mathcal{B})$. If $$F : (<\overline{b}>_{h(d)} \cap p^{-1} (\mathcal{B})_{h(d) + 2})/ <\overline{b}>_{h(d) + 1} \longrightarrow P_{h(d)}(\mathcal{B}) / P_{h(d)+1}(\mathcal{B})$$ is the map induced by $\hat{Y}$ on the quotient, then the following are equivalent: \begin{enumerate} \item The range of $F$ is not all of $P_{h(d)}(\mathcal{B}) / P_{h(d)+1}(\mathcal{B})$. \item There exists in $P_{h(d)}(\mathcal{B})$ an element of height $h(d)$ which is proper with respect to $<\overline{b}>$. \end{enumerate} \end{claim}
\begin{proof} To show 2 $\rightarrow$ 1, suppose $w \in P_{h(d)}$ has height $h(d)$ and is proper with respect to $<\overline{b}>$. Then the coset of $w$ is not in the range of $F$. Otherwise, $w = x-y+q$ for some $x \in <\overline{b}>$, some $y \in (\mathcal{B})_{h(d)+1}$, and some $q \in P_{h(d)+1}(\mathcal{B})$. But then $h(w-x)>h(d)$, so $w$ was not proper.
To show the other implication, suppose that $w$ is an element of $P_{h(d)}(\mathcal{B})$ representing a coset not in the range of $F$. Then $h(w) = h(d)$. Further, $w$ is proper, since if it were not, and if $h(s-w)> h(d)$ witnessed this for some $s \in <\overline{b}>$, we could write $s-w = p \zeta$ with $\zeta \in (\mathcal{B})_{h(d)}$. But then $ps = p(p\zeta)$ since $pw = 0$, so $F$ will map the coset of $s$ to the coset of $w$, giving a contradiction. \end{proof}
Now since $d-v$ is such an element as is described in the second condition of the claim, we know that the range of $F$ is not all of $P_{h(d)}(\mathcal{B}) / P_{h(d)+1}(\mathcal{B})$. Since the vector spaces are finite (and thus finite dimensional), we know that the dimension of $(<\overline{b}>_{h(d)} \cap p^{-1} (\mathcal{B})_{h(d) + 2})/ <\overline{b}>_{h(d) + 1}$ is less than $u_{h(d)}(\mathcal{B})$. However, since $f$ was height preserving, it maps \[\begin{array}{c} (<\overline{b}>_{h(d)} \cap p^{-1} (\mathcal{B})_{h(d) + 2})/ <\overline{b}>_{h(d) + 1} \\ \downarrow \mbox{onto} \\ (<\overline{a}>_{h(d)} \cap p^{-1} (\mathcal{A})_{h(d) + 2})/ <\overline{a}>_{h(d) + 1}\\ \end{array}\] Thus the dimension of $(<\overline{a}>_{h(d)} \cap p^{-1} (\mathcal{A})_{h(d) + 2})/ <\overline{a}>_{h(d) + 1}$ is less than $u_{h(d)}(\mathcal{B})$.
In the case that $h(d)<\omega \cdot \delta + \omega$, we now know that the dimension of \[(<\overline{a}>_{h(d)} \cap p^{-1} (\mathcal{A})_{h(d) + 2})/ <\overline{a}>_{h(d) + 1}\] is less than $u_{h(d)}(\mathcal{A})$, so there is an element $c_1$ in $\mathcal{A}$ such that $pc_1 = 0$, $h(c_1) = h(d)$, and which is proper with respect to $<\overline{a}>$. Since $h(z) > h(d) + 1$, we may write $z = pc_2$ where $c_2 \in (\mathcal{A})_{h(d)+1}$. Now we write $c = c_1 + c_2$ and note that $pc = z$, that $h(c) = h(d)$, and finally that $c$ is proper with respect to $<\overline{a}>$.
If $h(d) \geq \omega \cdot \delta + \omega$, we need considerably less. In particular, it suffices to find some $c$ such that $pc=z$, such that $c$ is proper with respect to $<\overline{a}>$, and such that $h(c) = \omega \cdot \delta + \omega$. This can be achieved by replacing $h(d)$ with $\omega \cdot \delta + \omega$ in the preceding argument, and noting that since $\omega \cdot \delta + \omega$ is a limit, $u_{\omega \cdot \delta + \omega} = \omega$. This completes the proof for the case $(\mathcal{A}, \overline{a}) \leq_{2 \cdot \delta} (\mathcal{B}, \overline{b})$ with $\delta$ a successor.
If $\delta$ is a limit ordinal, it suffices to consider some odd successor ordinal $2 \cdot \eta + 1 < 2 \cdot \delta$ and to show that for any $\overline{d} \in \mathcal{B}$ there is some $\overline{c} \in \mathcal{A}$ such that $(\mathcal{B}, \overline{b}, \overline{d}) \leq_{2 \cdot \eta + 1} (\mathcal{A}, \overline{a}, \overline{c})$. Then the proof is exactly as in the successor case.
In the case that we start with $(\mathcal{A}, \overline{a}) \leq_{2 \cdot \delta + 1} (\mathcal{B}, \overline{b})$, we need to show that for any $\overline{d} \in \mathcal{B}$ there is some $\overline{c} \in \mathcal{A}$ such that $(\mathcal{B}, \overline{b}, \overline{d}) \leq_{2 \cdot \delta} (\mathcal{A}, \overline{a}, \overline{c})$. Now we can follow the proof exactly as in the even successor case, except that we replace $\omega \cdot \delta + \omega$ with $\omega \cdot \delta$. \end{proof}
We now adapt the relations $\leq_\beta$ on pairs $(\mathcal{A}, \bar{a}), (\mathcal{B}, \bar{b})$ to relations on $L$.
\begin{dfn} We say that $(j_1, p_1) \leq_\beta (j_2, p_2)$ if and only if \[(G^{j_1}, ran(p_1)) \leq_\beta (G^{j_2}, ran(p_2))\]\end{dfn}
We need to verify that $(L, U, P, \hat{\ell}, E, (\leq_{\beta})_{\beta<\hat{\alpha}})$ is an $\hat{\alpha}$-system. For the necessary effectiveness, notice that we need only consider $\leq_\beta$ on members of $L$, so only the groups $G^i$ are considered. Conditions 1 -- 3 are clear, as is the fact that $(\leq_{\beta})_{\beta < \hat{\alpha}}$ is uniformly c.e. It remains to verify the following:
\begin{lem} If $\sigma u \in P$ where $\sigma$ ends in $\ell^0$ and $$\ell^0 \leq_{\beta_0} \ell^1 \leq_{\beta_1} \dots \leq_{\beta_{k-1}} \ell^k$$ where $\beta_0 > \beta_1 > \dots > \beta_k$, then there exists some $\ell^*$ such that $\sigma u \ell^* \in P$ and for all $i \leq k$, we have $\ell^i \leq_{\beta_0} \ell^*$. \end{lem}
\begin{proof} We write $\ell^i = (j_i, p_i)$. By Lemma \ref{baf}, given $\ell^{k-1} \leq_{\beta_{k-1}} \ell^k$ we can produce an $\tilde{\ell}^{k-1} = (\tilde{j}_{k-1}, \tilde{p}_{k-1})$ such that $\tilde{p}_{k-1}$ extends $p_{k-1}$ (mapping into the same structure) and $\ell^k \leq_{\beta_k} \tilde{\ell}^{k-1}$. Similarly, for each $i$, produce $\tilde{\ell}^i$ such that $\ell^{i+1} \leq_{\beta_{i+1}} \tilde{\ell}^i$. It will then be the case that for all $i$, $\ell^i \leq_{\beta_i} \tilde{\ell}^0$. If $u = 0$ or if $1$ occurs somewhere in $\sigma$, let $\ell^* = (\tilde{j}_0, p^*)$, where $p^*$ extends $\tilde{p}_0$ and its domain and range each contain the first $n$ constants, where $2n+1$ is the length of $\sigma$. Now $\sigma u \ell^* \in P$ and for all $i$, $\ell^i \leq_{\beta_0} \ell^*$.
If, on the other hand, $u = 1$ and $1$ does not occur in $\sigma$, then we may be sure that $\tilde{j}_0 = 0$. In this case, find some $j^*>0$ such that $\alpha_{j^*} > \beta_0$. Note that since for each $\beta < \alpha_{j^*}$ we have $u_\beta (G^{j^*}) = u_\beta (G^0)$, it follows that $(G^{j^*}, \emptyset) \leq_{\beta_0 + 1} (G^0, \emptyset)$. Thus, by Lemma \ref{baf}, we have some sequence $ran(p^*) \subseteq G^{j^*}$ such that $(G^0, ran(\tilde{p}_0)) \leq_{\beta_0} (G^{j^*}, ran(p^*))$ and having length $n$ where $2n+1$ is the length of $\sigma$. We define $p^*$ to be the function taking each of an initial sequence of the natural numbers to the corresponding element of that sequence. Then clearly $\sigma u \ell^* \in P$, and for any $i$, we have $\ell^i \leq_{\beta_0} (G^0, ran(\tilde{p}_0)) \leq_{\beta_0} (G^{j^*}, ran(p^*))$. \end{proof}
Now let $S$ be an arbitrary $\Pi^0_{\hat{\alpha}}$ set. There is a $\Delta^0_{\hat{\alpha}}$ function $g(n, s): \omega^2 \to 2$ such that for all $n$, we have $n \in S$ if and only if $\forall s [g(n, s) = 0]$, and such that for all $n, s \in \omega$, if $g(n, s) = 1$ then $g(n, s+1) = 1$. We define a $\Delta^0_{\hat{\alpha}}$ instruction function $q_n$ as follows. If $\sigma \in P$ and $\sigma$ is of length $m$, then we define $q_n(\sigma) = g(n, m)$.
Now we certainly can find computable indices for all the components of the $\hat{\alpha}$-system, and we can uniformly find a $\Delta^0_{\hat{\alpha}}$ index for each $q_n$, so the Ash metatheorem gives us, uniformly in $n$, a run $\pi_n$ of $(P, q_n)$ and the index for the c.e.\ set $\bigcup\limits_{i \in \omega} E(\pi_n(2i))$. Let $H^n$ denote the group whose diagram this is. Note that if $n \in S$, then $q_n (m) = 0$ for all $m$, and so $H^n \simeq G^0$. Otherwise there is some $\hat{m}$ such that for all $m > \hat{m}$, we have $q_n(m) = 1$, and so $H^n \simeq G^i$ for some $i \neq 0$.\end{proof}
\end{document}
\begin{document}
\title{Bohr type inequality for {C}es\'{a}ro and {B}ernardi integral operator on simply connected domain} \author{Vasudevarao Allu} \address{Vasudevarao Allu, School of Basic Sciences , Indian Institute of Technology Bhubaneswar, Bhubaneswar-752050, Odisha, India.} \email{[email protected]}
\author{Nirupam Ghosh} \address{Nirupam Ghosh, Stat-Math Unit, Indian Statistical Institute, Bangalore, Bangalore-560059, Karnataka, India.} \email{[email protected]}
\subjclass[2010]{Primary 30C45, 30C50, 40G05} \keywords{Analytic functions, Bohr radius, {C}es\'{a}ro operator, {B}ernardi integral}
\def\@arabic\c@footnote{} \footnotetext{ {\tiny File:~\jobname.tex, printed: \number\year-\number\month-\number\day,
\thehours.\ifnum\theminutes<10{0}\fi\theminutes } } \makeatletter\def\@arabic\c@footnote{\@arabic\c@footnote}\makeatother
\begin{abstract} In this article, we study the Bohr type inequality for {C}es\'{a}ro operator and {B}ernardi integral operator acting on the space of analytic functions defined on a simply connected domain containing the unit disk $\mathbb{D}$. \end{abstract}
\maketitle \pagestyle{myheadings} \markboth{ Vasudevarao Allu and Nirupam Ghosh }{Bohr type inequality of {C}es\'{a}ro and {B}ernardi integral operator on simply connected domain}
\section{Introduction}
Let $D(a; r)= \{z \in \mathbb{C}: |z - a| < r\}$ and $\mathbb{D} = D (0; 1)$ be the unit disk in the complex plane $\mathbb{C}$. For a simply connected domain $\Omega$ containing $\mathbb{D}$, let $\mathcal{H}(\Omega)$ denote the class of analytic functions on $\Omega$, and let $$\mathcal{B}(\Omega)= \{f\in \mathcal{H}(\Omega) : f(\Omega) \subset \overline{\mathbb{D}} \}.$$ The Bohr radius \cite{Fournier-Ruscheweyh-2010} for the family $\mathcal{B}(\Omega)$ is defined to be the positive real number $R_{\Omega} \in (0, 1)$ given by \begin{equation*} R_{\Omega} = \sup \{r \in (0, 1): M_f(r) \leq 1 ~~~~ \mbox{for}~~~ f(z) = \sum_{n = 0}^{\infty} a_n z^n \in \mathcal{B}(\Omega), z\in \mathbb{D}\}, \end{equation*}
where $ M_f(r) = \sum_{n = 0}^{\infty} |a_n |r^n $ with $|z| = r $ is the majorant series associated with $f \in \mathcal{B}(\Omega)$ in $\mathbb{D}$. If $\Omega = \mathbb{D}$, then it is well-known that $R_{\mathbb{D}} = 1/3$, and it is described precisely as follows:
\begin{customthm}{A}\label{niru-vasu-P8-theorem001} If $f \in \mathcal{B}(\mathbb{D})$, then $M_f(r) \leq 1$ for $0\leq r \leq 1/3$. The number $1/3$ is best possible. \end{customthm}
The inequality $M_f(r) \leq 1$ for $f\in \mathcal{B}(\mathbb{D})$ fails to hold for any $r> 1/3$. This can be seen by considering the function $\phi_{a}(z) = (a - z)/ (1 - a z)$ and taking $a\in (0, 1)$ sufficiently close to $1$. Theorem \ref{niru-vasu-P8-theorem001} was originally obtained by H. Bohr \cite{Bohr-1914} in $1914$ for $0\leq r\leq 1/6$. The optimal value $1/3$, which is called the Bohr radius for the unit disk, was established by M. Riesz, I. Schur and F. W. Wiener (see \cite{Sidon-1927}, \cite{Tomic-1962}). Over the past two decades there has been significant interest in Bohr type inequalities; see the articles \cite{Abu-2010, abu-2011, abu-2014, Ali-2017, alkhaleefah-2019, Ismagilov-Ponnusamy-2020, Kayumov-Ponnusamy-2017, Kayumov-Ponnusamy-2019} and the references therein.\\
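To make the sharpness discussion concrete, here is a small numerical illustration (the script and the name `majorant` are ours, not part of the paper): for $\phi_a$ the majorant series sums in closed form, and its critical radius $1/(1+2a)$ tends to $1/3$ as $a \to 1$.

```python
# Illustration (ours, not from the paper): for phi_a(z) = (a - z)/(1 - a z)
# the Taylor coefficients are a_0 = a and |a_n| = (1 - a^2) a^(n-1) for n >= 1,
# so the majorant series sums to M(r) = a + (1 - a^2) r / (1 - a r).
# M(r) > 1 exactly when r > 1/(1 + 2a), and 1/(1 + 2a) -> 1/3 as a -> 1.

def majorant(a, r):
    return a + (1 - a**2) * r / (1 - a * r)

a = 0.999
print(majorant(a, 1/3) <= 1)    # True: Bohr's bound still holds at r = 1/3
print(majorant(a, 0.35) > 1)    # True: the bound fails just beyond 1/3
```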
Besides the Bohr radius, there is a notion of Rogosinski radius \cite{landau-Gaier-1986, Rogosinski-1923} which is described as follows: Let $f(z) = \sum_{n = 0 }^{\infty}a_n z^n \in \mathcal{B}(\mathbb{D})$ and let the corresponding partial sum of $f$ be defined by $S_N(z) = \sum_{n = 0 }^{N -1}a_n z^n$. Then, for every $N \geq 1$, we have
$\left|S_N(z)\right| < 1$ in the disk $|z| < 1/2$ and the radius $1/2$ is sharp. Motivated by Rogosinski radius, Kayumov and Ponnusamy \cite{Kayumov-Ponnusamy-arxiv} have considered the Bohr-Rogosinski sum as $$
R^{f}_{N} := |f(z)| + \sum_{n = N}^{\infty}|a_n||z|^n $$
for $f \in \mathcal{B}(\mathbb{D})$ and defined the Bohr-Rogosinski radius as the largest number $r> 0$ such that $R^{f}_{N} \leq 1$ for $|z|< r$. For significant and extensive research in the direction of the Bohr-Rogosinski radius, we refer to \cite{Ismagilov-Ponnusamy-2020, Kayumov-Ponnusamy-2018, Kayumov-Ponnusamy-2021} and the references therein.
A natural question arises: ``Can we extend the Bohr type inequality to certain complex integral operators defined on various function spaces?" This idea was initiated for the classical {C}es\'{a}ro operator in \cite{Kayumov-Ponnusamy-2020, Kayumov-Ponnusamy-2021} and for the {B}ernardi integral operator in \cite{Kumar-Sahoo-2021}. In \cite{Kayumov-Ponnusamy-2020,Kayumov-Ponnusamy-2021, Kumar-Sahoo-2021} the authors have studied the Bohr type and Bohr-Rogosinski type inequalities for the {C}es\'{a}ro operator and the {B}ernardi integral operator defined on $\mathcal{B}(\mathbb{D})$.
The {C}es\'{a}ro operator and its various generalizations have been extensively studied. For example, the boundedness and compactness of the {C}es\'{a}ro operator on different function spaces have been well studied. In the classical setting, for an analytic function $f(z) = \sum_{n = 0}^{\infty}a_n z^n$ on the unit disk $\mathbb{D}$, the {C}es\'{a}ro operator is defined by \cite{Hardy-Littlewoow-1931} (see also \cite{Halmos-1965, Stempak-1994}) \begin{equation}\label{niru-vasu-P8-eq000a} \mathcal{C}f(z) := \sum_{n = 0}^{\infty}\frac{1}{n + 1}\left( \sum_{k = 0}^{n}a_k\right) z^n = \int_{0}^{1} \frac{f(t z)}{1 - t z}\, dt. \end{equation} It is not difficult to show that for $f \in \mathcal{B}(\mathbb{D})$, $$
\left|\mathcal{C}f(z) \right| = \left|\sum_{n = 0}^{\infty}\frac{1}{n + 1}\left( \sum_{k = 0}^{n}a_k\right) z^n \right| \leq \frac{1}{r}\ln{\frac{1}{1 - r}} \quad \mbox{for}\quad |z| = r. $$ In 2020, Kayumov {\it et al.} \cite{Kayumov-Ponnusamy-2020} established the following Bohr type inequality for the {C}es\'{a}ro operator. \begin{customthm}{B}\label{niru-vasu-P8-theorem002} If $f \in \mathcal{B}(\mathbb{D})$ and $f(z) = \sum_{n = 0}^{\infty} a_n z^n$, then $$
\mathcal{C}_{f}(r) = \sum_{n = 0}^{\infty}\frac{1}{n + 1}\left( \sum_{k = 0}^{n}|a_k|\right) r^n \leq \frac{1}{r}\ln{\frac{1}{1 - r}} $$
for $|z| = r \leq R,$ where $R = 0.5335\ldots$ is the positive root of the equation $$ 2 x = 3(1 -x)\ln{\frac{1}{1-x}}. $$ The number $R$ is the best possible. \end{customthm}
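The constant in Theorem B is easy to confirm numerically; the following bisection is our own sketch (not from \cite{Kayumov-Ponnusamy-2020}) and only uses the fact that the defining function changes sign once in the bracketing interval.

```python
# Sanity check (ours): R = 0.5335... in Theorem B is the root in (0, 1) of
# 2x = 3(1 - x) ln(1/(1 - x)).  g below is positive to the left of the root
# and negative to the right of it, so plain bisection recovers R.
import math

def g(x):
    return 3 * (1 - x) * math.log(1 / (1 - x)) - 2 * x

lo, hi = 0.1, 0.9                      # g(0.1) > 0 > g(0.9)
for _ in range(100):
    mid = (lo + hi) / 2
    lo, hi = (mid, hi) if g(mid) > 0 else (lo, mid)
R = (lo + hi) / 2
print(abs(R - 0.5335) < 1e-3)          # True: matches the stated value
```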
For an analytic function $f(z) = \sum_{n = m}^{\infty}a_n z^n$ on the unit disk $\mathbb{D}$, the {B}ernardi integral operator (see \cite{Miller-Mocanu-2020}) is defined by $$ \mathcal{L}_{\beta}f(z) : = \frac{1 + \beta}{z^\beta}\int_{0}^{z} f(\xi) \xi^{\beta - 1} \, d \xi =(1 + \beta)\sum_{n = m}^{\infty}\frac{a_n}{\beta + n} z^n, $$
where $\beta > -m$ and $m \geq 0$ is an integer. It is worth mentioning that for each $|z| = r \in[0, 1)$, the integral representation for the {B}ernardi integral operator yields the following for $f\in \mathcal{B}(\mathbb{D})$: $$
\left|\mathcal{L}_{\beta}f(z) \right| = \left|(1 + \beta)\sum_{n = m}^{\infty}\frac{a_n}{\beta + n} z^n \right|\leq (1 + \beta)\frac{r^m}{m +\beta}, $$ which is equivalent to the following expression $$
\left|\sum_{n = m}^{\infty}\frac{a_n}{\beta + n} z^n \right|\leq \frac{r^m}{m +\beta}. $$ Recently, Kumar and Sahoo \cite{Kumar-Sahoo-2021} have studied the following Bohr type inequality for {B}ernardi integral operator.
\begin{customthm}{C}\label{niru-vasu-P8-theorem003} Let $\beta > -m$. If $f(z) = \sum_{n = m}^{\infty} a_n z^n \in \mathcal{B}(\mathbb{D})$, then $$
\sum_{n = m}^{\infty}\frac{|a_n|}{\beta + n} |z|^n \leq \frac{r^m}{m +\beta} $$
for $|z| = r \leq R(\beta)$. Here $R(\beta)$ is the positive root of the equation $$ \frac{x^m}{m + \beta} - 2 \sum_{n = m + 1}^{\infty} \frac{x^n}{n + \beta}=0. $$ The number $R(\beta)$ cannot be improved. \end{customthm}
The main aim of this paper is to find the sharp Bohr type inequality for the {C}es\'{a}ro operator and the {B}ernardi integral operator for functions in the class $\mathcal{B}(\Omega_{\gamma})$, where $$
\Omega_{\gamma}:= \bigg \{z \in \mathbb{C} : \left|z + \frac{\gamma}{1 - \gamma} \right| < \frac{1}{1 - \gamma}\bigg\} \quad \mbox{for} \quad 0 \leq \gamma < 1. $$ Clearly the unit disk $\mathbb{D}$ is always a subset of $\Omega_{\gamma}$. In 2010, Fournier and Ruscheweyh \cite{Fournier-Ruscheweyh-2010} extended Bohr's inequality to functions in $\mathcal{B}(\Omega_{\gamma})$.
The following lemma by Evdoridis {\it et al.} \cite{Evdoridis-Ponnusay-Rasila-2021} plays a crucial role in proving our main results.
\begin{lem} \label{niru-vasu-P8-lem001}\cite{Evdoridis-Ponnusay-Rasila-2021} For $\gamma\in [0, 1)$, let $$
\Omega_{\gamma}:= \bigg \{z \in \mathbb{C} : \left|z + \frac{\gamma}{1 - \gamma} \right| < \frac{1}{1 - \gamma}\bigg\}, $$ and let $f$ be an analytic function in $\Omega_{\gamma}$, bounded by $1$, with the series representation $f(z) = \sum_{n = 0}^{\infty} a_n z^n$ in $\mathbb{D}$. Then, $$
|a_n| \leq \frac{1 - |a_0|^2}{ 1 + \gamma} \quad \mbox{for} \quad n\geq 1. $$ \end{lem}
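As a numerical aside (our own script, not part of the paper), the coefficient bound of the lemma can be tested against the M\"obius-type functions $f_\gamma$ used in the sharpness arguments below, whose coefficients are available in closed form; for those functions the bound is attained exactly at $n = 1$.

```python
# Illustration (ours): for the extremal functions f_gamma appearing later, with
# A_0 = (a - g)/(1 - a g) and A_n = (1 - a^2)/(a(1 - a g)) * (a(1-g)/(1-a g))^n,
# the lemma's bound |a_n| <= (1 - |a_0|^2)/(1 + g) holds, with equality at n = 1.
def coeffs(a, g, N):
    A0 = (a - g) / (1 - a * g)
    An = [(1 - a**2) / (a * (1 - a * g)) * (a * (1 - g) / (1 - a * g))**n
          for n in range(1, N + 1)]
    return A0, An

a, g = 0.7, 0.3
A0, An = coeffs(a, g, 10)
bound = (1 - A0**2) / (1 + g)
print(abs(An[0] - bound) < 1e-12)           # True: equality at n = 1
print(all(x <= bound + 1e-12 for x in An))  # True: the bound holds for all n
```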
\section{Main results}
We state and prove our first main result.
\begin{thm}\label{niru-vasu-P8-theorem005} For $0 \leq \gamma < 1$, let $f \in \mathcal{B}(\Omega_{\gamma})$ with $f(z) = \sum_{n = 0}^{\infty} a_n z^n$ in $\mathbb{D}$. Then we have \begin{equation}\label{niru-vasu-P8-eq000}
\mathcal{C}_{f}(r) = \sum_{n = 0}^{\infty}\frac{1}{n + 1}\left( \sum_{k = 0}^{n}|a_k|\right) r^n \leq \frac{1}{r}\ln{\frac{1}{1 - r}}\quad \mbox{for}~~~ |z|= r \leq R_\gamma \end{equation} where $R_\gamma$ is the positive root of $$ (3 + \gamma)(1 - x)\ln{\frac{1}{1 - x}} = 2 x. $$ The number $R_\gamma$ is the best possible. \end{thm}
\begin{proof}
Let $|a_0| = a < 1$. A simple computation of the {C}es\'{a}ro operator in (\ref{niru-vasu-P8-eq000a}) shows that \begin{align}\label{niru-vasu-P8-eq001}
\mathcal{C}_{f}(r) & = \sum_{n = 0}^{\infty}\frac{1}{n + 1}\left( \sum_{k = 0}^{n}|a_k|\right) r^n \\\nonumber
& = a \left(1 + \frac{r}{2} + \frac{r^2}{3}+ \cdots\right) + \sum_{n = 1}^{\infty}\frac{1}{n + 1}\left( \sum_{k = 1}^{n}|a_k|\right) r^n\\ \nonumber
& = \frac{a}{r}\ln{\frac{1}{1 - r}} + \sum_{n = 1}^{\infty}\frac{1}{n + 1}\left( \sum_{k = 1}^{n}|a_k|\right) r^n. \end{align} Using Lemma \ref{niru-vasu-P8-lem001} in (\ref{niru-vasu-P8-eq001}) we obtain the following estimation for the {C}es\'{a}ro operator:
\begin{align}\label{niru-vasu-P8-eq005} \mathcal{C}_{f}(r) & \leq \frac{a}{r}\ln{\frac{1}{1 - r}} + \frac{1 - a^2}{1 + \gamma}\sum_{n = 1}^{\infty}\frac{n}{n + 1} r^n\\ \nonumber &= \frac{a}{r}\ln{\frac{1}{1 - r}} + \frac{1 - a^2}{1 + \gamma} \bigg( \frac{1}{1 - r} - \frac{1}{r}\ln {\frac{1}{1 - r}} \bigg). \end{align} Let $$P_{\gamma, r}(a) = \frac{a}{r}\ln{\frac{1}{1 - r}} + \frac{1 - a^2}{1 + \gamma} \bigg( \frac{1}{1 - r} - \frac{1}{r}\ln {\frac{1}{1 - r}} \bigg).$$ Differentiating $P_{\gamma, r}$ twice with respect to $a$ shows that \begin{equation*} P''_{\gamma, r} (a)= \frac{-2}{1 + \gamma} \left( \frac{1}{1 - r} - \frac{1}{r}\ln {\frac{1}{1 - r}} \right) \leq 0 \end{equation*} for all $a\in [0, 1)$ and for all $r \in [0, 1)$. Therefore, $P'_{\gamma, r}$ is a decreasing function and hence we obtain \begin{equation}\label{niru-vasu-P8-eq005aa} P'_{\gamma, r} (a) \geq P'_{\gamma, r} (1)= \frac{1}{r(1 - r)(1 + \gamma)}\left(-2r + (3 + \gamma)(1 - r)\ln \frac{1}{1 - r}\right) \geq 0 \end{equation} for all $r \leq R_\gamma$. Thus, $P_{\gamma, r}(a)$ is increasing for $r \leq R_\gamma$ and for all $\gamma\in [0, 1)$ and hence
\begin{equation}\label{niru-vasu-P8-eq005a} P_{\gamma, r}(a) \leq P_{\gamma, r} (1) = \frac{1}{r}\ln{\frac{1}{1 - r}} \quad \mbox{for all}\quad r \leq R_\gamma. \end{equation} Therefore, the desired inequality (\ref{niru-vasu-P8-eq000}) follows from (\ref{niru-vasu-P8-eq005a}).\\[3mm]
Now we show that the radius $R_\gamma$ cannot be improved. In order to prove the sharpness of the result, we consider the function $G: \Omega_{\gamma} \rightarrow \mathbb{D} $ defined by $G(z) = (1 - \gamma) z + \gamma$ and $\psi: \mathbb{D}\rightarrow \mathbb{D}$ defined by $$ \psi(z) = \frac{a - z}{1 - a z} $$ for $a\in(0, 1)$. Then $f_\gamma = \psi \circ G$ maps $\Omega_\gamma$ univalently onto $\mathbb{D}$. A simple computation shows that \begin{equation*} f_\gamma (z) = \frac{a - \gamma -(1 - \gamma)z}{1 - a \gamma - a (1 - \gamma)z } = A_0 - \sum_{n = 1}^{\infty} A_n z^n,~~~ z\in \mathbb{D}, \end{equation*} where $a \in (0, 1)$, \begin{equation}\label{niru-vasu-P8-eq006} A_0 = \frac{a - \gamma}{1 - a \gamma}~~~ \mbox{ and}~~~ A_n = \frac{1 - a^2}{a (1 - a \gamma)} \bigg( \frac{a (1 - \gamma)}{1 - a \gamma}\bigg)^n. \end{equation} For a given $\gamma \in [0, 1)$, let $a > \gamma$. Then the Ces\'{a}ro operator on $f_\gamma$ shows that \begin{align}\label{niru-vasu-P8-eq010}
\mathcal{C}f_\gamma(r) & = \sum_{n = 0}^{\infty}\frac{1}{n + 1}\left(\sum_{k = 0}^{n} |A_k| \right) r^n\\\nonumber
& = \frac{A_0}{r}\ln\frac{1}{1 -r} + \sum_{n = 1}^{\infty} \frac{1}{n + 1} \left(\sum_{k = 1}^{n} |A_k| \right) r^n. \end{align} By substituting $A_0$ and $A_n$ for $n \geq 1$ in (\ref{niru-vasu-P8-eq010}), we obtain \begin{align}\label{niru-vasu-P8-eq015} \mathcal{C}f_\gamma(r) & = \frac{a - \gamma}{r(1 - a \gamma)}\ln \frac{1}{1 - r} + \sum_{n = 1}^{\infty} \frac{1}{n + 1} \left(\sum_{k = 1}^{n} \frac{1 - a^2}{a (1 - a \gamma)} \left(\frac{a(1 - \gamma)}{1 - a \gamma}\right)^k \right) r^n\\\nonumber & = \frac{a - \gamma}{r(1 - a \gamma)}\ln \frac{1}{1 - r} + \frac{(1 - a^2)(1 - \gamma)}{(1 - a \gamma)^2}\sum_{n = 1}^{\infty} \frac{1}{n + 1} \left(\sum_{k = 1}^{n} \left(\frac{a(1 - \gamma)}{1 - a \gamma}\right)^{k-1} \right) r^n\\\nonumber & = \frac{a - \gamma}{r(1 - a \gamma)}\ln \frac{1}{1 - r} + \frac{(1 + a )(1 - \gamma)}{(1 - a \gamma)}\sum_{n = 1}^{\infty} \frac{1}{n + 1} \left( 1 - \frac{a ^n (1 - \gamma)^n}{(1 - a \gamma)^n} \right)r^n. \end{align} Further simplification of (\ref{niru-vasu-P8-eq015}) shows that \begin{align*} \mathcal{C}f_\gamma(r) & = \frac{a - \gamma}{r(1 - a \gamma)}\ln \frac{1}{1 - r} + \frac{(1 + a )(1 - \gamma)}{r (1 - a \gamma)}\ln \frac{1}{1 - r} - \frac{1 + a}{a r}\ln \left(\frac{1}{1 - \frac{(1 - \gamma) a r}{1 - a \gamma} }\right)\\ & = \frac{1}{r}\ln\frac{1}{1 - r} + \frac{(1 - a)}{(1 - a \gamma)}\frac{2 r + (3 + \gamma)(1 - r)\ln (1 - r)}{r (1 - r)} + D_{a, \gamma}(r), \end{align*} where \begin{align*} D_{a, \gamma}(r) & = \frac{(3 - a)- \gamma(1 + a)}{1 - a \gamma} - 2 \frac{(1 - a)}{(1 - a \gamma)(1 - r)} - \frac{1 + a}{a r}\ln \left(\frac{1}{1 - \left(\frac{(1 - \gamma) a r}{1 - a \gamma} \right)}\right)\\ & = \sum_{n = 1}^{\infty}\left( \frac{(3 - a)- \gamma(1 + a)}{1 - a \gamma} - 2 \frac{(1 - a)}{(1 - a \gamma)} - \frac{a^n (1 + a )(1 - \gamma)^{n + 1}}{(1 - a \gamma)^{n + 1}}\right) r^n\\ & = O((1 - a)^2) ~~~ \mbox{as}~~~ a \rightarrow 1. 
\end{align*} From (\ref{niru-vasu-P8-eq005aa}), we obtain $\left(-2r + (3 + \gamma)(1 - r)\ln \frac{1}{1 - r}\right) \geq 0$ for all $r \leq R_\gamma$ and hence $$\frac{2 r + (3 + \gamma)(1 - r)\ln (1 - r)}{r (1 - r)} > 0 \quad \mbox{for} ~~~ r > R_\gamma.$$ These two facts show that the number $R_\gamma$ cannot be improved.
\end{proof}
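The closed-form summation used in the proof above, $\sum_{n\ge 1}\frac{n}{n+1}r^n = \frac{1}{1-r}-\frac{1}{r}\ln\frac{1}{1-r}$, can be checked numerically; this snippet is our own sketch (the truncation length is an arbitrary choice).

```python
# Numerical check (ours) of the series identity used in the proof above:
#   sum_{n>=1} n/(n+1) r^n = 1/(1-r) - (1/r) ln(1/(1-r)),  0 < r < 1.
import math

def series(r, terms=4000):
    return sum(n / (n + 1) * r**n for n in range(1, terms))

def closed_form(r):
    return 1 / (1 - r) - math.log(1 / (1 - r)) / r

for r in (0.1, 0.5, 0.9):
    print(abs(series(r) - closed_form(r)) < 1e-9)   # True for each r
```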
\begin{rem} Since for $\gamma = 0$, the domain $\Omega_\gamma$ reduces to the unit disk $\mathbb{D}$, Theorem \ref{niru-vasu-P8-theorem002} is a direct consequence of Theorem \ref{niru-vasu-P8-theorem005} when $\gamma = 0$. \end{rem}
In the next result we study the Bohr type inequality for the {B}ernardi integral operator for the class of analytic functions defined on $\Omega_\gamma$.
\begin{thm} For $0 \leq \gamma < 1$, let $f \in \mathcal{B}(\Omega_{\gamma})$ with $f(z) = \sum_{n = 0}^{\infty} a_n z^n$ in $\mathbb{D}$. Then for $\beta > 0$ \begin{equation*}
\sum_{n = 0}^{\infty}\frac{|a_n|}{n + \beta} r^n \leq \frac{1}{\beta} ~~~ \mbox{for}~~~ r \leq R_{\gamma, \beta} \end{equation*} where $R_{\gamma, \beta}$ is the positive root of $$ \frac{1}{\beta} = \frac{2 }{1 + \gamma} \sum_{n = 1}^{\infty} \frac{r^n}{n + \beta}. $$ The number $R_{\gamma, \beta}$ is the best possible. \end{thm}
\begin{proof}
Let $|a_0| = a < 1$. Then \begin{align}\label{niru-vasu-P8-eq020}
\sum_{n = 0}^{\infty}\frac{|a_n|}{n + \beta} r^n = \frac{a}{\beta} + \sum_{n = 1}^{\infty}\frac{|a_n|}{n + \beta} r^n. \end{align} In view of Lemma \ref{niru-vasu-P8-lem001} and (\ref{niru-vasu-P8-eq020}), we obtain \begin{align*}
\sum_{n = 0}^{\infty}\frac{|a_n|}{n + \beta} r^n \leq \frac{a}{\beta} + \frac{1 - a^2}{1 + \gamma}\sum_{n = 1}^{\infty}\frac{r^n}{n + \beta}. \end{align*} Let $$\Phi_{\gamma, \beta} (a) = \frac{a}{\beta} + \frac{1 - a^2}{1 + \gamma}\sum_{n = 1}^{\infty}\frac{r^n}{n + \beta}.$$ Differentiating $\Phi_{\gamma, \beta}$ twice with respect to $a$ shows that $$\Phi''_{\gamma, \beta} (a) = -\frac{2}{1 + \gamma}\sum_{n = 1}^{\infty}\frac{r^n}{n + \beta} \leq 0$$ for all $a \in [0, 1]$ and for all $r \in [0, 1)$. This implies that $\Phi'_{\gamma, \beta}$ is decreasing and \begin{equation}\label{niru-vasu-P8-eq021} \Phi'_{\gamma, \beta} (a) \geq \Phi'_{\gamma, \beta} (1) = \left(\frac{1}{\beta} - \frac{2}{1 + \gamma} \sum_{n = 1}^{\infty}\frac{r^n}{(n + \beta)}\right) \geq 0 \end{equation} for $r \leq R_{\gamma, \beta}$. Hence $\Phi_{\gamma, \beta} (a)$ is increasing for $r\leq R_{\gamma, \beta} $. Therefore for all $a \in [0, 1]$, $$ \Phi_{\gamma, \beta}(a) \leq \Phi_{\gamma, \beta}(1) = \frac{1}{\beta} \quad \mbox{for}~~~ r \leq R_{\gamma, \beta} $$ and hence \begin{equation*}
\sum_{n = 0}^{\infty}\frac{|a_n|}{n + \beta} r^n \leq \frac{1}{\beta} \quad \mbox{for}~~~ r \leq R_{\gamma, \beta}. \end{equation*} We now show that $R_{\gamma, \beta}$ cannot be improved. In order to do this, consider the function \begin{equation*} f_\gamma (z) = \frac{a - \gamma -(1 - \gamma)z}{1 - a \gamma - a (1 - \gamma)z } = A_0 - \sum_{n = 1}^{\infty} A_n z^n, \quad z\in \mathbb{D}, \end{equation*} where $a \in (0, 1)$, and $A_n (n \geq 0)$ are given by (\ref{niru-vasu-P8-eq006}). For a given $\gamma \in [0, 1)$, let $a > \gamma$. Then for $\gamma\in [0, 1)$ and $\beta \geq 1$, we have \begin{align}\label{niru-vasu-P8-eq025}
\sum_{n = 0}^{\infty}\frac{|A_n|}{n + \beta} r^n & = \frac{A_0}{\beta} + \sum_{n = 1}^{\infty}\frac{|A_n|}{n + \beta} r^n \\\nonumber & = \frac{a - \gamma}{(1 - a \gamma) \beta} + \sum_{n = 1}^{\infty}\frac{1 - a^2}{a (1 - a \gamma)} \left(\frac{a(1 - \gamma)}{1 - a \gamma}\right)^n \frac{r^n}{n + \beta}. \end{align} By a simple computation, from (\ref{niru-vasu-P8-eq025}), we obtain \begin{align*}
\sum_{n = 0}^{\infty}\frac{|A_n|}{n + \beta} r^n = \frac{1 }{\beta} - (1 - a) \left( \frac{1}{\beta} - \frac{2 }{1 + \gamma} \sum_{n = 1}^{\infty} \frac{r^n}{n + \beta}\right) + M_{a, \gamma, \beta}(r), \end{align*} where \begin{align*} M_{a, \gamma, \beta}(r) & = -\frac{1}{\beta} + \frac{a - \gamma}{(1 - a \gamma)\beta}\\ & + \frac{(1 - a^2)}{a(1 - a \gamma)} \sum_{n = 1}^{\infty}\left(\frac{a(1 - \gamma)}{1 - a \gamma}\right)^n \frac{r^n}{n + \beta} + (1 - a)\left(\frac{1}{\beta} - \frac{2 }{1 + \gamma} \sum_{n = 1}^{\infty} \frac{r^n}{n + \beta}\right)\\ & = (a - 1)\left( \frac{\gamma(a + 1)}{(1 - a \gamma)\beta} + \frac{2}{1 + \gamma}\sum_{n = 1}^{\infty} \frac{r^n}{n + \beta}\right) + \frac{(1 - a^2)}{a(1 - a \gamma)} \sum_{n = 1}^{\infty}\left(\frac{a(1 - \gamma)}{1 - a \gamma}\right)^n \frac{r^n}{n + \beta}. \end{align*} Letting $a \rightarrow 1$, we obtain $$ M_{a, \gamma, \beta}(r) = O ((1 - a)^2). $$ Further from (\ref{niru-vasu-P8-eq021}), we obtain $ \left(\frac{1}{\beta} - \frac{2}{1 + \gamma} \sum_{n = 1}^{\infty}\frac{r^n}{(n + \beta)}\right) \geq 0 $ for all $r \leq R_{\gamma, \beta}$. Therefore $$ \frac{1}{\beta} - \frac{2 }{1 + \gamma} \sum_{n = 1}^{\infty} \frac{r^n}{n + \beta} < 0 \quad \mbox{for }~~~ r > R_{\gamma, \beta}. $$ These two facts show that $R_{\gamma, \beta}$ cannot be improved.
\end{proof}
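As a concrete instance of the radius above (our own computation, not from the paper): for $\gamma=0$ and $\beta=1$ the identity $\sum_{n\ge 1}\frac{r^n}{n+1} = -\frac{\ln(1-r)}{r}-1$ turns the defining equation into $-\ln(1-r)=\frac{3}{2}r$, which a truncated-series bisection confirms.

```python
# Concrete instance (ours): for gamma = 0 and beta = 1 the defining equation
#   1/beta = (2/(1+gamma)) * sum_{n>=1} r^n/(n+beta)
# reduces, via sum_{n>=1} r^n/(n+1) = -ln(1-r)/r - 1, to -ln(1-r) = (3/2) r.
import math

def rhs(r, gamma=0.0, beta=1.0, terms=4000):
    return 2 / (1 + gamma) * sum(r**n / (n + beta) for n in range(1, terms))

lo, hi = 1e-9, 1 - 1e-9
for _ in range(100):
    mid = (lo + hi) / 2
    lo, hi = (mid, hi) if rhs(mid) < 1.0 else (lo, mid)   # rhs increases in r
root = (lo + hi) / 2
print(abs(-math.log(1 - root) - 1.5 * root) < 1e-6)       # True
```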
\end{document}
Find the remainder when $x^9 - x^6 + x^3 - 1$ is divided by $x^2 + x + 1.$
We can factor $x^9 - x^6 + x^3 - 1$ as
\[x^6 (x^3 - 1) + (x^3 - 1) = (x^6 + 1)(x^3 - 1) = (x^6 + 1)(x - 1)(x^2 + x + 1).\]Thus, $x^9 - x^6 + x^3 - 1$ is a multiple of $x^2 + x + 1,$ so the remainder is $\boxed{0}.$
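A quick way to verify this answer (our own check, not part of the solution): $x^2+x+1$ vanishes exactly at the primitive cube roots of unity, where $x^3=1$, so a remainder of degree at most $1$ that vanishes at two distinct points must be identically $0$.

```python
# Verification (ours): the roots of x^2 + x + 1 are the primitive cube roots of
# unity, where x^3 = 1 and hence p(x) = x^9 - x^6 + x^3 - 1 = 1 - 1 + 1 - 1 = 0.
import cmath

w = cmath.exp(2j * cmath.pi / 3)       # primitive cube root of unity
p = lambda x: x**9 - x**6 + x**3 - 1
print(abs(w**2 + w + 1) < 1e-12)       # True: w is a root of the divisor
print(abs(p(w)) < 1e-12)               # True: and of the dividend
```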
June 2012, 32(6): 2253-2270. doi: 10.3934/dcds.2012.32.2253
Symmetrical symplectic capacity with applications
Chungen Liu and Qi Wang
School of Mathematics and LPMC, Nankai University, Tianjin 300071, China
Received January 2011; Revised October 2011; Published February 2012
In this paper, we first introduce the concept of symmetrical symplectic capacity for symmetrical symplectic manifolds, and by using this symmetrical symplectic capacity theory we prove that there exists at least one symmetric closed characteristic (brake orbits and $S$-invariant brake orbits are two examples) on any prescribed symmetric energy surface which has a compact neighborhood with finite symmetrical symplectic capacity.
Keywords: Symmetrical symplectic manifolds, brake orbits, symmetrical symplectic capacity.
Mathematics Subject Classification: 53D40, 37J4.
Citation: Chungen Liu, Qi Wang. Symmetrical symplectic capacity with applications. Discrete & Continuous Dynamical Systems - A, 2012, 32 (6) : 2253-2270. doi: 10.3934/dcds.2012.32.2253
Juan Carlos Marrero, David Martín de Diego, Ari Stern. Symplectic groupoids and discrete constrained Lagrangian mechanics. Discrete & Continuous Dynamical Systems - A, 2015, 35 (1) : 367-397. doi: 10.3934/dcds.2015.35.367
Alexandra Monzner, Nicolas Vichery, Frol Zapolsky. Partial quasimorphisms and quasistates on cotangent bundles, and symplectic homogenization. Journal of Modern Dynamics, 2012, 6 (2) : 205-249. doi: 10.3934/jmd.2012.6.205
Álvaro Pelayo, San Vű Ngọc. First steps in symplectic and spectral theory of integrable systems. Discrete & Continuous Dynamical Systems - A, 2012, 32 (10) : 3325-3377. doi: 10.3934/dcds.2012.32.3325
Fasma Diele, Carmela Marangi. Positive symplectic integrators for predator-prey dynamics. Discrete & Continuous Dynamical Systems - B, 2018, 23 (7) : 2661-2678. doi: 10.3934/dcdsb.2017185
Chungen Liu Qi Wang | CommonCrawl |
Multi-class segmentation of neuronal structures in electron microscopy images
Methodology article
Kendrick Cetina1,
José M. Buenaposada2 &
Luis Baumela1
BMC Bioinformatics volume 19, Article number: 298 (2018)
Serial block face scanning electron microscopy (SBFEM) is becoming a popular technology in neuroscience. In recent years we have seen an increasing number of works addressing the problem of segmenting cellular structures in SBFEM images of brain tissue. The vast majority of them are designed to segment one specific structure, typically membranes, synapses and mitochondria. Our hypothesis is that the performance of these algorithms can be improved by concurrently segmenting more than one structure using image descriptions obtained at different scales.
We consider the simultaneous segmentation of two structures, namely, synapses with mitochondria, and mitochondria with membranes. To this end we select three image stacks encompassing different SBFEM acquisition technologies and image resolutions. We introduce both a new Boosting algorithm to perform feature scale selection and the Jaccard Curve as a tool to compare several segmentation results. We then experimentally study the gains in performance obtained when simultaneously segmenting two structures with properly selected image descriptor scales. The results show that by doing so we achieve significant gains in segmentation accuracy when compared to the best results in the literature.
Simultaneously segmenting several neuronal structures described at different scales provides voxel classification algorithms with highly discriminating features that significantly improve segmentation accuracy.
Understanding the structure, connectivity and functionality of the brain is one of the challenges faced by science in the 21st century. This grand challenge is supported by the development of multiple and complementary brain imaging modalities such as structural and functional imaging [1] and light microscopy [2, 3]. At the finest level, recent advances in SBFEM also support this long term goal [4–6]. They have made it possible to automatically acquire long sequences of high resolution images of the brain at the nanometer scale. However, the automated interpretation of these images is still an open challenge, because of their inherent intricacy and their huge size (see Fig. 1a).
Hippocampus rat neural tissue SBFEM image. a image stack; b 3D reconstruction of mitochondria in the stack; c ground truth labels of mitochondria (blue) and synapses (red) in the first slice. Membrane labels (green) have been included only for illustrative purposes
In this paper we consider the problem of segmenting mitochondria and synapses, which, along with membranes, are some of the most prominent neuronal structures (see Fig. 1b and c). These structures are of interest to neuroscience. The identification and quantification of the distribution of synapses provides fundamental information for the study of the brain [6]. Mitochondria, on the other hand, play a key role in cell metabolism, physiology and pathologies [7]. The accurate segmentation and reconstruction of neuron membranes is a prerequisite for addressing the neural circuit reconstruction problem [8, 9].
Given the complexity of the SBFEM images shown in Fig. 1, a fundamental step for a successful segmentation is a good feature representation. In recent years a broad range of image description features have been introduced in the literature [10]. SBFEM specialized features like Radon-like [11] and Ray features [12] along with various standard computer vision ones such as Histograms of Oriented Gradients, Local Binary Patterns and different banks of linear filters are the most usual representations [10, 13–17]. To further exploit contextual information the result of extracting these features at different scales is usually pooled in neighborhoods around the described voxels, as in [18] or in the integral channel features [19]. Tu and colleagues [20] use a variant of the integral channel features to segment brain 3-D magnetic resonance images. A related approach, termed context cues, was also used by Becker [14] and Lucchi [16] for segmenting synapses and mitochondria respectively.
Typical feature vectors have thousands of variables. For labeling these structures ensemble classification methods, in particular Boosting and random forests, are the most popular in EM segmentation approaches, since they can select the best subset of features on-the-fly, while training. Random Forest and Boosting classifiers like AdaBoost and GentleBoost have been used for segmenting synapses [13, 14, 21], membranes [22] and mitochondria [12, 16, 23].
Although there are some general software tools for segmenting neuronal structures [13, 24], the best results for synapses [14, 21, 25] and mitochondria [15, 16, 26, 27] have been achieved by algorithms specifically designed for each of them. In this paper we study whether we can improve the performance of these approaches by simultaneously segmenting more than one structure. In particular, we will concurrently segment synapses with mitochondria and mitochondria with membranes. Since these structures arise in SBFEM stacks with different sizes, we introduce a feature selection algorithm to determine the best scales to describe them. We compare our segmentations with those that target a single structure. To this end we select three image stacks and the segmentation algorithms that have reported the best performance in each of them. To make a fair evaluation we introduce a novel quality measure tool, the Jaccard Curve, enabling the comparison of several segmentation approaches independently of the selected operational point of the classifier.
Methods
Image features
We aim to use contextual information to label each voxel. To this end we use integral channel features based on extracting the sums over rectangular regions of a set of feature channels. We obtain these channels by computing a Gaussian Rotation Invariant MultiScale (GRIMS) descriptor and an elliptical descriptor at different scales. We choose GRIMS because they are an excellent descriptor for segmenting mitochondria and synapses [10, 28]. Since vesicles are a good indicator of the existence of synapses in the vicinity (see the raw image in Fig. 2), we also include an elliptical descriptor that provides contextual information related to the existence of vesicles.
Raw image and four image description features used in our methodology. In the raw image we highlight with blue and red color a synapse and three vesicles respectively. We can appreciate the elongated shape of the synapse and the small circular shape of the vesicles, next to the synapse
GRIMS descriptors apply to each image in the stack a set of linear Gaussian filters at different scales to compute zero, first and second order derivatives, { sijk:i+j+k≤2}, where
$$s_{ijk}=\sigma^{i+j+k}\,G_{\sigma}*\frac{\partial^{i+j+k}}{\partial x^{i}\,\partial y^{j}\,\partial z^{k}}, $$
Gσ is a Gaussian filter with standard deviation σ and ∗ is the convolution operator. We represent the result of applying these operators to the image with sijk, where the sum of the subscript indices denotes the order of the derivatives. The rotation invariant feature vector at scale σ is given by \((s_{000}, \sqrt {s^{2}_{100} + s^{2}_{010} + s^{2}_{001}}, \lambda _{1}, \lambda _{2}, \lambda _{3})\), where the first component is the smoothed image, the second one is the magnitude of the gradient and λi,i=1…3 are the eigenvalues of the Hessian matrix. The complete feature vector is the concatenation of all partial feature vectors at the different scales σj,j=1…n. Hence, the GRIMS vector has dimension 5n, where n is the number of scales used to describe each voxel (see Fig. 2).
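The per-scale computation can be sketched with SciPy's Gaussian derivative filters. This is an illustrative implementation, not the authors' code; function and variable names are our own:

```python
import numpy as np
from scipy import ndimage

def grims_features(volume, sigma):
    """Rotation-invariant GRIMS features of a 3-D volume at one scale:
    smoothed image, gradient magnitude and the three eigenvalues of the
    Hessian, with each derivative scale-normalized by sigma**order."""
    s000 = ndimage.gaussian_filter(volume, sigma)
    # scale-normalized first-order Gaussian derivatives along z, y, x
    gz, gy, gx = (sigma * ndimage.gaussian_filter(volume, sigma, order=o)
                  for o in [(1, 0, 0), (0, 1, 0), (0, 0, 1)])
    grad_mag = np.sqrt(gx ** 2 + gy ** 2 + gz ** 2)
    # scale-normalized second-order derivatives -> per-voxel Hessian
    d = {o: sigma ** 2 * ndimage.gaussian_filter(volume, sigma, order=o)
         for o in [(2, 0, 0), (1, 1, 0), (1, 0, 1),
                   (0, 2, 0), (0, 1, 1), (0, 0, 2)]}
    H = np.empty(volume.shape + (3, 3))
    H[..., 0, 0] = d[(2, 0, 0)]
    H[..., 1, 1] = d[(0, 2, 0)]
    H[..., 2, 2] = d[(0, 0, 2)]
    H[..., 0, 1] = H[..., 1, 0] = d[(1, 1, 0)]
    H[..., 0, 2] = H[..., 2, 0] = d[(1, 0, 1)]
    H[..., 1, 2] = H[..., 2, 1] = d[(0, 1, 1)]
    lam = np.linalg.eigvalsh(H)  # eigenvalues in ascending order
    return np.stack([s000, grad_mag,
                     lam[..., 0], lam[..., 1], lam[..., 2]], axis=-1)
```

The full descriptor is then the concatenation of grims_features(volume, sigma_j) over the n selected scales.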
The elliptical descriptor is the result of filtering the image with an elliptic torus-like kernel. The shape of this kernel is controlled by the radii r1, r2 and thickness w parameters. As shown in Fig. 3, the result of convolving this kernel (left image) with a vesicle-like structure (central image) returns low values for the inner parts of the vesicle.
Elliptical descriptor. (left) image showing the kernel as grey values, with both radii(r1,r2) and the thickness (w) parameters over-imposed; (center) image of a vesicle-like structure; (right) response obtained when convolving the vesicle image with the elliptical descriptor kernel
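One simple way to realize such a kernel is a binary ring of thickness w around the ellipse with semi-axes r1 and r2, normalized to unit sum. This is only a sketch; the paper does not specify the exact kernel profile, so the ring construction below is our assumption:

```python
import numpy as np
from scipy import ndimage

def elliptical_kernel(r1, r2, w):
    """Elliptic torus-like kernel: constant on a ring of thickness ~w
    around the ellipse with semi-axes r1 and r2, zero elsewhere."""
    R = int(max(r1, r2) + w) + 1
    y, x = np.mgrid[-R:R + 1, -R:R + 1]
    rho = np.sqrt((x / r1) ** 2 + (y / r2) ** 2)  # equals 1 on the ellipse
    ring = (np.abs(rho - 1.0) * min(r1, r2) <= w / 2.0).astype(float)
    return ring / ring.sum()

def elliptical_descriptor(image, r1, r2, w):
    """Response of one stack slice to the elliptical kernel."""
    return ndimage.convolve(image, elliptical_kernel(r1, r2, w))
```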
Scales selection
Properly addressing the multi-scale nature of the structures in the SBFEM images is an important issue to achieve top segmentation performance. Synapses appear in our images with various sizes and shapes. Similarly, mitochondria show up as roughly elliptical structures with very different sizes (see Figs. 1 and 9). The information provided by the features defined in the previous section depends on the size of the image structures and the scales of the kernels used for filtering. We set the parameters of the elliptical descriptor as the average radii and width of a representative set of vesicles in the stack (see Table 1). However, for a given image stack, it is not clear which set of GRIMS scales is the most discriminative. An important step in our methodology is to establish them. The standard approach would optimize the segmentation performance using cross-validation over the set of scales. However, in our problem this is computationally prohibitive. To this end we introduce a new scale selection algorithm based on a generalization of the well-known AdaBoost-based greedy feature selection scheme [29] to the multi-class case. For this purpose we adapt PIBoost [30], a recently introduced multi-class boosting algorithm with binary weak-learners (see Algorithm 1).
Table 1 Vesicle descriptor parameters and GRIMS scales selected for each data set
Each PIBoost iteration learns a group of weak-learners that partially solve a multi-class classification problem. Each weak-learner separates a group of classes from the rest, learned as a binary problem in which one of the groups is treated as the positive class and the rest as negative. In this context a separator is a classifier formed by combining the minimal set of weak-learners that solve a multi-class problem (see Fig. 4). Each separator associates weights to training samples [30]. These weights focus the learning process on a different set of samples at each iteration thereby encouraging each weak-learner to be independent from the rest.
PIBoost separator in a three class problem. It is composed of three weak-learners (S1,S2,S3) separating each class, e.g. C1:mitochondrion, C2:synapse, C3:background, from the rest
For feature selection we modify this scheme, producing a new algorithm (see Algorithm 1). The feature selection algorithm iterates over all GRIMS scales, training each separator with one scale using the weighted training samples. In our case, since we consider the simultaneous segmentation of two structures, we have two positive classes and one negative class; hence, each separator has three associated weak-learners (see Fig. 4). We classify the training data with each separator (GRIMS scale) and select the one with the smallest weighted error. Finally, with the error of the selected feature we update the weights of the training data according to the PIBoost scheme [30], so that the next selected scales are independent from those selected so far. Algorithm 1 shows this process, where the actual expression of the functions train Sk(Fj), εj(Fj,Sk,k=1…3,Wi) and W(εmin) may be found in [30], Section 4. Table 1 shows the chosen scales for each data set. They were selected among 50 equally distributed values between 1 and 50. These are the scales used in the experiments in "Results and discussion" section. So, our feature vectors have 20 components (4 selected scales, times 5 features per scale).
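The greedy loop of Algorithm 1 can be summarized as follows. This sketch replaces the full PIBoost separator update with a single AdaBoost-style binary weight update, and errors_fn is a hypothetical callback that trains a separator on the features of one scale under the current sample weights and returns its weighted error and predictions:

```python
import numpy as np

def greedy_scale_selection(errors_fn, scales, y, n_select):
    """Select n_select scales, one per round, re-weighting the training
    samples after each round so that the next scale complements the
    scales chosen so far (simplified sketch of Algorithm 1)."""
    w = np.ones(len(y)) / len(y)
    selected = []
    for _ in range(n_select):
        best = min((s for s in scales if s not in selected),
                   key=lambda s: errors_fn(s, w)[0])
        err, pred = errors_fn(best, w)
        err = float(np.clip(err, 1e-9, 1 - 1e-9))
        beta = 0.5 * np.log((1 - err) / err)
        w *= np.exp(beta * (pred != y))   # up-weight misclassified samples
        w /= w.sum()
        selected.append(best)
    return selected
```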
Once the best features and scales have been selected, we aggregate local evidence by computing the integral channel features on them. In our approach we use cubic regions, as shown in Fig. 5b.
Feature extraction process. We apply a set of filters at scales σ1,…,σn to each stack slice. a two filters for one slice, s000 left, λ3 right; b a feature of voxel Vi is the sum of the values of one box (in blue). The feature vector of Vi is the concatenation of several hundreds of such features
So, the feature extraction process is as follows. For each voxel in the stack we obtain the channels by convolving it with the GRIMS and elliptical filters in a set of selected scales, σ1,…,σn (see Fig. 5a). A feature associated to one voxel is the sum of the filter responses in a random neighboring cube in a randomly selected channel (see Fig. 5b). In our approach we extract 1200 features for each voxel in the stack.
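Each pooled feature is a box sum over a channel, which can be computed in constant time from a 3-D summed-volume table. A minimal sketch (names are ours):

```python
import numpy as np

def integral_volume(channel):
    """3-D summed-volume table, zero-padded so that index 0 means an
    empty prefix: iv[z, y, x] = channel[:z, :y, :x].sum()."""
    iv = channel.cumsum(0).cumsum(1).cumsum(2)
    return np.pad(iv, ((1, 0), (1, 0), (1, 0)))

def box_sum(iv, z0, y0, x0, z1, y1, x1):
    """Channel sum over the half-open cube [z0,z1) x [y0,y1) x [x0,x1)
    via 3-D inclusion-exclusion (8 lookups)."""
    return (iv[z1, y1, x1]
            - iv[z0, y1, x1] - iv[z1, y0, x1] - iv[z1, y1, x0]
            + iv[z0, y0, x1] + iv[z0, y1, x0] + iv[z1, y0, x0]
            - iv[z0, y0, x0])
```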
Multi-class boosting with integral channel features
We also adopt a Boosting scheme to label each voxel. Our classifier, Partially Informative Boosting (PIBoost) [30], is a multi-class generalization of AdaBoost with binary weak-learners.
At the m-th iteration and for each separator S, PIBoost builds a stage-wise additive model
$$\mathbf{f}_{m}(\mathbf{x})=\mathbf{f}_{m-1}(\mathbf{x})+\beta_{m}^{S}\mathbf{g}^{S}_{m}(\mathbf{x}), $$
where \(\mathbf {f}_{m}(\mathbf {x})\in \mathbb {R}^{c}\) is the strong learner and \(\mathbf {g}^{S}_{m}(\mathbf {x})\) the trained weak-learner at iteration m for separator S, \(\beta ^{S}_{m}\) is a constant related to the accuracy of the weak-learner, and c is the number of classes in the problem. Each component in the vector f(x) represents to what extent x belongs to each class. f(x) satisfies the sum-to-zero condition, f(x)⊤1=0, that guarantees that each vector takes one and only one value from the set of labels [30]. Finally, sample x is assigned to the class αi associated to the maximum component of f(x)
$$\mathbf{x}\in \alpha_{i} \Leftrightarrow i=\arg\max_{j}\mathbf{f}(\mathbf{x})[j], $$
where f(x)[j] denotes the j-th component of vector f(x).
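The additive model and the argmax decision rule above can be sketched as follows. This is illustrative only; each weak-learner g is assumed to return per-sample score vectors satisfying the sum-to-zero condition:

```python
import numpy as np

def piboost_predict(X, separators, labels):
    """Evaluate the strong learner f(x) = sum_m beta_m * g_m(x) and
    assign each sample to the class with the largest component."""
    F = np.zeros((len(X), len(labels)))
    for beta, g in separators:
        F += beta * g(X)          # g(X) has shape (n_samples, n_classes)
    return [labels[i] for i in F.argmax(axis=1)]
```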
We also use sub-sampling to optimize the classifier performance [31]. To this end, we train each weak-learner with a fraction of all training data. Sub-sampling reduces training time and helps the classifier generalize. For the PIBoost experiments in "Results and discussion" section we train each weak-learner with 10% of the data, randomly sampled according to their weights. This gives priority to instances that are hard to classify.
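The weighted 10% sub-sampling can be sketched as follows (function and parameter names are ours):

```python
import numpy as np

def weighted_subsample(n, weights, frac=0.1, seed=None):
    """Draw frac*n distinct sample indices with probability proportional
    to the boosting weights, prioritizing hard-to-classify instances."""
    rng = np.random.default_rng(seed)
    p = np.asarray(weights, dtype=float)
    p = p / p.sum()
    k = max(1, int(frac * n))
    return rng.choice(n, size=k, replace=False, p=p)
```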
Label regularization
The output of this classification process is noisy. We filter this noise by optimizing the energy in a Markov Random Field (MRF) with pairwise terms using the graph cut algorithm [32]. Since the standard graph cut approach is only valid for binary problems, we solve our three-class regularization problem in two ways: first, by setting up two one-positive-class-against-the-rest binary problems; second, by using the αβ-swap multi-class extension to graph cuts [33].
We define the weights of edges in the graph as follows. Let us denote with αy,y∈{mitochondrion, synapse,membrane,background} each of the class labels for a voxel. The unary term of voxel x for class αj, u(x,αj) is given by the minus log of its posterior probability, u(x,αj)=− logP(αj∣f(x)). Using the multinomial logistic expression we get
$$ {\begin{aligned} -\log P(\alpha_{j} \mid \mathbf{f}(\mathbf{x})) &= -\log\frac{e^{\mathbf{f}(\mathbf{x})[j]}}{\sum_{i} e^{\mathbf{f}(\mathbf{x})[i]}}\\ &= \log \sum_{i} e^{\mathbf{f}(\mathbf{x})[i]} - \mathbf{f}(\mathbf{x})[j]. \end{aligned}} $$
Since \(\log \sum _{i} e^{\mathbf {f}(\mathbf {x})[i]}\approx \max \{\mathbf {f}(\mathbf {x})\}\), then, the unary term weights are given by
$$ u(\mathbf{x},\alpha_{j})= \max\{\mathbf{f}(\mathbf{x})\}-\mathbf{f}(\mathbf{x})[j]. $$
For the pair-wise terms we train a new classifier that learns the probability that a voxel belongs to a border. Here a border is a thin strip around the edge of mitochondria and synapses. This is done by setting up a PIBoost-based classifier with only two classes αy,y∈{border,no_border}. The weight of the edge connecting neighboring voxels x and y, p(x,y), is given by
$$ p(\mathbf{x},\mathbf{y})=-\log P(\alpha_{border}|\mathbf{f}(\mathbf{x})) -\log P(\alpha_{border}|\mathbf{f}(\mathbf{y})). $$
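The unary and pairwise edge weights defined above reduce to a few lines; this is a sketch of the weight computation only, with the graph construction itself delegated to a graph-cut library:

```python
import numpy as np

def unary_weights(F):
    """u(x, alpha_j) = max f(x) - f(x)[j] for score matrix F of shape
    (n_voxels, n_classes); each row is 0 at its winning class."""
    return F.max(axis=1, keepdims=True) - F

def pairwise_weight(p_border_x, p_border_y, eps=1e-12):
    """p(x, y) = -log P(border|x) - log P(border|y); eps avoids log 0."""
    return -np.log(p_border_x + eps) - np.log(p_border_y + eps)
```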
Results and discussion
Here we describe the experiments performed to evaluate the image segmentation method described in the previous section.
Quality measure
As a measure of segmentation quality we use the Jaccard similarity coefficient between the ground truth and the result provided by the evaluated algorithm. It is a widely used image segmentation quality index both in the computer vision and bio-medical literature [15, 27]. It is defined as the area of the intersection divided by the area of the union of the segmentations (see Fig. 6). In terms of classification results it can be expressed as
$$ JAC=\frac{TP}{TP+FP+FN}, $$
where TP stands for true positive, FP false positive and FN false negative. It represents a binary non-symmetric measure of coincidence of two segmentations. It takes values between 0 (no coincidence) and 1 (total coincidence). In our results Jaccard indices are computed from each positive class (mitochondria, synapses, membranes) versus the rest. Although this is the most usual way to show results, other works compute the average Jaccard index of positive and negative classes [27].
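For binary masks the index is a one-liner; note that TP + FP + FN is exactly the size of the union of the two segmentations:

```python
import numpy as np

def jaccard(pred, gt):
    """Jaccard index TP / (TP + FP + FN) between two binary masks."""
    pred, gt = np.asarray(pred, bool), np.asarray(gt, bool)
    union = np.logical_or(pred, gt).sum()
    if union == 0:                 # both masks empty: perfect agreement
        return 1.0
    return np.logical_and(pred, gt).sum() / union
```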
Jaccard similarity coefficient. (left) in blue the ground truth segmentation of a mitochondrion, in red the result obtained with an automated segmentation algorithm; TP is the intersection of red and blue regions, i.e. the correctly segmented piece of mitochondrion; FN is the only blue area, i.e. the part of mitochondrion segmented as background; FP is the only red area, i.e. the background segmented as mitocondrion; TN is the rest of the image, i.e. the correctly segmented background; (right) representation of the Jaccard coefficient as the relation between the purple and green areas
In binary classification problems a threshold value controls how posterior probabilities are converted into class labels. To compare the performance of two such classifiers independently of the threshold the Machine Learning community has long agreed on the use of Precision Recall (PR) or Receiver Operator Characteristic (ROC) curves instead of accuracy results [34]. Similarly, in image segmentation, simply comparing a Jaccard index may be inaccurate, since, for example, the same classification algorithm with a different classification threshold would exhibit different Jaccards.
Here we introduce the Jaccard Curve (JCC) as a means of comparing the performance of two segmentation algorithms independently of their classification threshold. In the horizontal axis of the JCC we represent the proportion of pixels below the positive class score threshold, i.e. the percentage of pixels in the image labeled as background. In the vertical one we plot the Jaccard of the segmentation obtained when labeling in the positive class all voxels with a score higher or equal to the threshold (see Fig. 7). We plot the JCC by sorting all voxels according to their score and evaluating the Jaccard of the segmentations at different thresholds. The higher the JCC curve, the better the segmentation.
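The curve can be computed by sorting the voxels by score and sweeping the threshold; a sketch with our own naming:

```python
import numpy as np

def jaccard_curve(scores, gt, n_points=100):
    """x: fraction of voxels labeled background (score below threshold);
    y: Jaccard of the remaining positive segmentation vs. ground truth."""
    order = np.argsort(scores)            # ascending: background first
    gt_sorted = np.asarray(gt, bool)[order]
    n, n_pos = len(scores), gt_sorted.sum()
    xs, ys = [], []
    for k in np.linspace(0, n, n_points, dtype=int):
        tp = gt_sorted[k:].sum()          # positives kept above threshold
        fp = (n - k) - tp
        fn = n_pos - tp
        denom = tp + fp + fn
        xs.append(k / n)
        ys.append(1.0 if denom == 0 else tp / denom)
    return np.array(xs), np.array(ys)
```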
Example of a Jaccard curve obtained with the results of the segmentation of two synapses. We selected a hard-to-segment patch to see the segmentation improvement with different thresholds. In the vertical axis we represent the Jaccard (JAC), whereas in the horizontal the percentage of pixels segmented as background. We can see how the higher the threshold (θ) the less elements are segmented as synapse
The evaluation of membrane segmentation using the Jaccard index has been criticized because it is commonly believed that small deviations in the detected membrane locations are acceptable, which, however, causes large errors in the estimated Jaccard index. Alternative, more robust metrics such as the Rand F-score (\({\mathcal {F}}_{r}\)) and the Information Theoretic F-score (\({\mathcal {F}}_{it}\)) have recently been proposed [35]. For membrane segmentation we will also use these metrics.
In our experiments we have used three serial section electron microscopy data sets comprising different labels, SBFEM acquisition technologies and levels of anisotropy (see Table 2). The first two stacks, Hippocampus and Somatosensory cortex, were acquired with FIB-SEM microscopes. They have synapse and mitochondria labels manually annotated by expert neuroanatomists. The Hippocampus stack has perfectly isotropic voxels with very high resolution. The rat Somatosensory Cortex one has a coarser resolution with slightly an-isotropic voxels. Finally, the Cerebellum stack was acquired with a SBF-SEM microscope. It has mitochondria and membrane labels with the largest anisotropy factor [28].
For our analysis we select the algorithms reporting the best results for each of the selected stacks. We compare our algorithm with the AdaBoost-based approach of Lucchi et al. for segmenting mitochondria [36], and with that of Becker et al. for segmenting synapses [14]. We also compare our algorithm with the Bayesian approach of Marquez et al. [28], which segments both structures. To this end, we use the code provided by the authors. For the experiments with AdaBoost we trained the algorithm with 1200 decision stumps based on context cues [14, 16]. For the Bayesian approach we trained a classifier with Gaussian class-conditional distributions and GRIMS features as described in [28]. Finally, for PIBoost we conducted 50 iterations training 150 decision tree weak-learners. The inputs to this classifier are pooled features in cubes of size 5×5×5 voxels computed on the channels extracted from GRIMS and elliptical descriptors on the set of selected scales, as described in "Methods" section. In all our experiments we used the first block of consecutive slices of each stack for training, and the rest for testing (see Table 2). Since the number of voxels from the background class is much larger than that of the two positive classes, when training PIBoost we randomly discard half of the background voxels.
In Fig. 8 we show the JCC curves resulting from the segmentation of mitochondria, synapses and membranes in each of the previously described image stacks. Traditionally, segmentation results have been compared by choosing an appropriate classification threshold for the classifier and computing the Jaccard for the result. This is equivalent to selecting an operation point in the JCC. This operation point for Boosting algorithms is given by a sgn() function, i.e. a zero classification threshold [31], whereas in the Bayesian classifier case it is a Maximum a Posteriori (MAP) rule [31]. In the first two columns of Table 3 we give the Jaccard index resulting from segmenting the image at this operation point. Moreover, in each curve in Fig. 8 we show with a red dot the operation point for each classifier. In some circumstances, such as for example when classes are very unbalanced, the zero threshold of Boosting algorithms may be fine-tuned [29]. This threshold is an important parameter for reproducibility and should only be estimated on a separate validation set, never in the test set. In our analysis we do not adjust it since the JCC already provides information for all thresholds.
Mitochondria, synapses and membranes JCCs for the Hippocampus (a), Somatosensory (b) and Cerebellum (c) SBFEM image stacks. In each curve we show with a red dot the zero threshold operating point for Boosting classifiers and MAP point for the Bayesian one. JCCs let us compare the segmentation performance regardless of the operating point. The higher the curve the better the segmentation algorithm
Segmentation results for each stack and algorithm after regularization. Segmentation differences among algorithms and image stacks are best appreciated by zooming into the electronic version of the paper
Table 3 Quantitative segmentation results for mitochondria, synapses and membrane on the Hippocampus, Somatosensory Cortex and Cerebellum stacks evaluated with the Jaccard index
From the analysis of the JCC curves in Fig. 8 we can see that, in general, the approach presented in this paper, based on the PIBoost classification algorithm and a set of pooled GRIMS features, achieves equal or superior performance to all other approaches in all stacks and structures.
In the segmentation of both mitochondria and synapses, context information plays a key role. For this reason the AdaBoost and PIBoost approaches, both based on pooled channel features, achieve the best performance on both structures in the perfectly isotropic Hippocampus stack (see Fig. 8a and Table 3). The Somatosensory and Cerebellum stacks are increasingly an-isotropic. In this case most of the close context information across slices is lost and pooled channel features become less informative. Hence, segmentation performance degrades, especially for mitochondria. However, since GRIMS channels acquired at different scales also provide some local context information, the segmentation algorithm based on PIBoost degrades to a lesser extent (see Table 3).
Finally, the segmentation of membranes on the most an-isotropic Cerebellum stack (see Fig. 8c) does not depend on context but on local appearance. This is due to the fact that membranes are distributed all over the stack and have very different context. In this case, the algorithms based on GRIMS, PIBoost and Bayesian approaches, provide the best performance in the classification stage. We have also evaluated membrane segmentation results using the Rand F-score(\({\mathcal {F}}_{r}\)) and the Information Theoretic F-score(\({\mathcal {F}}_{it}\)) [35] (see Table 4). Here again the approach based on combining the simultaneous segmentation of the two possible classes, PIBoost, outperforms the rest.
Table 4 Quantitative segmentation results for membrane Cerebellum stack evaluated with the Rand F-score(\({\mathcal {F}}_{r}\)) and Information Theoretic F-score(\({\mathcal {F}}_{it}\)) metrics
After classifying each voxel we regularize the resulting labels with two standard graph-cut-based algorithms. This regularization usually boosts the performance and visually improves the results for large and regular regions such as mitochondria (see Fig. 9). However, with thin and elongated structures like synapses and membranes, graph-cut regularization can be detrimental. This may be appreciated in the regularized results for membranes in Table 3. Some approaches use a regularization scheme that has been specifically conceived for the segmentation problem addressed. This is the case, for example, of the regularization approach used for segmenting mitochondria in [16]. In the case of multi-class classifiers like PIBoost and Bayesian, our proposed regularization scheme uses both the multi-class αβ-swap algorithm and, by posing it as two bi-class problems, the two-class graph-cut. The former is similar to the regularization used in [28]. For the two-class AdaBoost algorithm we use a graph-cut-based regularizer. Analyzing the segmentation results before and after regularization lets us make a fair comparison with [14], which uses no regularizer. However, as discussed above, for mitochondria segmentation the approach in [16] uses a different type of regularizer. With this specially conceived regularization scheme [16] achieves on the Hippocampus stack a Jaccard of 0.74, slightly better than the result with our standard graph-cut-based scheme, but still behind the 0.76 achieved with PIBoost using αβ-swap (see first row in Table 3).
In the next experiment we further analyze the reason why our algorithm achieves a good segmentation accuracy. To this end we select the Hippocampus stack, for which the AdaBoost-based approach of Lucchi et al. [16] is the state-of-the-art for mitochondria segmentation. The first row in Table 5 shows the results for the AdaBoost classifier with the integral channel features on the set of channels described in [16] with a zero-threshold classification. Only by changing the channels to GRIMS and the elliptical descriptor, we get a small improvement for mitochondria and a large improvement for synapses. This is due to the fact that the elliptical descriptor and GRIMS features provide multi-scale information fundamental for the estimation of synapses, specifically those near the borders of the stack. By changing the two-class classifier for a multi-class Boosting approach we get a new improvement in performance for both structures, as shown in the third row of Table 5. The final boost in performance for our approach comes from selecting the best GRIMS scales, as shown in the last row.
Table 5 Hippocampus segmentation results (Jaccard) for different classifier, features and GRIMS scales
The results for synapse segmentation in the Hippocampus stack in Table 5 are worse than those in [14]. Here we analyze this discrepancy. The poor result in our experiment is caused by two factors. First and foremost, the fact that the zero-threshold operation point is particularly harmful for this problem (see Fig. 8a). The reason for this is that AdaBoost performs poorly on highly imbalanced classification problems, such as synapse segmentation. PIBoost, on the other hand, achieves better performance because it was conceived to address the imbalanced situations arising in multi-class classification [30]. Second, the information provided by the context cues features degrades in those voxels near the borders of the stack, since many of the cubes straddle the stack limits (Fig. 5b graphically depicts this problem). This is in part alleviated by the local information provided by the GRIMS. To evaluate the impact of these issues on the results we exclude the 10-voxel-thick outer rind of the stack from the Jaccard index computation. In this case the Jaccard increases to 0.37. If we further overfit the test data and select the best operation point, the performance goes up to 0.54, comparable with that in [14].
Concerning the computational cost at run-time, the multi-class Boosting approach is computationally more efficient than the AdaBoost binary solution. However, the Bayesian approach is, by a large margin, the fastest algorithm. We ran these performance experiments on a computer with an Intel Xeon CPU at 2.40 GHz and 96 GB RAM. The PIBoost solution involves 3 separators, each composed of 50 trees of depth 10. So, the classification of one voxel involves 3 separators × 50 trees in each separator × 10 inner-node decisions in each tree. This is a total of 10×50×3=1500 image measurements to classify one voxel. This classifier takes 90.4 min to label the test images in the Hippocampus stack. The AdaBoost binary solution is based on two classifiers, one for each positive class, each composed of 1200 decision stumps. Hence, to classify one voxel we have to sample 1200×2=2400 image values. This means that this classifier requires 128.7 min to label the test images in the Hippocampus stack.
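The per-voxel cost arithmetic above can be double-checked with a trivial script (the numbers are taken directly from the text; the script is only illustrative):

```python
# Per-voxel classification cost, using the figures quoted in the text.

# PIBoost: 3 binary separators, each with 50 trees of depth 10,
# so classifying one voxel traverses 3 * 50 * 10 inner-node decisions.
piboost_lookups = 3 * 50 * 10

# AdaBoost baseline: 2 one-vs-rest classifiers of 1200 decision stumps each.
adaboost_lookups = 2 * 1200

print(piboost_lookups)   # 1500
print(adaboost_lookups)  # 2400
```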
The Bayesian approach trains a multi-class classifier with a feature vector of 5 features ×4 scales, 5×4=20 features. It uses 2.7 min to label the test images in the Hippocampus stack.
In this paper we have presented an algorithm for segmenting mitochondria, synapses and membranes in SBFEM images of brain tissue.
We have shown that the segmentation accuracy in SBFEM images can be improved by simultaneously analyzing several neuronal structures. We successfully tackled this problem using PIBoost [30], a boosting algorithm for class-imbalanced problems.
We have also verified that when the segmented structures have different sizes, selecting a good set of scales for image description significantly improves the segmentation accuracy. To this end we have introduced a new multi-class feature selection algorithm.
Following previous results in the literature [14, 16, 28], we have also confirmed the importance of context for segmenting neuronal structures. Although pooled channels with standard image features [14, 16] provide excellent performance in the central part of an isotropic stack, we have proved that GRIMS provide better overall performance both in isotropic and anisotropic stacks due to their capacity to represent multi-scale information.
Considering the computational cost of the classifiers, if accuracy is the main requirement in the segmentation process, then PIBoost should be the selected classifier, since it provides the best accuracy at a computational cost lower than AdaBoost. However, if computational efficiency is the main issue, then the statistical approach in [28] is, by a wide margin, the fastest.
The results in this paper are relevant to the neuroscience research community when confronting the reconstruction of the "synaptome" [9]. Firstly, because the methodology introduced in the paper is general and may be applied to segment different neuronal structures, possibly using other imaging modalities. Secondly, because when the number of neuronal structures to segment grows, if the segmentation problem is addressed one structure at a time, the computational requirements also grow, at least, linearly. However, using a simultaneous segmentation approach, the number of required features and, hence, the computational cost, increases at a slower pace, since many of these features may be shared by several structures. Moreover, since these features are selected to discriminate among a large group of structures they are more general and also achieve better segmentation accuracy, as we have confirmed in our experiments.
The Hippocampus stack was annotated at the École Polytechnique Fédérale de Lausanne (EPFL) under the supervision of Prof. Graham Knott. The Somatosensory stack was annotated at the Cajal Cortical Circuits laboratory, Universidad Politécnica de Madrid (UPM), under the supervision of Prof. Javier de Felipe.
Table 2 Data sets used in the experiments
More information online in the Cell Centered Database (http://ccdb.ucsd.edu) with ID 8192.
GRIMS:
Gaussian Rotation Invariant MultiScale descriptors (described in the paper)
JCC:
Jaccard Curve (defined in the paper)
MRF:
Markov Random Field
PR:
Precision-Recall curve
ROC:
Receiver Operating Characteristic curve
SBFEM:
Serial Block Face Electron Microscopy
Gordon E. Brain imaging technologies: How, what, when and why? Aust N Z J Psychiatr. 1999; 33(2):187–96. https://doi.org/10.1046/j.1440-1614.1999.00557.x.
Wilt BA, Burns LD, Ho ETW, Ghosh KK, Mukamel EA, Schnitzer MJ. Advances in light microscopy for neuroscience. Ann Rev Neurosci. 2009; 32(435). https://doi.org/10.1146/annurev.neuro.051508.135540.
Pang J, Özkucur N, Ren M, Kaplan DL, Levin M, Miller EL. Automatic neuron segmentation and neural network analysis method for phase contrast microscopy images. Biomed Opt Express. 2015; 6(11):4395–416. https://doi.org/10.1364/BOE.6.004395.
Denk W, Horstmann H. Serial block-face scanning electron microscopy to reconstruct three-dimensional tissue nanostructure. PLoS Biol. 2004; 2(11):329.
Knott G, Marchman H, Wall D, Lich B. Serial section scanning electron microscopy of adult brain tissue using focused ion beam milling. J Neurosci. 2008; 28(12):2959–64.
Merchán-Pérez A, Rodríguez J-R, Alonso-Nanclares L, Schertel A, DeFelipe J. Counting synapses using FIB/SEM microscopy: a true revolution for ultrastructural volume reconstruction. Front Neuroanat. 2009; 3:18.
Campello S, Scorrano L. Mitochondrial shape changes: orchestrating cell pathophysiology. EMBO Rep. 2010; 11(9):678–84.
Jain V, Murray JF, Roth F, Turaga S, Zhigulin V, Briggman KL, Helmstaedter MN, Denk W, Seung HS. Supervised learning of image restoration with convolutional networks. In: IEEE International Conference on Computer Vision. ICCV: 2007. p. 1–8. https://doi.org/10.1109/ICCV.2007.4408909.
DeFelipe J. From the connectome to the synaptome: An epic love story. Science. 2010; 330(6008):1198–201.
Cetina K, Márquez-Neila P, Baumela L. A comparative study of feature descriptors for mitochondria and synapse segmentation. In: IEEE International Conference on Pattern Recognition. ICPR: 2014. p. 3215–20. https://doi.org/10.1109/ICPR.2014.554.
Kumar R, Vázquez-Reina A, Pfister H. Radon-like features and their application to connectomics. In: Proc. Int. Conference on Computer Vision and Pattern Recognition Workshops. CVPRW: 2010. p. 186–93. https://doi.org/10.1109/CVPRW.2010.5543594.
Smith K, Carleton A, Lepetit V. Fast ray features for learning irregular shapes. In: IEEE International Conference on Computer Vision. ICCV: 2009. p. 397–404. https://doi.org/10.1109/ICCV.2009.5459210.
Sommer C, Straehle C, Kothe U, Hamprecht F. Ilastik: Interactive learning and segmentation toolkit. In: IEEE International Symposium on Biomedical Imaging: From Nano to Macro. 2011. p. 230–3. https://doi.org/10.1109/ISBI.2011.5872394.
Becker C, Ali K, Knott G, Fua P. Learning context cues for synapse segmentation. IEEE Trans Med Imaging. 2012; 31(2):474–86.
Márquez-Neila P, Kohli P, Rother C, Baumela L. Non-parametric higher-order random fields for image segmentation. In: European Conference on Computer Vision, ECCV 2014. LNCS 8694. Springer: 2014. p. 269–84. https://doi.org/10.1007/978-3-319-10599-4_18.
Lucchi A, Becker C, Neila PM, Fua P. Exploiting enclosing membranes and contextual cues for mitochondria segmentation. In: Medical Image Computing and Computer-Assisted Intervention, MICCAI 2014. LNCS 8673. Springer: 2014. p. 65–72. https://doi.org/10.1007/978-3-319-10404-1_9.
Seyedhosseini M, Tasdizen T. Multi-class multi-scale series contextual model for image segmentation. Trans Image Process. 2013; 22(11):4486–96.
Criminisi A, Robertson D, Konukoglu E, Shotton J, Pathak S, White S, Siddiqui K. Regression forests for efficient anatomy detection and localization in computed tomography scans. Med Image Anal. 2013; 17(8):1293–303.
Dollár P, Tu Z, Perona P, Belongie S. Integral channel features. In: Proceedings British Machine Vision Conference, BMVC. BMVA Press: 2009. p. 91.1–91.11. https://doi.org/10.5244/C.23.91.
Tu Z, Narr KL, Dollár P, Dinov I, Thompson PM, Toga AW. Brain anatomical structure segmentation by hybrid discriminative/generative models. Trans Med Imaging. 2008; 27(4):495–508.
Kreshuk A, Straehle CN, Sommer C, Koethe U, Knott G, Hamprecht F. Automated segmentation of synapses in 3D EM data. In: IEEE International Symposium on Biomedical Imaging: From Nano to Macro.2011. p. 220–3. https://doi.org/10.1109/ISBI.2011.5872392.
Kaynig V, Fuchs T, Buhmann JM. Neuron geometry extraction by perceptual grouping in ssTEM images. In: International Conference on Computer Vision and Pattern Recognition (CVPR). 2010. p. 2902–9. https://doi.org/10.1109/CVPR.2010.5540029.
Vitaladevuni S, Mishchenko Y, Genkin A, Chklovskii D, Harris K. Mitochondria detection in electron microscopy images. In: Workshop on Microscopic Image Analysis with Applications in Biology.2008.
Morales J, Alonso-Nanclares L, Rodríguez J-R, DeFelipe J, Rodríguez A, Merchán-Pérez Á. Espina: a tool for the automated segmentation and counting of synapses in large stacks of electron microscopy images. Front Neuroanat.2011;5. https://doi.org/10.3389/fnana.2011.00018.
Kreshuk A, Straehle CN, Sommer C, Koethe U, Cantoni M, Knott G, Hamprecht F. Automated detection and segmentation of synaptic contacts in nearly isotropic serial electron microscopy images. PloS ONE. 2011; 6(10):24899.
Giuly R, Martone M, Ellisman M. Method: Automatic segmentation of mitochondria utilizing patch classification, contour pair classification, and automatically seeded level sets. BMC Bioinformatics. 2012;13(29). https://doi.org/10.1186/1471-2105-13-29.
Lucchi A, Smith K, Achanta R, Knott G, Fua P. Supervoxel-based segmentation of mitochondria in em image stacks with learned shape features. IEEE Trans Med Imaging. 2012; 31(2):474–86.
Neila PM, Baumela L, González-Soriano J, Rodríguez J-R, DeFelipe J, Merchán-Pérez Á. A fast method for the segmentation of synaptic junctions and mitochondria in serial electron microscopic images of the brain. Neuroinformatics. 2016; 14(2):235–50.
Viola PA, Jones MJ. Robust real-time face detection. Int J Comput Vis. 2004; 57(2):137–54.
Fernández-Baldera A, Baumela L. Multi-class boosting with asymmetric binary weak-learners. Pattern Recognit. 2014; 47(5):2080–90.
Hastie TJ, Tibshirani RJ, Friedman JH. The Elements of Statistical Learning: Data Mining, Inference, and Prediction. Springer; 2009.
Boykov Y, Funka-Lea G. Graph cuts and efficient N-D image segmentation. Int J Comput Vis. 2006; 70(2):109–31.
Boykov Y, Veksler O, Zabih R. Fast approximate energy minimization via graph cuts. Trans Pattern Anal Mach Intell. 2001; 23(11):1222–39. https://doi.org/10.1109/34.969114.
Provost FJ, Fawcett T, Kohavi R. The case against accuracy estimation for comparing induction algorithms. In: International Conference on Machine Learning, ICML.San Francisco: Morgan Kaufmann Publishers Inc.: 1998. p. 445–53.
Arganda-Carreras I, Turaga SC, Berger DR, Cireşan D, Giusti A, Gambardella LM, Schmidhuber J, Laptev D, Dwivedi S, Buhmann JM, Liu T, Seyedhosseini M, Tasdizen T, Kamentsky L, Burget R, Uher V, Tan X, Sun C, Pham TD, Bas E, Uzunbas MG, Cardona A, Schindelin J, Seung HS. Crowdsourcing the creation of image segmentation algorithms for connectomics. Front Neuroanat. 2015; 9:142.
Lucchi A, Márquez-Neila P, Becker C, Li Y, Smith K, Knott G, Fua P. Learning structured models for segmentation of 2-d and 3-d imagery. Trans Med Imaging. 2015; 34(5):1096–110.
The authors are grateful to Pablo Márquez and Carlos Becker for providing their code. Also to Pascal Fua and Graham Knott at EPFL and Ángel Merchán and Javier de Felipe at UPM for sharing the segmented Hippocampus and Somatosensory image stacks. Finally they also thank CONACYT México and the Spanish Ministerio de Economía y Competitividad, under project TIN2016-75982-C2-2-R, for funding this research and Cesvima for providing computational support.
KC is funded by CONACYT México. All the authors are funded by the Spanish Ministerio de Economía y Competitividad, under project TIN2016-75982-C2-2-R.
The cerebellum stack images can be found at Cell Centered Database with ID 8192: http://ccdb.ucsd.edu/sand/main?mpid=8192%26event=displaySum.
The segmented Somatosensory image stack has been provided by Ángel Merchán and Javier de Felipe at Universidad Politécnica de Madrid (UPM).
The segmented Hippocampus image stack has been provided by Pascal Fua and Graham Knott at EPFL.
Departamento de Inteligencia Artificial, Universidad Politécnica de Madrid, Campus de Montegancedo s/n, Boadilla del Monte, España, Madrid, 28660, Spain
Kendrick Cetina & Luis Baumela
ETSII, Universidad Rey Juan Carlos, C/ Tulipán, s/n, Móstoles, 28933, Spain
José M. Buenaposada
Kendrick Cetina
Luis Baumela
KC implemented the algorithm, conducted experiments, and wrote the manuscript. JMB and LB proposed the idea, oversaw the experiments and wrote the manuscript. All authors read and approved the final manuscript.
Correspondence to José M. Buenaposada.
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License(http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The Creative Commons Public Domain Dedication waiver(http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated.
Cetina, K., Buenaposada, J.M. & Baumela, L. Multi-class segmentation of neuronal structures in electron microscopy images. BMC Bioinformatics 19, 298 (2018). https://doi.org/10.1186/s12859-018-2305-0
Multi-class boosting
Neuron structures
Toda bracket
In mathematics, the Toda bracket is an operation on homotopy classes of maps, in particular on homotopy groups of spheres, named after Hiroshi Toda, who defined them and used them to compute homotopy groups of spheres in (Toda 1962).
Definition
See (Kochman 1990) or (Toda 1962) for more information. Suppose that
$W{\stackrel {f}{\ \to \ }}X{\stackrel {g}{\ \to \ }}Y{\stackrel {h}{\ \to \ }}Z$
is a sequence of maps between spaces, such that the compositions $g\circ f$ and $h\circ g$ are both nullhomotopic. Given a space $A$, let $CA$ denote the cone of $A$. Then we get a (non-unique) map
$F\colon CW\to Y$
induced by a homotopy from $g\circ f$ to a trivial map, which when post-composed with $h$ gives a map
$h\circ F\colon CW\to Z$.
Similarly we get a non-unique map $G\colon CX\to Z$ induced by a homotopy from $h\circ g$ to a trivial map, which when composed with $C_{f}\colon CW\to CX$, the cone of the map $f$, gives another map,
$G\circ C_{f}\colon CW\to Z$.
By joining these two cones on $W$ and the maps from them to $Z$, we get a map
$\langle f,g,h\rangle \colon SW\to Z$
representing an element in the group $[SW,Z]$ of homotopy classes of maps from the suspension $SW$ to $Z$, called the Toda bracket of $f$, $g$, and $h$. The map $\langle f,g,h\rangle $ is not uniquely defined up to homotopy, because there was some choice in choosing the maps from the cones. Changing these maps changes the Toda bracket by adding elements of $h[SW,Y]$ and $[SX,Z]f$.
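Equivalently (a standard rephrasing of the indeterminacy just described, in the notation of this article), the bracket is a well-defined element of a quotient group:

```latex
% The two choices of null-homotopy change <f,g,h> by elements of
% h[SW,Y] and [SX,Z]f, so the Toda bracket is well defined as a coset:
\langle f, g, h \rangle \;\in\; [SW,\, Z] \,\big/\, \bigl( h\,[SW,\, Y] \;+\; [SX,\, Z]\, f \bigr).
```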
There are also higher Toda brackets of several elements, defined when suitable lower Toda brackets vanish. This parallels the theory of Massey products in cohomology.
The Toda bracket for stable homotopy groups of spheres
The direct sum
$\pi _{\ast }^{S}=\bigoplus _{k\geq 0}\pi _{k}^{S}$
of the stable homotopy groups of spheres is a supercommutative graded ring, where multiplication (called composition product) is given by composition of representing maps, and any element of non-zero degree is nilpotent (Nishida 1973).
If f and g and h are elements of $\pi _{\ast }^{S}$ with $f\cdot g=0$ and $g\cdot h=0$, there is a Toda bracket $\langle f,g,h\rangle $ of these elements. The Toda bracket is not quite an element of a stable homotopy group, because it is only defined up to addition of composition products of certain other elements. Hiroshi Toda used the composition product and Toda brackets to label many of the elements of homotopy groups. Cohen (1968) showed that every element of the stable homotopy groups of spheres can be expressed using composition products and higher Toda brackets in terms of certain well known elements, called Hopf elements.
The Toda bracket for general triangulated categories
In the case of a general triangulated category the Toda bracket can be defined as follows. Again, suppose that
$W{\stackrel {f}{\ \to \ }}X{\stackrel {g}{\ \to \ }}Y{\stackrel {h}{\ \to \ }}Z$
is a sequence of morphisms in a triangulated category such that $g\circ f=0$ and $h\circ g=0$. Let $C_{f}$ denote the cone of f so we obtain an exact triangle
$W{\stackrel {f}{\ \to \ }}X{\stackrel {i}{\ \to \ }}C_{f}{\stackrel {q}{\ \to \ }}W[1]$
The relation $g\circ f=0$ implies that g factors (non-uniquely) through $C_{f}$ as
$X{\stackrel {i}{\ \to \ }}C_{f}{\stackrel {a}{\ \to \ }}Y$
for some $a$. Then, the relation $h\circ a\circ i=h\circ g=0$ implies that $h\circ a$ factors (non-uniquely) through W[1] as
$C_{f}{\stackrel {q}{\ \to \ }}W[1]{\stackrel {b}{\ \to \ }}Z$
for some b. This b is (a choice of) the Toda bracket $\langle f,g,h\rangle $ in the group $\operatorname {hom} (W[1],Z)$.
Convergence theorem
There is a convergence theorem originally due to Moss[1] which states that special Massey products $\langle a,b,c\rangle $ of elements in the $E_{r}$-page of the Adams spectral sequence contain a permanent cycle, meaning there is an associated element in $\pi _{*}^{s}(\mathbb {S} )$, assuming the elements $a,b,c$ are permanent cycles[2], pp. 18–19. Moreover, these Massey products have a lift to a motivic Adams spectral sequence giving an element in the Toda bracket $\langle \alpha ,\beta ,\gamma \rangle $ in $\pi _{*,*}$ for elements $\alpha ,\beta ,\gamma $ lifting $a,b,c$.
References
1. Moss, R. Michael F. (1970-08-01). "Secondary compositions and the Adams spectral sequence". Mathematische Zeitschrift. 115 (4): 283–310. doi:10.1007/BF01129978. ISSN 1432-1823. S2CID 122909581.
2. Isaksen, Daniel C.; Wang, Guozhen; Xu, Zhouli (2020-06-17). "More stable stems". arXiv:2001.04511 [math.AT].
• Cohen, Joel M. (1968), "The decomposition of stable homotopy.", Annals of Mathematics, Second Series, 87 (2): 305–320, doi:10.2307/1970586, JSTOR 1970586, MR 0231377, PMC 224450, PMID 16591550.
• Kochman, Stanley O. (1990), "Toda brackets", Stable homotopy groups of spheres. A computer-assisted approach, Lecture Notes in Mathematics, vol. 1423, Berlin: Springer-Verlag, pp. 12–34, doi:10.1007/BFb0083797, ISBN 978-3-540-52468-7, MR 1052407.
• Nishida, Goro (1973), "The nilpotency of elements of the stable homotopy groups of spheres", Journal of the Mathematical Society of Japan, 25 (4): 707–732, doi:10.2969/jmsj/02540707, ISSN 0025-5645, MR 0341485.
• Toda, Hiroshi (1962), Composition methods in homotopy groups of spheres, Annals of Mathematics Studies, vol. 49, Princeton University Press, ISBN 978-0-691-09586-8, MR 0143217.
| Wikipedia |
Welcome to ShortScience.org!
ShortScience.org is a platform for post-publication discussion aiming to improve accessibility and reproducibility of research ideas.
The website has 1435 public summaries, mostly in machine learning, written by the community and organized by paper, conference, and year.
Reading summaries of papers is useful to obtain the perspective and insight of another reader, why they liked or disliked it, and their attempt to demystify complicated sections.
Also, writing summaries is a good exercise to understand the content of a paper because you are forced to challenge your assumptions when explaining it.
Finally, you can keep up to date with the flood of research by reading the latest summaries on our Twitter and Facebook pages.
Popular (Today)
arxiv-sanity.com
scholar.google.com
An Empirical Investigation of Catastrophic Forgetting in Gradient-Based Neural Networks
Ian J. Goodfellow and Mehdi Mirza and Da Xiao and Aaron Courville and Yoshua Bengio
arXiv e-Print archive - 2013 via Local arXiv
Keywords: stat.ML, cs.LG, cs.NE
First published: 2013/12/21 (6 years ago)
Abstract: Catastrophic forgetting is a problem faced by many machine learning models and algorithms. When trained on one task, then trained on a second task, many machine learning models "forget" how to perform the first task. This is widely believed to be a serious problem for neural networks. Here, we investigate the extent to which the catastrophic forgetting problem occurs for modern neural networks, comparing both established and recent gradient-based training algorithms and activation functions. We also examine the effect of the relationship between the first task and the second task on catastrophic forgetting. We find that it is always best to train using the dropout algorithm--the dropout algorithm is consistently best at adapting to the new task, remembering the old task, and has the best tradeoff curve between these two extremes. We find that different tasks and relationships between tasks result in very different rankings of activation function performance. This suggests the choice of activation function should always be cross-validated.
[link] Summary by Andrea Walter Ruggerini 4 months ago
The paper discusses and empirically investigates the effect of "catastrophic forgetting" (**CF**), i.e. the inability of a model to perform a task it was previously trained to perform if retrained to perform a second task.
An illuminating example is what happens in ML systems with convex objectives: regardless of the initialization (i.e. of what was learnt by doing the first task), the training of the second task will always end in the global minimum, thus totally "forgetting" the first one.
Neuroscientific evidence (and common sense) suggests that the outcome of the experiment is deeply influenced by the similarity of the tasks involved. Three cases are distinguished: (i) the two tasks are *functionally identical but input is presented in a different format*, (ii) the *tasks are similar*, and (iii) the *tasks are dissimilar*.
Relevant examples are, respectively, (i) performing the same image classification task starting from two different image representations such as RGB or HSL, (ii) performing image classification tasks with semantically similar classes, such as two similar animals, and (iii) performing a text classification task followed by an image classification task.
The problem is investigated by an empirical study covering two methods of training ("SGD" and "dropout") combined with 4 activation functions (logistic sigmoid, RELU, LWTA, Maxout). A random search is carried out on these parameters.
From a practitioner's point of view, it is interesting to note that dropout has been set to 0.5 in the hidden units and 0.2 in the visible ones, since these are reasonably well-known parameter choices.
## Why the paper is important
It is apparently the first to provide a systematic empirical analysis of CF. Establishes a framework and baselines to face the problem.
## Key conclusions, takeaways and modelling remarks
* dropout helps in preventing CF
* dropout seems to increase the optimal model size with respect to the model without dropout
* choice of activation function has a less consistent effect than dropout\no dropout choice
* the dissimilar-tasks experiment provides a notable exception to the above trends
* the previous hypothesis that LWTA activation is particularly resistant to CF is rejected (even if it performs best in the new task in the dissimilar task pair the behaviour is inconsistent)
* choice of activation function should always be cross-validated
* If computational resources are insufficient for cross-validation the combination dropout + maxout activation function is recommended.
Temporal Cycle-Consistency Learning
Dwibedi, Debidatta and Aytar, Yusuf and Tompson, Jonathan and Sermanet, Pierre and Zisserman, Andrew
arXiv e-Print archive - 2019 via Local Bibsonomy
Keywords: dblp
[link] Summary by jerpint 4 months ago
This paper presents a novel way to align frames in videos of similar actions temporally in a self-supervised setting. To do so, they leverage the concept of cycle-consistency. They introduce two formulations of cycle-consistency which are differentiable and solvable using standard gradient descent approaches. They name their method Temporal Cycle Consistency (TCC). They introduce a dataset that they use to evaluate their approach and show that their learned embeddings allow for few shot classification of actions from videos.
Figure 1 shows the basic idea of what the paper aims to achieve. Given two video sequences, they wish to map the frames that are closest to each other in both sequences. The beauty here is that this "closeness" measure is defined by nearest neighbors in an embedding space, so the network has to figure out for itself what being close to another frame means. The cycle-consistency is what makes the network converge towards meaningful "closeness".

# Cycle Consistency
Intuitively, the concept of cycle-consistency can be thought of like this: suppose you have an application that allows you to take the photo of a user X and increase their apparent age via some transformation F and decrease their age via some transformation G. The process is cycle-consistent if you can age your image, then using the aged image, "de-age" it and obtain something close to the original image you took; i.e. G(F(X)) ~= X.
In this paper, cycle-consistency is defined in the context of nearest-neighbors in the embedding space. Suppose you have two video sequences, U and V. Take a frame embedding from U, u_i, and find its nearest neighbor in V's embedding space, v_j. Now take the frame embedding v_j and find its closest frame embedding in U, u_k, using a nearest-neighbors approach. If k=i, the frames are cycle consistent. Otherwise, they are not. The authors seek to maximize cycle consistency across frames.
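The nearest-neighbour cycle described above can be sketched in a few lines of numpy (`U` and `V` are hypothetical frame-embedding matrices of shape `(n, d)` and `(m, d)`; this is an illustration of the idea, not the authors' code):

```python
import numpy as np

def is_cycle_consistent(U, V, i):
    """Check whether frame i of sequence U is cycle-consistent:
    U[i] -> nearest neighbour v_j in V -> nearest neighbour u_k in U;
    the frame is cycle-consistent iff k == i."""
    j = np.argmin(np.linalg.norm(V - U[i], axis=1))  # NN of u_i in V
    k = np.argmin(np.linalg.norm(U - V[j], axis=1))  # NN of v_j back in U
    return k == i

# Toy example: identical sequences are trivially cycle-consistent.
U = np.array([[0.0, 0.0], [1.0, 1.0], [2.0, 2.0]])
V = U.copy()
print(all(is_cycle_consistent(U, V, i) for i in range(len(U))))  # True
```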
# Differentiable Cycle Consistency
The authors present two differentiable methods for cycle-consistency; cycle-back classification and cycle-back regression.
In order to make their cycle-consistency formulation differentiable, the authors use the concept of soft nearest neighbor:

## cycle-back classification
Once the soft nearest neighbor v~ is computed, the euclidean distance between v~ and each of the N frames u_k of U is collected in a logit vector x, which is softmaxed to a prediction ŷ = softmax(x):

Note the clever use of the negative, which ensures the softmax selects for the highest similarity (i.e. the smallest distance). The ground truth label is a one-hot vector of size 1xN where position i is set to 1 and all others are set to 0. Cross-entropy is then used to compute the loss.
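Putting the two steps together, a hedged numpy sketch of the cycle-back classification loss (distances from a given soft nearest neighbour `v_soft` back to every frame of `U`, softmaxed and scored with cross-entropy against the one-hot label at position `i`; variable names are mine):

```python
import numpy as np

def cycle_back_classification_loss(U, v_soft, i):
    """Cross-entropy of the softmax over negative squared distances from
    the soft nearest neighbour v~ back to every frame of U, with frame i
    as the ground-truth class."""
    x = -np.sum((U - v_soft) ** 2, axis=1)   # logits: negative distances
    y_hat = np.exp(x) / np.sum(np.exp(x))    # predicted distribution over frames
    return -np.log(y_hat[i])                 # cross-entropy with one-hot label i

U = np.array([[0.0], [5.0], [10.0]])
# v~ lands very close to frame 0, so the loss for i=0 is small.
print(cycle_back_classification_loss(U, np.array([0.1]), 0))
```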
## cycle-back regression
The concept is very similar to cycle-back classification up to the soft nearest neighbor calculation. However the similarity metric of v~ back to u_k is not computed using euclidean distance but instead soft nearest neighbor again:

The idea is that they want to penalize the network less for "close enough" guesses. This is done by imposing a gaussian prior on beta centered around the actual closest neighbor i.

The following figure summarizes the pipeline:

# Datasets
All training is done in a self-supervised fashion. To evaluate their methods, they annotate the Pouring dataset, which they introduce, and the Penn Action dataset. To annotate the datasets, they limit labels to specific key events and the phases between them.

The network consists of two parts: an encoder network and an embedder network.
## Encoder
They experiment with two encoders:
* ResNet-50 pretrained on ImageNet. They use the conv_4 layer's output, which is 14x14x1024, as the encoding.
* A Vgg-like model from scratch whose encoder output is 14x14x512.
## Embedder
They then fuse the k following frame encodings along the time dimension, and feed it to an embedder network which uses 3D Convolutions and 3D pooling to reduce it to a 1x128 feature vector. They find that k=2 works pretty well.
Mask R-CNN
He, Kaiming and Gkioxari, Georgia and Dollár, Piotr and Girshick, Ross B.
[link] Summary by Qure.ai 2 years ago
Mask RCNN takes off from where Faster RCNN left off, with some augmentations aimed at improving instance segmentation (which was out of scope for FRCNN). Instance segmentation was achieved remarkably well in *DeepMask*, *SharpMask* and later *Feature Pyramid Networks* (FPN).
Faster RCNN was not designed for pixel-to-pixel alignment between network inputs and outputs. This is most evident in how RoIPool, the de facto core operation for attending to instances, performs coarse spatial quantization for feature extraction. Mask RCNN fixes that by introducing RoIAlign in place of RoIPool.
#### Methodology
Mask RCNN retains most of the architecture of Faster RCNN. It adds the a third branch for segmentation. The third branch takes the output from RoIAlign layer and predicts binary class masks for each class.
##### Major Changes and intuitions
**Mask prediction**
Mask prediction segmentation predicts a binary mask for each RoI using a fully convolutional network. The stark difference is the use of a *sigmoid* activation for predicting the final masks instead of a *softmax*, which implies that the masks don't compete with each other. This *decouples* segmentation from classification. The class prediction branch is used for class prediction, and when calculating the loss only the mask of the ground-truth class contributes to Lmask.
Also, they show that a single class-agnostic mask prediction works almost as effectively as a separate mask for each class, thereby supporting their method of decoupling classification from segmentation.
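The decoupling can be illustrated with a short sketch (mine, not the paper's code): per-class sigmoid masks trained with binary cross-entropy, where only the ground-truth class' mask contributes to the loss:

```python
import numpy as np

def mask_loss(mask_logits, gt_mask, gt_class):
    """Per-pixel binary cross-entropy on the ground-truth class' mask only.
    Because each class has its own sigmoid mask (no softmax across classes),
    the masks do not compete: segmentation is decoupled from classification.
    mask_logits: (num_classes, H, W); gt_mask: (H, W) with values in {0, 1}."""
    p = 1.0 / (1.0 + np.exp(-mask_logits[gt_class]))  # sigmoid of the selected mask
    eps = 1e-12                                        # numerical safety
    bce = -(gt_mask * np.log(p + eps) + (1 - gt_mask) * np.log(1 - p + eps))
    return bce.mean()

logits = np.zeros((3, 2, 2))
logits[1] = 8.0                 # class 1 confidently predicts "all foreground"
gt = np.ones((2, 2))
print(mask_loss(logits, gt, gt_class=1))  # close to zero
```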
**RoIAlign**
RoIPool first quantizes a floating-number RoI to the discrete granularity of the feature map; this quantized RoI is then subdivided into spatial bins which are themselves quantized, and finally feature values covered by each bin are aggregated (usually by max pooling). Instead of quantizing the RoI boundaries or bins, RoIAlign uses bilinear interpolation to compute the exact values of the input features at four regularly sampled locations in each RoI bin and aggregates the result (using max or average).
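A minimal sketch of the bilinear sampling at the heart of RoIAlign (single-channel feature map, interior sample points only; the sample location is hypothetical):

```python
import numpy as np

def bilinear_sample(feat, y, x):
    """Bilinearly interpolate feature map `feat` (H, W) at the
    floating-point location (y, x) -- no quantization involved.
    Assumes (y, x) lies strictly inside the map (no bounds handling)."""
    y0, x0 = int(np.floor(y)), int(np.floor(x))
    y1, x1 = y0 + 1, x0 + 1
    wy, wx = y - y0, x - x0
    return ((1 - wy) * (1 - wx) * feat[y0, x0] +
            (1 - wy) * wx       * feat[y0, x1] +
            wy       * (1 - wx) * feat[y1, x0] +
            wy       * wx       * feat[y1, x1])

feat = np.arange(16, dtype=float).reshape(4, 4)
print(bilinear_sample(feat, 1.5, 1.5))  # 7.5: average of 5, 6, 9, 10
```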
**Backbone architecture**
Faster RCNN uses a VGG-like structure for extracting features from the image, whose weights are shared between the RPN and the region detection layers. Here, the authors experiment with two ResNet-based backbone architectures: a plain one, as in Faster RCNN, and one based on [FPN](http://www.shortscience.org/paper?bibtexKey=journals/corr/LinDGHHB16). FPN takes convolutional feature maps from previous layers and recombines them to produce a pyramid of feature maps to be used for prediction, instead of a single-scale feature layer (Faster RCNN used the final output of the conv layers before the fc layers).
**Training Objective**
The training objective looks like this

Lmask is the addition over Faster RCNN. The method to calculate it was described above.
#### Observation
Mask RCNN performs significantly better than COCO instance segmentation winners *without any bells and whistles*. Detailed results are available in the paper.
When to use parametric models in reinforcement learning?
van Hasselt, Hado and Hessel, Matteo and Aslanides, John
[link] Summary by CodyWild 1 month ago
This paper is a bit provocative (especially in the light of the recent DeepMind MuZero paper), and poses some interesting questions about the value of model-based planning. I'm not sure I agree with the overall argument it's making, but I think the experience of reading it made me hone my intuitions around why and when model-based planning should be useful.
The overall argument of the paper is: rather than learning a dynamics model of the environment and then using that model to plan and learn a value/policy function from, we could instead just keep a large replay buffer of actual past transitions, and use that in lieu of model-sampled transitions to further update our reward estimators without having to acquire more actual experience. In this paper's framing, the central value of having a learned model is this ability to update our policy without needing more actual experience, and it argues that actual real transitions from the environment are more reliable and less likely to diverge than transitions from a learned parametric model. It basically sees a big buffer of transitions as an empirical environment model that it can sample from, in a roughly equivalent way to being able to sample transitions from a learnt model.
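A minimal sketch of the replay-buffer-as-empirical-model idea (transition storage plus uniform sampling; the class and names are mine, for illustration only):

```python
import random

class ReplayBuffer:
    """Stores observed transitions and samples them uniformly -- an
    'empirical model' of the environment: it can only replay transitions
    that actually happened, never simulate new ones."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.data = []

    def add(self, state, action, reward, next_state):
        # Evict the oldest transition once the buffer is full.
        if len(self.data) >= self.capacity:
            self.data.pop(0)
        self.data.append((state, action, reward, next_state))

    def sample(self, batch_size):
        # Uniform sampling stands in for sampling from a learned model.
        return random.sample(self.data, min(batch_size, len(self.data)))

buf = ReplayBuffer(capacity=100)
buf.add("s0", "a0", 1.0, "s1")
print(len(buf.sample(4)))  # 1
```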
An obvious counter-argument to this is the value of models in being able to simulate particular arbitrary trajectories (for example, potential actions you could take from your current point, as is needed for Monte Carlo Tree Search). Simply keeping around a big stock of historical transitions doesn't serve the use case of being able to get a probable next state *for a particular transition*, both because we might not have that state in our data, and because we don't have any way, just given a replay buffer, of knowing that an available state comes after an action if we haven't seen that exact combination before. (And, even if we had, we'd have to have some indexing/lookup mechanism atop the data). I didn't feel like the paper's response to this was all that convincing. It basically just argues that planning with model transitions can theoretically diverge (though acknowledges it empirically often doesn't), and that it's dangerous to update off of "fictional" modeled transitions that aren't grounded in real data. While it's obviously definitionally true that model transitions are in some sense fictional, that's just the basic trade-off of how modeling works: some ability to extrapolate, but a realization that there's a risk you extrapolate poorly.
https://i.imgur.com/8jp22M3.png
The paper's empirical contribution to its argument was to argue that in a low-data setting, model-free RL (in the form of the "everything but the kitchen sink" Rainbow RL algorithm) with experience replay can outperform a model-based SimPLe system on Atari. This strikes me as fairly weak support for the paper's overall claim, especially since historically Atari has been difficult to learn good models of when they're learnt in actual-observation pixel space. Nonetheless, I think this push against the utility of model-based learning is a useful thing to consider if you do think models are useful, because it will help clarify the reasons why you think that's the case.
AI Safety Gridworlds
Jan Leike and Miljan Martic and Victoria Krakovna and Pedro A. Ortega and Tom Everitt and Andrew Lefrancq and Laurent Orseau and Shane Legg
Keywords: cs.LG, cs.AI
Abstract: We present a suite of reinforcement learning environments illustrating various safety properties of intelligent agents. These problems include safe interruptibility, avoiding side effects, absent supervisor, reward gaming, safe exploration, as well as robustness to self-modification, distributional shift, and adversaries. To measure compliance with the intended safe behavior, we equip each environment with a performance function that is hidden from the agent. This allows us to categorize AI safety problems into robustness and specification problems, depending on whether the performance function corresponds to the observed reward function. We evaluate A2C and Rainbow, two recent deep reinforcement learning agents, on our environments and show that they are not able to solve them satisfactorily.
Summary by dniku
The paper proposes a standardized benchmark for a number of safety-related problems, and provides an implementation that can be used by other researchers. The problems fall into two categories: specification and robustness. Specification refers to cases where it is difficult to specify a reward function that encodes our intentions. Robustness means that the agent's actions should be robust when facing various complexities of a real-world environment. Here is a list of problems:
1. Specification:
    1. Safe interruptibility: agents should neither seek nor avoid interruption.
    2. Avoiding side effects: agents should minimize effects unrelated to their main objective.
    3. Absent supervisor: agents should not behave differently depending on presence of supervisor.
    4. Reward gaming: agents should not try to exploit errors in reward function.
2. Robustness:
    1. Self-modification: agents should behave well when environment allows self-modification.
    2. Robustness to distributional shift: agents should behave robustly when test differs from train.
    3. Robustness to adversaries: agents should detect and adapt to adversarial intentions in environment.
    4. Safe exploration: agent should behave safely during learning as well.
It is worth noting that problems 1.2, 1.4, 2.2, and 2.4 have been described back in "Concrete Problems in AI Safety".
It is suggested that each of these problems be tackled in a "gridworld" environment — a 2D environment where the agent lives on a grid, and the only actions it has available are up/down/left/right movements. The benchmark consists of 10 environments, each corresponding to one of 8 problems mentioned above. Each of the environments is an extremely simple instance of the problem, but nevertheless they are of interest as current SotA algorithms usually don't solve the posed task.
Specifically, the authors trained A2C and Rainbow with DQN update on each of the environments and showed that both algorithms fail on all of specification problems, except for Rainbow on 1.1. This is expected, as neither of those algorithms are designed for cases where reward function is misspecified. Both algorithms failed on 2.2--2.4, except for A2C on 2.3. On 2.1, the authors swapped A2C for Rainbow with Sarsa update and showed that Rainbow DQN failed while Rainbow Sarsa performed well.
Overall, this is a good groundwork paper with only a few questionable design decisions, such as the design of actual reward in 1.2. It is unlikely to have impact similar to MNIST or ImageNet, but it should stimulate safety-related research.
Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer
Raffel, Colin and Shazeer, Noam and Roberts, Adam and Lee, Katherine and Narang, Sharan and Matena, Michael and Zhou, Yanqi and Li, Wei and Liu, Peter J.
Summary by CodyWild
At a high level, this paper is a massive (34 pgs!) and highly-resourced study of many nuanced variations of language pretraining tasks, to see which of those variants produce models that transfer the best to new tasks. As a result, it doesn't lend itself *that* well to being summarized into a central kernel of understanding. So, I'm going to do my best to pull out some high-level insights, and recommend you read the paper in more depth if you're working particularly in language pretraining and want to get the details.
The goals here are simple: create a standardized task structure and a big dataset, so that you can use the same architecture across a wide range of objectives and subsequent transfer tasks, and thus actually compare tasks on equal footing. To that end, the authors created a huge dataset by scraping internet text, and filtering it according to a few common sense criteria. This is an important and laudable task, but not one with a ton of conceptual nuance to it.
https://i.imgur.com/5z6bN8d.png
A more interesting structural choice was to adopt a unified text to text framework for all of the tasks they might want their pretrained model to transfer to. This means that the input to the model is always a sequence of tokens, and so is the output. If the task is translation, the input sequence might be "translate english to german: build a bed" and the desired output would be that sentence in German. This gets particularly interesting as a change when it comes to tasks where you're predicting relationships of words within sentences, and would typically have a categorical classification loss, which is changed here to predicting the word of the correct class. This restructuring doesn't seem to hurt performance, and has the nice side effect that you can directly use the same model as a transfer starting point for all tasks, without having to add additional layers. Some of the transfer tasks include: translation, sentiment analysis, summarization, grammatical checking of a sentence, and checking the logical relationship between claims.
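For illustration, here is how a few tasks could be cast into this unified format. The prefixes and example strings below are hypothetical, not necessarily T5's exact task strings:

```python
def to_text_to_text(prefix, source, target):
    # every task becomes an (input text, output text) pair
    return f"{prefix}: {source}", target

pairs = [
    to_text_to_text("translate English to German", "build a bed", "baue ein Bett"),
    # classification becomes prediction of the class's word
    to_text_to_text("sentiment", "this movie was great", "positive"),
]
```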
All tested models followed a transformer (i.e. fully attentional) architecture. The authors tested performance along many different axes. A structural variation was the difference between an encoder-decoder architecture and a language model one.
https://i.imgur.com/x4AOkLz.png
In both cases, you take in text and predict text, but in an encoder-decoder, you have separate models that operate on the input and output, whereas in a language model, it's all seen as part of a single continuous sequence. They also tested variations in what pretraining objective is used. The most common is simple language modeling, where you predict words in a sentence given prior or surrounding ones, but, inspired by the success of BERT, they also tried a number of denoising objectives, where an original sentence was corrupted in some way (by dropping tokens and replacing them with either masks, nothing, or random tokens, dropping individual words vs contiguous spans of words) and then the model had to predict the actual original sentence.
https://i.imgur.com/b5Eowl0.png
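A minimal sketch of one such denoising objective, independent token masking; span corruption would mask contiguous runs instead, and the masking rate here is an illustrative choice:

```python
import random

def corrupt(tokens, p=0.15, mask_token="<M>"):
    """Each token is replaced by a mask with probability p; the training
    target is the original, uncorrupted sequence."""
    corrupted = [mask_token if random.random() < p else t for t in tokens]
    return corrupted, tokens
```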
Finally, they performed testing as to the effect of dataset size and number of training steps. Some interesting takeaways:
- In almost all tests, the encoder-decoder architecture, where you separately build representations of your input and output text, performs better than a language model structure. This is still generally (though not as consistently) true if you halve the number of parameters in the encoder-decoder, suggesting that there's some structural advantage there beyond just additional parameter count.
- A denoising, BERT-style objective works consistently better than a language modeling one. Within the set of different kinds of corruption, none work obviously and consistently better across tasks, though some have a particular advantage at a given task, and some are faster to train with due to different lengths of output text.
- Unsurprisingly, more data and bigger models both lead to better performance. Somewhat interestingly, training with less data but the same number of training iterations (such that you see the same data multiple times) seems to be fine up to a point. This potentially gestures at an ability to train over a dataset a higher number of times without being as worried about overfitting.
- Also somewhat unsurprisingly, training on a dataset that filters out HTML, random lorem-ipsum web text, and bad words performs meaningfully better than training on one that doesn't.
Adding Gradient Noise Improves Learning for Very Deep Networks
Arvind Neelakantan and Luke Vilnis and Quoc V. Le and Ilya Sutskever and Lukasz Kaiser and Karol Kurach and James Martens
Keywords: stat.ML, cs.LG
Abstract: Deep feedforward and recurrent networks have achieved impressive results in many perception and language processing applications. This success is partially attributed to architectural innovations such as convolutional and long short-term memory networks. The main motivation for these architectural innovations is that they capture better domain knowledge, and importantly are easier to optimize than more basic architectures. Recently, more complex architectures such as Neural Turing Machines and Memory Networks have been proposed for tasks including question answering and general computation, creating a new set of optimization challenges. In this paper, we discuss a low-overhead and easy-to-implement technique of adding gradient noise which we find to be surprisingly effective when training these very deep architectures. The technique not only helps to avoid overfitting, but also can result in lower training loss. This method alone allows a fully-connected 20-layer deep network to be trained with standard gradient descent, even starting from a poor initialization. We see consistent improvements for many complex models, including a 72% relative reduction in error rate over a carefully-tuned baseline on a challenging question-answering task, and a doubling of the number of accurate binary multiplication models learned across 7,000 random restarts. We encourage further application of this technique to additional complex modern architectures.
Summary by David Stutz
Neelakantan et al. study gradient noise for improving neural network training. In particular, they add Gaussian noise to the gradients in each iteration:
$\tilde{\nabla}f = \nabla f + \mathcal{N}(0, \sigma^2)$
where the variance $\sigma^2$ is adapted throughout training as follows:
$\sigma^2 = \frac{\eta}{(1 + t)^\gamma}$
where $\eta$ and $\gamma$ are hyper-parameters and $t$ the current iteration. In experiments, the authors show that gradient noise has the potential to improve accuracy, especially when optimization is otherwise difficult, e.g. when starting from a poor initialization.
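A minimal sketch of the scheme; the defaults $\eta = 0.3$ and $\gamma = 0.55$ are taken from the hyper-parameter values reported in the paper:

```python
import math
import random

def noisy_gradient(grad, t, eta=0.3, gamma=0.55):
    """Add annealed Gaussian noise to a gradient vector:
    sigma^2 = eta / (1 + t)^gamma, decaying with iteration t."""
    sigma = math.sqrt(eta / (1.0 + t) ** gamma)
    return [g + random.gauss(0.0, sigma) for g in grad]
```

The noise variance shrinks over training, so early iterations explore more while late iterations converge normally.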
Also find this summary at [davidstutz.de](https://davidstutz.de/category/reading/).
Towards a Neural Statistician
Harrison Edwards and Amos Storkey
Abstract: An efficient learner is one who reuses what they already know to tackle a new problem. For a machine learner, this means understanding the similarities amongst datasets. In order to do this, one must take seriously the idea of working with datasets, rather than datapoints, as the key objects to model. Towards this goal, we demonstrate an extension of a variational autoencoder that can learn a method for computing representations, or statistics, of datasets in an unsupervised fashion. The network is trained to produce statistics that encapsulate a generative model for each dataset. Hence the network enables efficient learning from new datasets for both unsupervised and supervised tasks. We show that we are able to learn statistics that can be used for: clustering datasets, transferring generative models to new datasets, selecting representative samples of datasets and classifying previously unseen classes.
Summary by Hugo Larochelle
This paper can be thought of as proposing a variational autoencoder applied to a form of meta-learning, i.e. where the input is not a single input but a dataset of inputs. For this, in addition to having to learn an approximate inference network over the latent variable $z_i$ for each input $x_i$ in an input dataset $D$, approximate inference is also learned over a latent variable $c$ that is global to the dataset $D$. By using Gaussian distributions for $z_i$ and $c$, the reparametrization trick can be used to train the variational autoencoder.
The generative model factorizes as
$p(D=(x_1,\dots,x_N), (z_1,\dots,z_N), c) = p(c) \prod_i p(z_i|c) p(x_i|z_i,c)$
and learning is based on the following variational posterior decomposition:
$q((z_1,\dots,z_N), c|D=(x_1,\dots,x_N)) = q(c|D) \prod_i q(z_i|x_i,c)$.
Moreover, latent variable $z_i$ is decomposed into multiple ($L$) layers $z_i = (z_{i,1}, \dots, z_{i,L})$. Each layer in the generative model is directly connected to the input. The layers are generated from $z_{i,L}$ to $z_{i,1}$, each layer being conditioned on the previous (see Figure 1 *Right* for the graphical model), with the approximate posterior following a similar decomposition.
The architecture for the approximate inference network $q(c|D)$ first maps all inputs $x_i\in D$ into a vector representation, then performs mean pooling of these representations to obtain a single vector, followed by a few more layers to produce the parameters of the Gaussian distribution over $c$.
Training is performed by stochastic gradient descent, over minibatches of datasets (i.e. multiple sets $D$).
The model has multiple applications, explored in the experiments. One is of summarizing a dataset $D$ into a smaller subset $S\subseteq D$. This is done by initializing $S\leftarrow D$ and greedily removing elements of $S$, each time minimizing the KL divergence between $q(c|D)$ and $q(c|S)$ (see the experiments on a synthetic Spatial MNIST problem of section 5.3).
Another application is few-shot classification, where very few examples of a number of classes are given, and a new test example $x'$ must be assigned to one of these classes. Classification is performed by treating the small set of examples of each class $k$ as its own dataset $D_k$. Then, test example $x'$ is classified into the class $k$ for which the KL divergence between $q(c|x')$ and $q(c|D_k)$ is smallest. Positive results are reported when training on OMNIGLOT classes and testing on either the MNIST classes or unseen OMNIGLOT datasets, when compared to a 1-nearest neighbor classifier based on the raw input or on a representation learned by a regular autoencoder.
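Assuming, as in the paper, that $q(c|\cdot)$ is a diagonal Gaussian, the classification rule can be sketched as follows (the function and variable names are mine, and the statistics would come from the inference network):

```python
import math

def kl_diag_gauss(mu_q, var_q, mu_p, var_p):
    # KL(q || p) between diagonal Gaussians, summed over dimensions
    return 0.5 * sum(
        math.log(vp / vq) + (vq + (mq - mp) ** 2) / vp - 1.0
        for mq, vq, mp, vp in zip(mu_q, var_q, mu_p, var_p)
    )

def classify(test_stats, class_stats):
    # assign x' to the class k minimizing KL(q(c|x') || q(c|D_k));
    # test_stats and class_stats[k] are (mean, variance) pairs
    return min(class_stats, key=lambda k: kl_diag_gauss(*test_stats, *class_stats[k]))
```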
Finally, another application is that of generating new samples from an input dataset of examples. The approximate posterior is used to compute $q(c|D)$. Then, $c$ is assigned to its posterior mean, from which a value for the hidden layers $z$ and finally a sample $x$ can be generated. It is shown that this procedure produces convincing samples that are visually similar from those in the input set $D$.
**My two cents**
Another really nice example of deep learning applied to a form of meta-learning, i.e. learning a model that is trained to take *new* datasets as input and generalize even when confronted with datasets coming from an unseen data distribution. I'm particularly impressed by the many tasks explored successfully with the same approach: few-shot classification and generative sampling, as well as a form of summarization (though this last probably isn't really meta-learning). Overall, the approach is quite elegant and appealing.
The very simple, synthetic experiments of section 5.1 and 5.2 are also interesting. Section 5.2 presents the notion of a *prior-interpolation layer*, which is well motivated but seems to be used only in that section. I wonder how important it is, outside of the specific case of section 5.2.
Overall, very excited by this work, which further explores the theme of meta-learning in an interesting way.
Enhancing The Reliability of Out-of-distribution Image Detection in Neural Networks
Shiyu Liang and Yixuan Li and R. Srikant
Keywords: cs.LG, stat.ML
Abstract: We consider the problem of detecting out-of-distribution images in neural networks. We propose ODIN, a simple and effective method that does not require any change to a pre-trained neural network. Our method is based on the observation that using temperature scaling and adding small perturbations to the input can separate the softmax score distributions between in- and out-of-distribution images, allowing for more effective detection. We show in a series of experiments that ODIN is compatible with diverse network architectures and datasets. It consistently outperforms the baseline approach by a large margin, establishing a new state-of-the-art performance on this task. For example, ODIN reduces the false positive rate from the baseline 34.7% to 4.3% on the DenseNet (applied to CIFAR-10) when the true positive rate is 95%.
Liang et al. propose a perturbation-based approach for detecting out-of-distribution examples using a network's confidence predictions. In particular, the approach is based on the observation that neural networks make more confident predictions on images from the original data distribution (in-distribution examples) than on examples taken from a different distribution, i.e., a different dataset (out-of-distribution examples). This effect can further be amplified by using a temperature-scaled softmax, i.e.,
$ S_i(x, T) = \frac{\exp(f_i(x)/T)}{\sum_{j = 1}^N \exp(f_j(x)/T)}$
where $f_i(x)$ are the predicted logits and $T$ a temperature parameter. Based on these softmax scores, perturbations $\tilde{x}$ are computed using
$\tilde{x} = x - \epsilon \text{sign}(-\nabla_x \log S_{\hat{y}}(x;T))$
where $\hat{y}$ is the predicted label of $x$. This is similar to "one-step" adversarial examples; however, in contrast to minimizing the confidence of the true label, the confidence in the predicted label is maximized. This, applied to in-distribution and out-of-distribution examples, is illustrated in Figure 1 and meant to emphasize the difference in confidence. Afterwards, in- and out-of-distribution examples can be distinguished by simple thresholding on the predicted confidence, as shown in various experiments, e.g., on Cifar10 and Cifar100.
https://i.imgur.com/OjDVZ0B.png
Figure 1: Illustration of the proposed perturbation to amplify the difference in confidence between in- and out-distribution examples.
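A sketch of the temperature-scaled score and the thresholding step. The input-perturbation step is omitted because it requires back-propagation through the network; the temperature and threshold values here are illustrative, whereas in practice they are tuned on validation data:

```python
import math

def temperature_softmax(logits, T=1000.0):
    # S_i(x, T) = exp(f_i(x)/T) / sum_j exp(f_j(x)/T)
    z = [l / T for l in logits]
    m = max(z)                        # subtract max for numerical stability
    e = [math.exp(v - m) for v in z]
    s = sum(e)
    return [v / s for v in e]

def looks_in_distribution(logits, T=1000.0, threshold=0.5):
    # threshold the maximum temperature-scaled softmax score
    return max(temperature_softmax(logits, T)) >= threshold
```

Raising T flattens the softmax, which separates the score distributions of in- and out-of-distribution inputs more cleanly.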
The Lottery Ticket Hypothesis: Finding Sparse, Trainable Neural Networks
Jonathan Frankle and Michael Carbin
Keywords: cs.LG, cs.AI, cs.NE
First published: 2018/03/09
Abstract: Neural network pruning techniques can reduce the parameter counts of trained networks by over 90%, decreasing storage requirements and improving computational performance of inference without compromising accuracy. However, contemporary experience is that the sparse architectures produced by pruning are difficult to train from the start, which would similarly improve training performance. We find that a standard pruning technique naturally uncovers subnetworks whose initializations made them capable of training effectively. Based on these results, we articulate the "lottery ticket hypothesis:" dense, randomly-initialized, feed-forward networks contain subnetworks ("winning tickets") that - when trained in isolation - reach test accuracy comparable to the original network in a similar number of iterations. The winning tickets we find have won the initialization lottery: their connections have initial weights that make training particularly effective. We present an algorithm to identify winning tickets and a series of experiments that support the lottery ticket hypothesis and the importance of these fortuitous initializations. We consistently find winning tickets that are less than 10-20% of the size of several fully-connected and convolutional feed-forward architectures for MNIST and CIFAR10. Above this size, the winning tickets that we find learn faster than the original network and reach higher test accuracy.
Frankle and Carbin discover so-called winning tickets, subsets of the weights of a neural network that are sufficient to obtain state-of-the-art accuracy. The lottery ticket hypothesis states that dense networks contain subnetworks, the winning tickets, that can reach the same accuracy when trained in isolation, from scratch. The key insight is that these subnetworks seem to have received an optimal initialization. Given a complex trained network for, e.g., Cifar, weights are pruned based on their absolute value, i.e., weights with small absolute value are pruned first. The remaining network is trained from scratch using the original initialization and reaches competitive performance using less than 10% of the original weights. As soon as the subnetwork is re-initialized, these results cannot be reproduced though. This suggests that these subnetworks obtained some sort of "optimal" initialization for learning.
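The prune-and-rewind step described above can be sketched as follows, using a flat weight list for simplicity (the real procedure operates per layer and is often iterated over several rounds):

```python
def magnitude_prune_mask(weights, keep_fraction=0.2):
    # keep the keep_fraction of weights with largest absolute value
    k = max(1, int(len(weights) * keep_fraction))
    threshold = sorted((abs(w) for w in weights), reverse=True)[k - 1]
    return [abs(w) >= threshold for w in weights]

def winning_ticket(initial_weights, trained_weights, keep_fraction=0.2):
    # prune by trained magnitude, then rewind the surviving weights
    # to their original initialization values
    mask = magnitude_prune_mask(trained_weights, keep_fraction)
    return [w0 if m else 0.0 for w0, m in zip(initial_weights, mask)]
```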
Drug treatment efficiency depends on the initial state of activation in nonlinear pathways
Victoria Doldán-Martelli & David G. Míguez (ORCID: orcid.org/0000-0001-8065-1142)
Subjects: Biochemical reaction networks, Nonlinear phenomena
Abstract: An accurate prediction of the outcome of a given drug treatment requires quantitative values for all parameters and concentrations involved as well as a detailed characterization of the network of interactions where the target molecule is embedded. Here, we present a high-throughput in silico screening of all potential networks of three interacting nodes to study the effect of the initial conditions of the network in the efficiency of drug inhibition. Our study shows that most network topologies can induce multiple dose-response curves, where the treatment has an enhanced, reduced or even no effect depending on the initial conditions. The type of dual response observed depends on how the potential bistable regimes interplay with the inhibition of one of the nodes inside a nonlinear pathway architecture. We propose that this dependence of the strength of the drug on the initial state of activation of the pathway may be affecting the outcome and the reproducibility of drug studies and clinical trials.
Some of the main potential contributions of Systems Biology to the field of Pharmacology are to help design better drugs [1,2], to find better targets [3] or to optimize treatment strategies [4]. To do that, a number of studies focus on the architecture of the biomolecular interaction networks that regulate signal transduction and how they introduce ultrasensitivity, desensitization, adaptation, spatial symmetry breaking and even oscillatory dynamics [5,6]. To identify the source of these effects, large scale signaling networks are often dissected into minimal sets of recurring interaction patterns called network motifs [7]. Many of these motifs are nonlinear, combining positive and negative feedback and feed-forward loops that introduce a rich variety of dynamic responses to a given stimulus.
In the context of protein-protein interaction networks, these loops of regulation are mainly based on interacting kinases and phosphatases. The strength of these interactions can be modulated by small molecules that can cross the plasma membrane [8] and block the activity of a given kinase in a highly specific manner [9]. Inhibition of a dysfunctional component of a given pathway via small-molecule inhibition has been successfully used to treat several diseases, such as cancer or auto-immune disorders. Nowadays, 31 of these inhibitors are approved by the FDA, while many more are currently undergoing clinical trials [10].
Characterization of inhibitors and their efficiency [11] and specificity towards all human kinases constitutes a highly active area of research [12,13,14]. Importantly, since these inhibitors target interactions that are embedded in highly nonlinear biomolecular networks, the response to treatment is often influenced by the architecture of the network. For instance, treatment with the mTOR-inhibitor rapamycin results in reactivation of the Akt pathway due to the attenuation of the negative feedback regulation by mTORC1 [15], also inducing a new steady state with high Akt phosphorylation [16]. In addition, the nonlinear interactions in the MEK/ERK pathway have been shown to induce different modes of response to inhibition [17], and even bimodal MAP kinase (ERK) phosphorylation responses after inhibition in T-lymphocytes [18]. The same interplay between positive and negative feedbacks induces ERK activity pulses, with a frequency and amplitude that can be modulated by EGFR (epidermal growth factor receptor) and MEK (Mitogen-activated protein kinase kinase) inhibition, respectively [19].
One of the basic characteristics that nonlinear interactions can induce in a system is multi-stability, commonly associated with the presence of direct or indirect positive feedback loops in the network. Multi-stability is characterized by the dependence of the final steady state of the system on the initial conditions, and it has been observed experimentally in vitro [20,21], in vivo [22,23], and in synthetic circuits [24,25]. In the context of biological networks and drug treatment, this dependence on initial conditions may result in differences in the effect of a given drug, depending on the initial state of the system.
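This dependence of a drug's effect on the initial state can be illustrated with a minimal one-variable toy model (not the paper's three-node networks; all parameter values are illustrative): a self-activating node X whose activation is scaled down by an inhibitor I, following dX/dt = (v/(1+I)) * X^2/(1+X^2) - X. Integrating to steady state from a low and a high initial condition yields two different dose-response curves from the same parameters:

```python
def steady_state(x0, inhibitor, v=4.0, dt=0.01, steps=20000):
    # forward-Euler integration of dX/dt = (v/(1+I)) * X^2/(1+X^2) - X
    x = x0
    for _ in range(steps):
        x += dt * ((v / (1.0 + inhibitor)) * x * x / (1.0 + x * x) - x)
    return x

doses = [i / 10.0 for i in range(21)]             # inhibitor I from 0 to 2
ds_low = [steady_state(0.1, I) for I in doses]    # low initial condition
ds_high = [steady_state(4.0, I) for I in doses]   # high initial condition
```

For this toy model the low curve stays "off" at every dose, while the high curve remains high until the bistable regime disappears at a saddle-node near I = 1 and then collapses, qualitatively like the dual responses analyzed below.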
Here, we investigate whether the efficiency of drug inhibition is affected by the initial conditions in the proteins of signaling pathways. To do that, we set up a computational high-throughput screening to explore all possible networks of 3 nodes and monitor their response to inhibition of one of the nodes. Each network has a topology that is represented by a system of three ODEs that describe a particular set of Michaelis-Menten interactions between input, target and output nodes. Starting from two different initial conditions, we generate two dose-response curves for each set of parameter values. The comparison of these two curves allows us to characterize each network topology in terms of its impact on the outcome of drug inhibition. Using this approach we found that, in most of the possible network topologies, the initial state of the system determines the efficiency of a given drug, increasing, decreasing or even disrupting the efficiency of inhibition. We conclude that this dependence on the initial conditions may be compromising the reproducibility of in vitro and in vivo studies that involve inhibitory treatments.
The strength of inhibition depends on the initial conditions for most of the networks
At first inspection, our screening reports differences between the two dose-response curves for around 80% of all network topologies. This suggests that the efficiency of the inhibition depends on the initial conditions for most of the possible three-node network topologies, at least in a certain region of the parameter space. The percentage of networks where the two dose-response curves do not coincide increases with the connectivity of the network, as shown in Fig. 1 (blue bars and left vertical axis), up to 97% for networks with 8 links between input, target and output (251 of all possible 256 networks of 8 links in our study). The percentage of simulations that show multiple dose-response curves also increases with the number of links in the network (green bars and right vertical axis in Fig. 1) up to 5.5% for the more connected topologies.
General statistical analysis of the high-throughput screening. Bar plot showing the percentage of cases with multiple dose-response curves to inhibition increases with the network connectivity. Blue bars correspond to the percentage of network topologies (left vertical axis) and green bars correspond to the percentage of simulations (right vertical axis) that show multiple dose-response curves (each simulation corresponds to a particular combination of parameters). Values in each bar illustrate the number of positive cases over the total number of cases.
When comparing the two dose-response curves, we can identify different scenarios of how the initial conditions affect the efficiency of the drug treatment. The most common scenario corresponds to a shift in the dose-response curve, i.e., the initial condition affects the efficiency of the inhibitor. This behavior is characterized by a shift in the EC50 of the dose-response curve (i.e., the concentration of inhibitor that induces a half-maximal effect in the output). An example of this type of response is illustrated in Fig. 2a–d. The two dose-response curves are plotted in Fig. 2b, corresponding to each initial condition IClow and IChigh, in blue (DSlow) and red (DShigh), respectively. For this network configuration and these conditions, the EC50 of the inhibitor changes around 1.5 orders of magnitude. This type of dependence on the initial conditions is simply a result of a bistable regime, as shown in the phase plane in Fig. 2c (i.e., outside the bistable region, the final steady state does not depend on the initial condition whereas inside bistable regions, different initial conditions may lead to different steady states). Inside the bistable regime, the nullclines for the inhibitor concentration marked in Fig. 2b show two stable fixed points coexisting for the same conditions (blue and red solid circles) and the unstable fixed point (empty black circle). Figure 2d shows the bifurcation diagram with two stable branches that coexist for a particular range of values of inhibitor. Video S1 is an animation of how the nullclines and the steady states change with the concentration of inhibitor (black curves plot the trajectories of the initial conditions towards their corresponding steady state). This scenario can also occur in conditions where the inhibitor is acting as an activator of the output node, as illustrated in Supp. Figure 4b.
The effect of initial conditions can shape the dose-response curve in different ways. (a–d) Shift in the EC50, (e–h) insensitization of one of the dose-response curves, (i–l) switch in the effect of the drug. Panels (a, e, i) represent the network topologies for each mode. Pointed arrows represent positive interactions (activation) and blunt arrows represent negative interactions (de-activation). Panels (b, f, j) represent the dose-response curves DSlow (blue) and DShigh (red) for initial conditions IClow and IChigh, respectively. The rest of parameter values are the same between the two curves. Circles represent the steady state solutions of the system (blue and red solid circles correspond to SSlow and SShigh, respectively, and the empty black circle represents the unstable steady state) for a particular concentration of inhibitor (indicated by the vertical dashed lines in panels b, f, j). Panels (c, g, k) show the phase plane with vector field and nullclines for X3 (c, g, k) in blue and X2 (c, g) or X1 (k) in orange, representing the two stable steady states SSlow (blue) and SShigh (red), respectively. Panels (d, h, l) show the bifurcation diagram of X3. Black curves are the stable branches and the dashed red curve is the unstable branch.
Another common scenario corresponds to one of the dose-response curves showing a standard response to treatment, while the other does not respond over the same range of concentrations of inhibitor. An example of such dual dose-response curves is shown in Fig. 2f for the network illustrated in Fig. 2e. In this scenario, the inhibitor acts as an activator of X3 when we start from IClow, but if the system starts from IChigh, it remains insensitive to changes in the concentration of inhibitor. Alternatively, different initial conditions can also reverse the effect of a given drug. For instance, the same treatment can result in inhibition or activation of the output signal, depending simply on the initial state of activation of the input, target and output nodes. An example of this behavior is shown in Fig. 2i–l. The two dose-response curves in Fig. 2j for the network in Fig. 2i show one of the curves (DSlow) increasing with the concentration of inhibitor, while the other (DShigh) decreases. The phase diagram (Fig. 2k) for intermediate values of the inhibitor shows two stable fixed points (filled red and blue circles) and an unstable fixed point (empty black circle). The bifurcation diagram (Fig. 2l) presents two stable branches, with the upper branch decreasing as the inhibitor is increased. This diagram shows that the increase in DSlow is caused by a transition from a bistable to a monostable regime with higher X3. This discontinuous jump in the dose-response curve is less pronounced for networks with higher connectivity, but we selected this example because its simplicity allows us to illustrate its nullclines in a two-dimensional phase plane instead of a three-dimensional plot.
Different initial conditions can induce increased or decreased treatment efficiency
Among all motifs that induce multiple dose-response curves, we can further characterize the topologies by comparing the two curves with respect to the two initial conditions. The most common scenario is the situation illustrated in Fig. 2b and Video S1, where the less sensitive curve (higher EC50) is obtained from the initial condition that results from applying a low concentration of inhibitor (IClow), and the more sensitive curve when the system starts from the initial condition that results from applying a high concentration of inhibitor (IChigh). This increased sensitivity at intermediate concentrations of inhibitor occurs whether the treatment results in deactivation (as in Fig. 2b) or activation (as in Supp. Figure 4b) of the target. This situation arises because, in the bistable regime, each initial condition IClow and IChigh evolves to its closest steady state in the phase space.
Several network topologies also exhibit a different scenario, characterized by an inversion in the sensitivity of the treatment between the two dose-response curves. This scenario is presented in Fig. 3b, and shows the red (DShigh) and blue curves (DSlow) swapped compared to Fig. 2b. For the inhibitor concentration indicated in Fig. 3b, the initial condition with lower X3 (red rhomb in Fig. 3c) evolves towards the steady state with higher X3 (red solid circle). On the other hand, the initial condition with higher X3 (blue rhomb) evolves to the steady state with lower X3 (blue solid circle). This is clearly shown by the trajectories (black dotted curves) corresponding to two simulations with the same exact parameter values, but starting from the two different initial conditions (IClow and IChigh).
The network architecture can induce inverse bistability. (a–d) Shift in the EC50. (e–h) Insensitization of one of the dose-response curves. Panels (a, e) represent examples of network topologies that show two different cases of inverse bistability. Pointed arrows represent positive interactions (activation) and blunt arrows represent negative interactions (de-activation). Panels (b, f) represent the dose-response curves DSlow (blue) and DShigh (red) for initial conditions IClow and IChigh, respectively. The rest of parameter values are the same for the two curves. Blue and red solid circles SSlow and SShigh represent the steady state solutions for a given concentration of inhibitor (vertical dashed line). Panels (c, g) represent the three-dimensional phase plane, with the trajectories of each simulation starting from each of the two initial conditions (red and blue rhombs), and the separatrix between the two basins of attraction (red surface). Panels (d, h) show the bifurcation diagram of X3. Black curves are the stable branches and the red dashed curve is the unstable branch.
In terms of the effect of the drug, the initial condition that results from applying a high concentration of inhibitor (IChigh) shows a reduced response to the drug compared to the initial condition that results from a low concentration of inhibitor (IClow). In other words, the EC50 of DShigh is now higher than that of DSlow, as shown in Fig. 3b. This contrasts with the scenario of Fig. 2b and Video S1, where the EC50 of the drug is lower for DShigh than for DSlow. To understand this behavior, we plot the three-dimensional separatrix between the two basins of attraction of the bistable regime in Fig. 3c. Since the separatrix divides the phase space vertically, the system is forced to travel a long path in X3 concentration towards the steady state in its basin of attraction. This translates into a shift in the dose-response curves in the bistable regime and, therefore, an increase in the EC50 when the system is initially inhibited.
In this scenario, each initial condition IClow and IChigh does not evolve to its closest steady state but instead to the steady state that is further away in X3 concentration. We will refer to this scenario as inverse bistability. Video S2 is an animation of how the two initial conditions transit to their corresponding steady state for increasing concentrations of the inhibitor. This inversion of the bistable solutions can also occur in conditions where the inhibitor is acting as an activator of the output node, as illustrated in Supp. Figure 5b.
Analogous to the situation of Fig. 2e–h, where the dose-response curve DShigh becomes insensitive to the drug, other topologies present the opposite scenario, i.e., DShigh responds to the drug while DSlow is insensitive. This scenario is illustrated in Fig. 3f for the network topology of Fig. 3e. Here, DSlow responds by reducing X3 activation by less than 10%, while a high initial concentration of inhibitor sensitizes the system, i.e., the dose-response curve DShigh shows a much stronger inhibition of the output. Figure 3g plots the three-dimensional phase space for a particular inhibitor concentration in the bistable regime. Again, the separatrix divides the space in such a way that the initial condition IChigh evolves to the steady state with higher X3, causing the switch in DShigh (red line in Fig. 3f). Figure 3h shows that the two branches are stable for all concentrations of inhibitor tested. Despite this, the system is able to switch from one solution to the other because the separatrix moves relative to the fixed initial conditions. Video S3 corresponds to an animation of this scenario. Please note that, depending on the parameter values, the same topology can exhibit different responses (for instance, the same network is used in Figs 2a and 3e to generate normal and inverse irreversible bistability). The transition between different regimes depending on the parameter values is analyzed in the next sections.
The network architecture can induce inverse hysteresis loops
As discussed above, inverse bistability occurs due to the interplay in phase space between the initial conditions and the basins of attraction of the two final stable steady states. Nonetheless, our screening revealed another family of topologies that shows an equivalent scenario of inverse bistability, but with additional features. An illustrative example of this behavior is shown in Fig. 4. The first example corresponds to a network topology of four links that shows inverse bistability as defined in the previous section, i.e., two dose-response curves where DShigh has a higher EC50 than DSlow. Since X2 does not receive input from X1 and X3, the phase space is plotted in two dimensions to show the nullclines and the vector field (Fig. 4c). Interestingly, the bifurcation diagram in Fig. 4d shows a more complex configuration than in Fig. 3d, with the two stable branches now extending from low to high X3. This configuration confers another interesting property on these networks: inverse bistability occurs not only when we start with fixed initial conditions, but also if the concentration of the inhibitor is gradually increased or decreased from each initial condition. In other words, if the concentration of inhibitor is progressively increased or decreased, the system follows a hysteresis loop that is reversed compared to the standard hysteresis observed in magnetic, optical and other physical systems. To illustrate this, we developed an animation where the concentration of inhibitor is gradually increased and then decreased, and the evolution of steady states forms an inverse hysteresis loop (Video S4).
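The gradual up-and-down protocol can be sketched on a toy one-variable positive-feedback model (a hypothetical stand-in with illustrative parameters; reproducing an *inverse* loop requires the full three-node networks of Fig. 4, so this sketch only demonstrates the sweep protocol and path dependence):

```python
def step_to_steady(x, inh, t_end=100.0, dt=0.01):
    # Toy model: dx/dt = b + V0/(1 + Ka*inh) * x^2/(K^2 + x^2) - d*x
    # (illustrative parameter values, forward Euler)
    b, V0, K, d, Ka = 0.02, 1.0, 0.4, 1.0, 20.0
    V = V0 / (1.0 + Ka * inh)
    for _ in range(int(t_end / dt)):
        x += dt * (b + V * x * x / (K * K + x * x) - d * x)
    return x

doses = [i * 0.01 for i in range(11)]   # 0.00, 0.01, ..., 0.10
x = 0.9                                 # start on the high branch, no inhibitor
up = []
for inh in doses:                       # gradually increase the dose,
    x = step_to_steady(x, inh)          # carrying the steady state forward
    up.append(x)
down = []
for inh in reversed(doses):             # then gradually decrease it again
    x = step_to_steady(x, inh)
    down.append(x)
```

In this toy model the loop is a standard (and here irreversible) one: the system jumps from the high to the low branch on the way up and stays low on the way down, so `up[0]` and `down[-1]` differ even though both correspond to zero inhibitor.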
The network architecture can induce inverse hysteresis. (a) Example of a network architecture that induces inverse hysteresis. Pointed arrows represent positive interactions (activation) and blunt arrows represent negative interactions (de-activation). (b) Dose-response curves DSlow (blue) and DShigh (red) for initial conditions IClow and IChigh, respectively. The rest of the parameter values are the same between the two curves. Blue and red solid circles represent the two steady state solutions for a given concentration of inhibitor (SSlow and SShigh). (c) Phase plane with vector field and nullclines for X3 (blue) and X1 (orange). The black empty circle shows the unstable steady state. (d) Bifurcation diagram of X3. Black curves are the stable branches and the red dashed curve is the unstable branch, for the inhibitor concentration indicated in panel b. (e) Box plot for all parameter sets that show standard and inverse hysteresis. Blue, green and red background represents the saturated, unconstrained and linear regimes of the Michaelis-Menten kinetics, respectively. (f) Changes in the dose-response curves when two parameters are varied from standard to inverse hysteresis conditions.
To understand the interactions that induce this inverse hysteresis response, we compared (Fig. 4e) 100 different parameter sets for which this topology produces standard (orange) or inverse (blue) bistability. This box plot shows that most parameters have overlapping distributions for both types of bistability, while two of them are clearly separated (K2,3 and k1,3 for this particular network). Next, dose-response curves are generated by changing these two parameters between their average values that produce standard or inverse bistability (the rest of the parameters are fixed and correspond to the average of the mean for both orange and blue distributions). This analysis reveals that K2,3 mainly affects the response of X3 in the range of low inhibition and k1,3 mainly affects the steady state in the range of high inhibition, while the intermediate bistable region remains almost unchanged. When both are varied simultaneously (Fig. 4f), we clearly observe that these changes in the low and high ranges of inhibitor interplay to change the nature of the drug from inhibitor to activator of the node X3.
This sequence also illustrates that, in some particular topologies, standard bistability can be converted to inverse bistability by manipulating some key interactions that reverse the effect of the inhibitor while maintaining the bistable region at intermediate inhibitor concentrations. To do that in this particular topology, the strength of the interaction between X1 and X3 is reduced, while the interaction between X2 and X3 goes from a linear to an unconstrained regime. A different topology with a similar transition from standard to inverse bistability is shown in Supp. Figure 6; an additional example of a network topology able to produce inverse bistability and inverse hysteresis is shown in Supp. Figure 8.
Overall characterization of the topologies reveals the minimal motifs that exhibit inverse bistability
To characterize the basic ingredients underlying inverse bistability, we proceed to analyze all potential topologies that exhibit this behavior and find relationships and similarities between them. When grouped by number of links, we observe that the percentage of networks that exhibit inverse bistability increases with the connectivity of the network (yellow columns in Fig. 5a); this also happens for the percentage of simulations (one simulation corresponds to one combination of parameters) showing inverse bistability (see Supp. Figure 3). Figure 5b represents all topologies that show inverse bistability as an atlas that correlates topologies by their architecture, identifying topologies that contain another topology of lower connectivity. This representation reveals 19 minimal motifs of 4 links that are contained in most of the more highly connected topologies (the number of links in a given motif corresponds to the number of nonzero entries of the first 3 rows of the interaction matrix). These 19 topologies are represented as 3 × 3 matrix plots (which correspond to the first three rows of the interaction matrix) as follows: white is "1" (activation), black is "−1" (deactivation) and grey is "0" (no interaction). The 4-link topologies that can also induce an inverse hysteresis loop are highlighted in red. The row below groups these 19 topologies into sets that differ in only two interactions (a given topology can result in different modes of response).
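Building the atlas requires two simple operations on the 3 × 3 core matrices: counting links and testing whether one topology contains another. A minimal sketch (topologies as nested lists of 0/±1 entries; the function names are ours):

```python
def links(I3):
    """Number of links of a topology: nonzero entries of the 3x3 core matrix
    (the first three rows of the full interaction matrix)."""
    return sum(1 for row in I3 for v in row if v != 0)

def contains(big, small):
    """True if 'big' realizes every interaction of 'small' with the same sign,
    i.e. 'small' is a sub-topology of 'big' (used to draw the atlas links)."""
    return all(s == 0 or s == b
               for row_s, row_b in zip(small, big)
               for s, b in zip(row_s, row_b))
```

For example, a 4-link topology `[[1,0,0],[0,0,-1],[1,0,1]]` contains the 3-link topology `[[1,0,0],[0,0,0],[1,0,1]]`, but not vice versa.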
Characterization of networks that show inverse bistability and inverse hysteresis. (a) The percentage of topologies that show inverse bistability increases with the network connectivity. Blue bars correspond to the percentage of topologies with the same dose-response curve for both initial conditions; green and yellow bars correspond to the percentage of topologies that show an increase or decrease of the EC50, respectively. (b) Atlas for all network topologies that induce inverse bistability. Circles represent each of the topologies where our screening has shown inverse bistability. Networks of different connectivity are represented in different colors. Gray lines link topologies that contain another topology of lower connectivity. Networks of lower connectivity are represented as matrix plots for the interactions, where white represents activation, black is deactivation and grey means no interaction. These minimal networks are then grouped in families where just one or two interactions change (marked with diagonal lines). Matrix plots highlighted in red correspond to topologies that can also produce inverse hysteresis loops. The topology corresponding to each matrix plot is shown below (interactions that vary in sign or in terms of presence/absence inside a family of minimal network topologies are represented with dashed lines).
All 19 minimal topologies combine positive and negative interactions (i.e., there are no networks where all interactions are positive or all negative). In addition, all of them contain at least one positive feedback loop, which can be direct or indirect (i.e., the self-activation of a node involves another node of the network). The negative interaction can take the form of an indirect feedback loop (as in Supp. Figure 8a), an incoherent feed-forward loop, or not be part of a loop at all (as in Fig. 4a). We have found topologies where the interactions modulated by the inhibitor influence a positive feedback, a negative feedback, a feed-forward loop, or even several of them simultaneously. We suggest that inverse bistability results from the interplay between the positive feedback (which generates the bistability) and the negative interactions that shape the basins of attraction. Additionally, the inhibitor has to directly or indirectly affect the positive feedback and induce a change between the two stable states at a given concentration.
In this paper, we present the first global analysis to study how the network topology influences drug treatment. To do that, we focus on small networks of three interacting nodes where one of the nodes is the target of a small molecule inhibitor. We compare dose-response curves of the same treatment starting from two different initial conditions. Our analysis reveals that the initial conditions affect the efficiency of the treatment in most network topologies of three nodes. This dependence arises from the nonlinear characteristics of the network topology, and it is translated into modifications in the dose-response curves and changes in the EC50 as well as in the overall effect of the inhibitor. Moreover, we found network configurations that show a novel behavior characterized by the inversion of the steady states with respect to the initial conditions. In some conditions, this "inverse bistability" can also result in "inverse hysteresis loops", where the reduction of the efficiency of the treatment also occurs when the concentration of inhibitor is varied gradually. To our knowledge, this is the first evidence of this type of response. Finally, our study shows that most of the topologies that show this inverse bistability and hysteresis contain core motifs of four links composed of a positive feedback and a negative regulation.
The workflow of our high-throughput screening is an in silico simulation of the experimental workflow used to determine dose-response curves. The fact that all the points in a dose-response curve start from the same initial state interplays with the bistable regions generated by a given network topology, resulting in a complex scenario where the relationship between the initial states and the basins of attraction in the phase space induces reversible or irreversible inverse bistability, and even inverse hysteresis loops.
When comparing the dose-response curves in standard versus inverse bistability, DShigh has a higher sensitivity (reduced EC50) than DSlow in standard bistability, while DShigh has a lower sensitivity (increased EC50) than DSlow in conditions of inverse bistability. This reduction in sensitivity is very different from the well-studied homologous or heterologous desensitization after repeated or prolonged receptor stimulation26,27. Receptor desensitization is achieved mainly by a single negative feedback loop that reduces the number or the efficiency of receptors on the cell surface after an initial stimulation28,29. This is translated into an initial strong transient activation of the targets downstream, while a second application of the stimulus does not show the same transient activation. While receptor desensitization focuses on transient responses, the reduced sensitivity resulting from the inverse bistability refers to true final steady states of the network.
Our study is limited to topologies of three main nodes that play different roles in the network, in an attempt to identify the minimal motifs that induce these dual dose-response curves. In principle, our results also apply to larger networks with more nodes that interact linearly, since linear protein-protein interactions can be reduced to smaller networks with equivalent dynamics without reducing the spectrum of reported behaviors3,30,31. We also expect that larger, more complex biological networks that contain any of the smaller motifs reported in our analysis (Fig. 5) will exhibit a similar or even stronger dependence on initial conditions (see for instance32), since our analysis shows that the percentage of networks with multiple dose-response curves increases with the connectivity of the network. Our lab is currently working on the experimental validation and characterization of signaling pathways with dual dose-response curves when targeted by an inhibitor (manuscript in preparation).
The characterization of the effect of a drug starts with an accurate and reproducible in vitro or in vivo dose-response curve to establish the optimal dose or schedule of treatment. The fact that, for most topologies, different initial conditions give different dose-response curves may compromise the reproducibility of drug treatment between biological samples or even patients. In conclusion, when designing drugs and treatments that target proteins embedded in highly inter-connected networks such as signal regulatory pathways, the efficiency of a given compound cannot be predicted if the state of activation of the network is unknown.
To study all potential network topologies that induce multiple dose-response curves, we set up a high-throughput approach that explores all possible network topologies, or connections between an input, a target and an output node, including positive and negative feedback auto-regulation (see Fig. 6a). This computational screening is inspired by previous studies that focus on network topologies inducing adaptation33, bistability and ultrasensitivity34 and spatial pattern formation35. Our approach introduces the effect of a drug inhibitor in one of the nodes of the network and focuses mainly on the characterization of the effect of the network in shaping the response to the inhibition. To do that, we generate and compare dose-response curves for a given topology and set of parameters, but starting from different initial conditions.
Scheme of the workflow for the high-throughput screening. (A) Scheme of the core network with all possible interactions between input, target, and output. (B) For each possible interaction matrix (5103 possible topologies), the phase space is sampled by randomly generating 10000 sets of parameter values for the catalytic (k) and Michaelis-Menten (K) constant matrices. For each of these parameter sets, the three differential equations for input, target and output are solved numerically for two different inhibitor concentrations ([inh]low = 0 nM and [inh]high = 10^3 nM). The resulting steady states are used as initial conditions (IClow and IChigh) for numerical simulations applying a range of inhibitor concentrations. The steady state value of the output node (X3) is plotted against the inhibitor concentration to generate dose-response curves DSlow and DShigh. Finally, both dose-response curves are compared.
Our core network is composed of three main components: an input node that receives a constant external stimulus, a target node that is inhibited by the drug, and the output node, which is used as a readout of the system activity. Details of the dynamics of the interaction between the nodes and automatization of the screening are described in the Supp. Material. In brief, the set of interactions can be generalized in the following equation:
$$\frac{\partial X_{j}}{\partial t}=\sum_{i=1}^{9}\left(\delta_{I_{i,j},1}\,\frac{(1-X_{j})\cdot X_{i}\cdot k_{i,j}}{K_{i,j}+1-X_{j}}\;-\;\delta_{I_{i,j},-1}\,\frac{X_{i}\cdot X_{j}\cdot k_{i,j}}{K_{i,j}+X_{j}}\right)$$
where X is the state vector that contains the concentration of the active form of the input X1, target X2 and output X3, as well as the values of the background activator (X4, X5, X6) and deactivator (X7, X8, X9) enzymes for each of them. These background interactions are incorporated to ensure that each node receives at least one activating and one deactivating interaction (see Supp. Material). Therefore, independently of the number of links, the system of equations for each topology contains only three equations (the concentrations X4, …, X9 are constants (parameters) rather than variables). Ii,j represents the components of the interaction matrices for all 5103 possible networks of interactions between input, target and output in our study (see Supp. Material). A given component Ii,j of the matrix is zero if Xi does not affect Xj, 1 if Xi activates Xj and −1 if Xi deactivates Xj. \(\delta_{I_{i,j},1}\) and \(\delta_{I_{i,j},-1}\) are Kronecker delta functions equal to 1 when Ii,j is 1 or −1, respectively. This way, the first term of the sum is nonzero when Xi activates Xj, while the second term is nonzero when Xi deactivates Xj.
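A direct transcription of this equation into code might look as follows (a sketch; the 0-based index convention and the representation of the matrices as nested lists are our assumptions):

```python
def rhs(X, I, k, Km):
    """Right-hand side of Eq. (1) for the three dynamic nodes.

    X  : length-9 state; X[0..2] = input, target, output; X[3..8] = constant
         background activator/deactivator enzymes.
    I  : 9x3 interaction matrix with entries in {-1, 0, 1}.
    k, Km : 9x3 catalytic and Michaelis-Menten constant matrices.
    """
    dX = [0.0, 0.0, 0.0]
    for j in range(3):            # only X1..X3 evolve
        for i in range(9):
            if I[i][j] == 1:      # Xi activates Xj: acts on the inactive pool 1 - Xj
                dX[j] += (1.0 - X[j]) * X[i] * k[i][j] / (Km[i][j] + 1.0 - X[j])
            elif I[i][j] == -1:   # Xi deactivates Xj: acts on the active pool Xj
                dX[j] -= X[i] * X[j] * k[i][j] / (Km[i][j] + X[j])
    return dX
```

Passing this right-hand side to any standard ODE integrator yields the time courses used throughout the screening.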
Parameters ki,j and Ki,j correspond to the catalytic and Michaelis-Menten constants for the activation or deactivation of Xj by Xi. The effect of the inhibitor is incorporated as a sigmoidal function, assuming fast dynamics of binding and unbinding to its target (quasi-steady-state approximation) and that the inhibitor is in excess over the enzyme X2 (see Supp. Material). Under these assumptions, X2 is substituted by the expression:
$$X_{2}\;\Rightarrow\;\frac{X_{2}}{1+K_{a}\cdot [\mathrm{inh}]}$$
when i = 2 in Eq. 1 (i.e., whenever X2 acts as an activating or deactivating enzyme). This way, the effect of the inhibitor can be directly incorporated into the equations independently of the architecture of the network, strongly facilitating the screening process. We fix the value of Ka of our inhibitor at 10^7 1/M (see Supp. Material) and assume that the inhibitor is in excess over the enzyme X2. This approach excludes from our screening all topologies where X2 regulates itself (see Supp. Material).
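The substitution can be sketched as a one-line helper (the function name is ours; Ka in 1/M and inh in M are the units assumed above):

```python
def effective_target(X2, inh, Ka=1.0e7):
    """Quasi-steady-state scaling of the active target by the inhibitor.

    Applied wherever X2 appears as an enzyme in Eq. (1); Ka in 1/M, inh in M.
    """
    return X2 / (1.0 + Ka * inh)
```

At inh = 1/Ka (here 100 nM) the effective activity of the target is exactly halved.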
The workflow can be described as follows (see Fig. 6b): For each particular network topology, all components ki,j and Ki,j of the catalytic and Michaelis-Menten constant matrices are randomly set from a desired range of values (see Supp. Material). Then, the system is numerically solved for two different constant concentrations of inhibitor ([inh]low = 0 nM and [inh]high = 10^3 nM), and the resulting steady state values of input, target and output are then used as initial conditions for new numerical simulations. Next, different concentrations of inhibitor are applied to the networks starting from these two initial conditions. The steady state of the output X3 is used to draw the corresponding dose-response curve. Finally, the two dose-response curves are analyzed, compared and classified (see Supp. Material for a more detailed explanation). This is repeated 10000 times for each topology, with different catalytic and Michaelis-Menten constant matrices, to sample the parameter space and identify regions where the dose-response curve depends on the initial conditions. Based on this, all network topologies are classified depending on the relationship between the two dose-response curves. This way, if both curves DSlow and DShigh are identical, the response of the inhibitor does not depend on the initial conditions, while if the two curves are different, the effect of the inhibitor depends on the initial state of activation of the system.
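This protocol can be sketched end-to-end on a toy one-variable model standing in for a three-node network (function and variable names are ours; parameters are illustrative, not drawn from the screening ranges):

```python
def steady(x, inh, t_end=200.0, dt=0.01):
    # One-variable stand-in for a three-node network: basal activation plus a
    # positive feedback quenched by the inhibitor (illustrative parameters).
    b, V0, K, d, Ka = 0.02, 1.0, 0.4, 1.0, 20.0
    V = V0 / (1.0 + Ka * inh)
    for _ in range(int(t_end / dt)):
        x += dt * (b + V * x * x / (K * K + x * x) - d * x)
    return x

# Step 1: equilibrate at the two reference doses to obtain the initial conditions.
ic_low = steady(0.5, inh=0.0)    # IC_low : pre-equilibrated without inhibitor
ic_high = steady(0.5, inh=1.0)   # IC_high: pre-equilibrated at a high dose

# Step 2: sweep the dose from each *fixed* initial condition.
doses = [i * 0.01 for i in range(11)]
DS_low = [steady(ic_low, inh) for inh in doses]
DS_high = [steady(ic_high, inh) for inh in doses]

# Step 3: compare the two curves; any difference flags an IC-dependent response.
differ = max(abs(a - c) for a, c in zip(DS_low, DS_high)) > 0.05
```

In the screening this comparison is repeated for every topology and every random parameter set, and the classification of `DS_low` versus `DS_high` is what separates the scenarios of Figs 2 and 3.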
This workflow is designed to mimic the typical experimental methodology to determine dose-response curves: Starting from multiple equivalent samples in the same experimental condition, different concentrations of the drug are administered to each of the samples, and the final steady state of the readout is plotted against the concentration of the drug. This is different from the typical studies of bistability in physical and chemical systems20,21, where an input parameter is gradually increased or decreased (i.e., the initial condition for each point in the curve corresponds to the steady state of the previous point in the analysis).
Doldán-Martelli, V., Guantes, R. & Míguez, D. G. A mathematical model for the rational design of chimeric ligands in selective drug therapies. CPT: pharmacometrics & systems pharmacology 2, e26 (2013).
Ruiz-Herrero, T., Estrada, J., Guantes, R. & Miguez, D. G. A Tunable Coarse-Grained Model for Ligand-Receptor Interaction. PLoS Computational Biology 9 (2013).
Míguez, D. G. Network nonlinearities in drug treatment. Interdisciplinary Sciences: Computational Life Sciences 5, 85–94 (2013).
Doldán-Martelli, V. & Míguez, D. G. Synergistic interaction between selective drugs in cell populations models. PloS one 10, e0117558 (2015).
Tyson, J. J., Chen, K. C. & Novak, B. Sniffers, buzzers, toggles and blinkers: dynamics of regulatory and signaling pathways in the cell. Current Opinion in Cell Biology 15, 221–231 (2003).
Ferrell, J. E. et al. Simple, realistic models of complex biological processes: Positive feedback and bistability in a cell fate switch and a cell cycle oscillator. FEBS Letters 583, 3999–4005 (2009).
Milo, R. et al. Network Motifs: Simple Building Blocks of Complex Networks. Science 298, 824–827 (2002).
Arkin, M. R. & Wells, J. A. Small-molecule inhibitors of protein-protein interactions: progressing towards the dream. Nature Reviews Drug Discovery 3, 301–317 (2004).
Arkin, M. R., Tang, Y. & Wells, J. A. Small-molecule inhibitors of protein-protein interactions: Progressing toward the reality. Chemistry and Biology 21, 1102–1114 (2014).
Wu, P., Nielsen, T. E. & Clausen, M. H. Small-molecule kinase inhibitors: An analysis of FDA-approved drugs. Drug Discovery Today 21, 5–10 (2016).
Holmgren, E. B. Theory of drug development, 1st edn (Chapman & Hall/CRC, 2013).
Fabian, M. A. et al. A small molecule-kinase interaction map for clinical kinase inhibitors. Nature biotechnology 23, 329–36 (2005).
Perlman, Z. E. et al. Multidimensional drug profiling by automated microscopy. Science (New York, N.Y.) 306, 1194–8 (2004).
Karaman, M. W. et al. A quantitative analysis of kinase inhibitor selectivity. Nature Biotechnology 26, 127–132 (2008).
Wan, X., Harkavy, B., Shen, N., Grohar, P. & Helman, L. J. Rapamycin induces feedback activation of Akt signaling through an IGF-1R-dependent mechanism. Oncogene 26, 1932–1940 (2007).
Rodrik-Outmezguine, V. S. et al. mTOR kinase inhibition causes feedback-dependent biphasic regulation of AKT signaling. Cancer discovery 1, 248–59 (2011).
Vogel, R. M., Erez, A. & Altan-Bonnet, G. Dichotomy of cellular inhibition by small-molecule inhibitors revealed by single-cell analysis. Nature communications 7, 12428 (2016).
Altan-Bonnet, G., Germain, R. N., Germain, R., Oltz, E. & Stewart, V. Modeling T Cell Antigen Discrimination Based on Feedback Control of Digital ERK Responses. PLoS Biology 3, e356 (2005).
Albeck, J., Mills, G. & Brugge, J. Frequency-Modulated Pulses of ERK Activity Transmit Quantitative Proliferation Signals. Molecular Cell 49, 249–261 (2013).
Vanag, V. K., Míguez, D. G. & Epstein, I. R. Designing an enzymatic oscillator: bistability and feedback controlled oscillations with glucose oxidase in a continuous flow stirred tank reactor. The Journal of chemical physics 125, 194515 (2006).
Míguez, D. G., Vanag, V. K. & Epstein, I. R. Fronts and pulses in an enzymatic reaction catalyzed by glucose oxidase. Proceedings of the National Academy of Sciences of the United States of America 104, 6992–7 (2007).
Elf, J., Nilsson, K., Tenson, T. & Ehrenberg, M. Bistable Bacterial Growth Rate in Response to Antibiotics with Low Membrane Permeability. Physical Review Letters 97, 258104 (2006).
Karslake, J., Maltas, J., Brumm, P. & Wood, K. B. Population Density Modulates Drug Inhibition and Gives Rise to Potential Bistability of Treatment Outcomes for Bacterial Infections. PLOS Computational Biology 12, e1005098 (2016).
Collins, J. J., Gardner, T. S. & Cantor, C. R. Construction of a genetic toggle switch in Escherichia coli. Nature 403, 339–342 (2000).
Burrill, D. R., Inniss, M. C., Boyle, P. M. & Silver, P. A. Synthetic memory circuits for tracking human cell fate. Genes & development 26, 1486–97 (2012).
Fehmann, H. C. & Habener, J. F. Homologous desensitization of the insulinotropic glucagon-like peptide-I(7–37) receptor on insulinoma (HIT-T15) cells. Endocrinology 128, 2880–2888 (1991).
Sun, Y. et al. Mechanism of glutamate receptor desensitization. Nature 417, 245–253 (2002).
Freedman, N. J. & Lefkowitz, R. J. Desensitization of G protein-coupled receptors. Recent progress in hormone research 51, 319–51; discussion 352–3 (1996).
Gainetdinov, R. R., Premont, R. T., Bohn, L. M., Lefkowitz, R. J. & Caron, M. G. Desensitization of G protein Coupled Receptors and neuronal Functions. Annu. Rev. Neurosci 27, 107–44 (2004).
Alon, U. Network motifs: theory and experimental approaches. Nature Reviews Genetics 8, 450–461 (2007).
Wolf, D. M. & Arkin, A. P. Motifs, modules and games in bacteria. Current Opinion in Microbiology 6, 125–134 (2003).
Straube, R. & Conradi, C. Reciprocal enzyme regulation as a source of bistability in covalent modification cycles. Journal of Theoretical Biology 330, 56–74 (2013).
Article PubMed MATH CAS Google Scholar
Ma, W., Trusina, A., El-Samad, H., Lim, W. A. & Tang, C. Defining network topologies that can achieve biochemical adaptation. Cell 138, 760–73 (2009).
Shah, N. A. & Sarkar, C. A. Robust Network Topologies for Generating Switch-Like Cellular Responses. PLoS Computational Biology 7, e1002085 (2011).
ADS MathSciNet Article PubMed PubMed Central CAS Google Scholar
Cotterell, J. & Sharpe, J. An atlas of gene regulatory networks reveals multiple three-gene mechanisms for interpreting morphogen gradients. Molecular Systems Biology 6, 425 (2010).
This work has been supported by the Ministry of Science and Technology of Spain via a Ramón y Cajal Fellowship (Ref. RYC-2010-07450), a grant from Plan Nacional framework (Ref. BFU2011-30303 and & BFU2014-53299-P) and a FPU fellowship. We thank Raúl Guantes, Juan Díaz Colunga, Marta Ibañes, Rosa Martínez Corral, Saúl Ares and Katherine Gonzales for invaluable help and technical assistance.
Facultad de Matemáticas, Universidad Carlos III, 28911, Leganés, Madrid, Spain
Victoria Doldán-Martelli
Centro de Biología Molecular Severo Ochoa, Depto. de Física de la Materia Condensada, Instituto Nicolás Cabrera and IFIMAC, Universidad Autónoma de Madrid, Campus de Cantoblanco, 28046, Madrid, Spain
David G. Míguez
D.G.M. and V.D.M.: Designed research, performed research, wrote the manuscript. All authors reviewed the manuscript.
Correspondence to David G. Míguez.
Doldán-Martelli, V., Míguez, D.G. Drug treatment efficiency depends on the initial state of activation in nonlinear pathways. Sci Rep 8, 12495 (2018). https://doi.org/10.1038/s41598-018-30913-9
\begin{definition}[Definition:Perfect Power]
A '''perfect power''' is an integer which can be expressed in the form:
:$a^k$
where both $a$ and $k$ are integers with $a \ge 1$ and $k \ge 2$.
\end{definition}
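As a small illustration of this definition (not part of the ProofWiki entry), the following sketch tests whether an integer is a perfect power by trying every exponent $k \ge 2$ up to $\log_2 n$ and searching for an integer $k$-th root by binary search:

```python
def is_perfect_power(n: int) -> bool:
    """Return True if n can be written as a**k with integers a >= 1, k >= 2."""
    if n < 1:
        return False
    if n == 1:
        return True  # 1 = 1**2
    k = 2
    while 2 ** k <= n:  # any nontrivial base is at least 2
        lo, hi = 1, n
        while lo <= hi:  # binary search for an integer k-th root
            mid = (lo + hi) // 2
            p = mid ** k
            if p == n:
                return True
            if p < n:
                lo = mid + 1
            else:
                hi = mid - 1
        k += 1
    return False
```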
Generator (mathematics)
In mathematics and physics, the term generator or generating set may refer to any of a number of related concepts. The underlying concept in each case is that of a smaller set of objects, together with a set of operations that can be applied to it, that result in the creation of a larger collection of objects, called the generated set. The larger set is then said to be generated by the smaller set. It is commonly the case that the generating set has a simpler set of properties than the generated set, thus making it easier to discuss and examine. It is usually the case that properties of the generating set are in some way preserved by the act of generation; likewise, the properties of the generated set are often reflected in the generating set.
List of generators
A list of examples of generating sets follow.
• Generating set or spanning set of a vector space: a set that spans the vector space
• Generating set of a group: A subset of a group that is not contained in any subgroup of the group other than the entire group
• Generating set of a ring: A subset S of a ring A generates A if the only subring of A containing S is A
• Generating set of an ideal in a ring
• Generating set of a module
• A generator, in category theory, is an object that can be used to distinguish morphisms
• In topology, a collection of sets that generate the topology is called a subbase
• Generating set of a topological algebra: S is a generating set of a topological algebra A if the smallest closed subalgebra of A containing S is A
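The common pattern behind these examples — closing a small set under the available operations until nothing new appears — can be sketched concretely for the group case. The code below is an illustration, not taken from the article: it generates the subgroup of permutations of {0, 1, 2} obtained from a transposition and a 3-cycle, recovering all six elements of the symmetric group S3.

```python
def compose(p, q):
    """Compose permutations given as tuples: (p o q)(i) = p[q[i]]."""
    return tuple(p[q[i]] for i in range(len(q)))

def generated_subgroup(gens):
    """Close the generating set under composition, starting from the identity."""
    identity = tuple(range(len(gens[0])))
    elements = {identity}
    frontier = [identity]
    while frontier:
        g = frontier.pop()
        for s in gens:
            h = compose(s, g)
            if h not in elements:
                elements.add(h)
                frontier.append(h)
    return elements

swap = (1, 0, 2)   # the transposition exchanging 0 and 1
cycle = (1, 2, 0)  # the 3-cycle 0 -> 1 -> 2 -> 0
sym3 = generated_subgroup([swap, cycle])  # all 6 elements of S3
```

Because the group here is finite, closing under left multiplication by the generators suffices; inverses appear automatically as powers of the generators.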
Differential equations
In the study of differential equations, and commonly those occurring in physics, one has the idea of a set of infinitesimal displacements that can be extended to obtain a manifold, or at least, a local part of it, by means of integration. The general concept is of using the exponential map to take the vectors in the tangent space and extend them, as geodesics, to an open set surrounding the tangent point. In this case, it is not unusual to call the elements of the tangent space the generators of the manifold. When the manifold possesses some sort of symmetry, there is also the related notion of a charge or current, which is sometimes also called the generator, although, strictly speaking, charges are not elements of the tangent space.
• Elements of the Lie algebra to a Lie group are sometimes referred to as "generators of the group," especially by physicists.[1] The Lie algebra can be thought of as the infinitesimal vectors generating the group, at least locally, by means of the exponential map, but the Lie algebra does not form a generating set in the strict sense.[2]
• In stochastic analysis, an Itō diffusion or more general Itō process has an infinitesimal generator.
• The generator of any continuous symmetry implied by Noether's theorem, the generators of a Lie group being a special case. In this case, a generator is sometimes called a charge or Noether charge, examples include:
• angular momentum as the generator of rotations,[3]
• linear momentum as the generator of translations,[3]
• electric charge being the generator of the U(1) symmetry group of electromagnetism,
• the color charges of quarks are the generators of the SU(3) color symmetry in quantum chromodynamics,
• More precisely, "charge" should apply only to the root system of a Lie group.
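A minimal numerical illustration of the exponential-map picture above (a sketch for illustration only): exponentiating a multiple of the plane-rotation generator J = [[0, -1], [1, 0]] — the 2×2 angular-momentum generator, up to a factor of i — via a truncated Taylor series yields the corresponding rotation matrix.

```python
import math

def mat_mul(a, b):
    """2x2 matrix product."""
    return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def mat_exp(a, terms=30):
    """Truncated Taylor series exp(a) = sum_n a^n / n! for a 2x2 matrix."""
    result = [[1.0, 0.0], [0.0, 1.0]]  # n = 0 term: identity
    term = [[1.0, 0.0], [0.0, 1.0]]
    for n in range(1, terms):
        term = mat_mul(term, a)                        # term <- term * a
        term = [[x / n for x in row] for row in term]  # ... / n, so term = a^n / n!
        result = [[result[i][j] + term[i][j] for j in range(2)]
                  for i in range(2)]
    return result

theta = math.pi / 2
J = [[0.0, -1.0], [1.0, 0.0]]                 # generator of rotations in the plane
R = mat_exp([[theta * x for x in row] for row in J])  # rotation by 90 degrees
```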
See also
• Generating function
• Lie theory
• Symmetry (physics)
• Particle physics
• Supersymmetry
• Gauge theory
• Field (physics)
References
1. McMahon, D. (2008). Quantum Field Theory. Mc Graw Hill. ISBN 978-0-07-154382-8.
2. Parker, C.B. (1994). McGraw Hill Encyclopaedia of Physics (2nd ed.). Mc Graw Hill. ISBN 0-07-051400-3.
3. Abers, E. (2004). Quantum Mechanics. Addison Wesley. ISBN 978-0-131-461000.
External links
• Generating Sets, K. Conrad
February 2012, 32(2): 467-485. doi: 10.3934/dcds.2012.32.467
Asymptotic estimates for unimodular Fourier multipliers on modulation spaces
Jiecheng Chen 1, , Dashan Fan 2, and Lijing Sun 2,
Department of Mathematics, Zhejiang Normal University, 321004 Jinhua, China
Department of Mathematical Sciences, University of Wisconsin-Milwaukee, Milwaukee, WI 53201, United States, United States
Received August 2010; revised June 2011; published September 2011.
Recently, it has been shown that the unimodular Fourier multipliers $e^{it|\Delta |^{\frac{\alpha }{2}}}$ are bounded on all modulation spaces. In this paper, using the almost orthogonality of projections and some techniques on oscillating integrals, we obtain asymptotic estimates for the unimodular Fourier multipliers $e^{it|\Delta |^{\frac{\alpha }{2}}}$ on the modulation spaces. As applications, we give the grow-up rates of the solutions for the Cauchy problems for the free Schrödinger equation, the wave equation and the Airy equation with the initial data in a modulation space. We also obtain a quantitative form about the solution to the Cauchy problem of the nonlinear dispersive equations.
Keywords: Unimodular multipliers, modulation spaces, Schrödinger equation, wave equation, Airy equation.
Mathematics Subject Classification: Primary: 42B15, 42B35; Secondary: 42C1.
Citation: Jiecheng Chen, Dashan Fan, Lijing Sun. Asymptotic estimates for unimodular Fourier multipliers on modulation spaces. Discrete & Continuous Dynamical Systems - A, 2012, 32 (2) : 467-485. doi: 10.3934/dcds.2012.32.467
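A discrete sketch of how such a multiplier acts (illustrative only; the symbol exp(it|ξ|^α) below plays the role of the paper's e^{it|Δ|^{α/2}}, with the exponent written directly as α): transform, multiply each frequency by a unimodular factor, transform back. Since every factor has modulus one, the ℓ² norm is preserved, mirroring the L² isometry given by Plancherel's theorem.

```python
import cmath
import math

def dft(x):
    n = len(x)
    return [sum(x[m] * cmath.exp(-2j * math.pi * k * m / n) for m in range(n))
            for k in range(n)]

def idft(X):
    n = len(X)
    return [sum(X[k] * cmath.exp(2j * math.pi * k * m / n) for k in range(n)) / n
            for m in range(n)]

def apply_unimodular_multiplier(x, t, alpha):
    """Multiply each Fourier mode by exp(i t |xi|**alpha) and transform back."""
    X = dft(x)
    n = len(X)
    Y = []
    for k, Xk in enumerate(X):
        xi = k if k <= n // 2 else k - n  # signed integer frequency
        Y.append(cmath.exp(1j * t * abs(xi) ** alpha) * Xk)
    return idft(Y)
```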
Enhancement of Optical Parameters for PVA/PEG/Cr2O3 Nanocomposites for Photonics Fields
Musaab Khudhur Mohammed | Mohammed Hashim Abbas* | Ahmed Hashim | Bahaa H. Rabee | Majeed Ali Habeeb | Noor Hamid
Department of Physics, College of Education for Pure Sciences, University of Babylon, Babylon 51002, Iraq
Medical Physics Department, Al-Mustaqbal University College, Babylon 51001, Iraq
[email protected]
In this study, samples were synthesized using the solution casting technique with different contents of chromium oxide nanoparticles (Cr2O3 NPs) in a poly(vinyl alcohol) (PVA) / polyethylene glycol (PEG) matrix. A UV-Vis spectrophotometer was used to record the absorbance spectra in the range of 200-800 nm. The absorption of UV waves improves while the transmittance is reduced when Cr2O3 NPs are added to the polymeric system, which is useful for a number of applications including low-cost UV protection and solar radiation shielding. As the Cr2O3 NPs concentration increased, the optical energy gap for the indirect transitions (allowed and forbidden) decreased. Furthermore, all the optical constants were improved.
Cr2O3, nanocomposites, optical properties, blend, nanoparticles
Polymer nanocomposites have recently gained popularity due to the unique properties that these materials can achieve. Metal [1-3] and semiconductor [4-6] nanoparticles exhibit extraordinary optical and electrical properties, and polymers are regarded as a good host material for these nanoparticles. Because of their high surface-to-bulk ratio, nanoparticles have a considerable impact on the matrix, producing some exceptional properties that are not available in either of the pure materials. More research into the impact of nanoparticles on the properties of a polymer matrix is therefore required to better predict the composite's final properties [7-9]. Lately, polymer/ceramic filler composites have attracted increasing attention owing to their attractive electronic and electrical characteristics; angular acceleration accelerometers, integrated decoupling capacitors, electronic packaging and acoustic emission sensors are several potential fields of application [10]. PVA is a semi-crystalline polymer that offers a wide range of uses owing to the role of its OH groups and hydrogen bonding [11]. Because of its compatibility with the living body, it can also be used as a medical material [12]. In addition, PVA can selectively absorb metallic ions such as copper, palladium, and mercury. PVA has the chemical formula (C2H4O)x, a density of 1.19-1.31 g/cm3, and a melting temperature of 230℃; above 200℃ it degrades rapidly [13]. PEG is a thermoplastic polymer with flexible C–O–C bonds. It also possesses solubility in organic solvents, hydrophilicity, crystallinity, and self-lubricating properties. As a result, PEG is one of the most widely used polymers for the creation and growth of a wide range of vital applications [14]. There are several studies on PEG and PVA nanocomposites for various applications such as energy storage [15-17], antibacterial materials [18] and humidity sensors [19-21].
This paper aims to prepare PVA-PEG-Cr2O3 nanocomposites and to investigate their optical properties.
2. Experimental Work
Nanocomposite films of polyvinyl alcohol (PVA)/polyethylene glycol (PEG) with different contents of chromium oxide (Cr2O3) nanoparticles were prepared by the casting process. The PVA/PEG blend was prepared at a ratio of 70% PVA / 30% PEG by dissolving 1 g in 30 ml of distilled water. The Cr2O3 NPs were added to the (PVA/PEG) blend at ratios of 1%, 2%, and 3%. The optical characteristics of the PVA/PEG/Cr2O3 nanocomposites were measured using a spectrophotometer (UV-18000A-Shimadzu).
3. Results and Discussion
Figure 1 displays the influence of Cr2O3 NPs on the absorbance of the PVA-PEG blend over the wavelength range of 200-800 nm. Because free electrons absorb the incident light, the absorbance of the nanocomposite increases as the concentration increases [22]. This result is in agreement with previous studies [23, 24]. The high absorbance of the nanocomposites in the UV region is due to the photon energy being sufficient to interact with the atoms, which leads to high absorbance [25].
The optical transmittance of the PVA-PEG-Cr2O3 nanocomposites is shown in Figure 2. As this figure shows, the transmittance decreases as the Cr2O3 concentration in the PVA-PEG nanocomposite increases from 1% to 3%.
The absorption coefficient (α) was calculated using the equation [26]:
$\alpha=2.303 \frac{A}{t}$ (1)
where A is the absorbance and t is the specimen thickness.
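For concreteness, Eq. (1) can be evaluated directly; the absorbance and film thickness in the sketch below are hypothetical illustrative values, not measurements from this work.

```python
def absorption_coefficient(absorbance, thickness_cm):
    """Eq. (1): alpha = 2.303 * A / t; 2.303 ~ ln(10) converts base-10 absorbance."""
    return 2.303 * absorbance / thickness_cm

# hypothetical values: absorbance 0.5 for a 100-micrometre (0.01 cm) film
alpha = absorption_coefficient(0.5, 0.01)  # in cm^-1
```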
The absorption coefficient versus photon energy is shown in Figure 3. As the Cr2O3 NPs concentration increases, α increases, owing to the increase in light absorption [27]. The nanocomposites are said to have an indirect energy gap if the value of α is less than 10^4 cm^-1. The polymer blend had a low α, possibly as a result of its low crystallinity [28, 29].
Figure 1. Influence of Cr2O3 NPs on the absorbance of PVA-PEG blend
Figure 2. Optical transmittance of PVA-PEG-Cr2O3 nanocomposites
Figure 3. Absorption coefficient of PVA-PEG-Cr2O3 nanocomposites versus photon energy
The energy gap is calculated using the Tauc relation [24]:
$\alpha h v=B\left(h v-E_g\right)^r$ (2)
where Eg denotes the optical energy gap, r = 2 or 3 corresponds to the allowed or forbidden indirect transition respectively, hν denotes the photon energy, and B is a constant.
The band gap was determined by plotting (αhυ)^1/2 and (αhυ)^1/3 versus hν in Figures 4 and 5. The allowed indirect energy gap decreased from 4 eV for the pure PVA-PEG blend to 3.4 eV for the PVA-PEG-3% Cr2O3 nanocomposite, while the forbidden indirect energy gap decreased from 3.2 eV to 2 eV. The energy gap decreases with increasing Cr2O3 NPs content, which is attributed to the creation of localized levels in the band gap [30, 31]. The energy gap values are listed in Table 1.
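The extrapolation step behind these plots can be sketched as a least-squares line through the linear region of the Tauc plot, with Eg read off as the intercept on the hν axis. The data points below are synthetic values constructed for a known gap of 3.4 eV, not measured values from this work.

```python
def tauc_band_gap(hv, y):
    """Fit y = m*hv + b by least squares and return Eg = -b/m,
    the intercept of the line with the photon-energy axis."""
    n = len(hv)
    sx, sy = sum(hv), sum(y)
    sxx = sum(x * x for x in hv)
    sxy = sum(x * v for x, v in zip(hv, y))
    m = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    b = (sy - m * sx) / n
    return -b / m

# synthetic linear-region points for a known gap of 3.4 eV
hv = [3.6, 3.8, 4.0, 4.2]          # photon energies (eV)
y = [5.0 * (x - 3.4) for x in hv]  # (alpha*hv)^(1/2), arbitrary units
```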
Figure 4. (αhυ)1/2 versus hν of PVA-PEG-Cr2O3 nanocomposites
Table 1. Energy gap values of PVA-PEG-Cr2O3 nanocomposites
Cr2O3 NPs wt.%
Eg(eV)
The extinction coefficient (K) was determined using the following relation [32]:
$K=\alpha \lambda / 4 \pi$ (3)
The extinction coefficient of the (PVA-PEG-Cr2O3) nanocomposites is shown in Figure 6 as a function of wavelength. It is worth noting that K increases as the concentration of Cr2O3 NPs increases, which is attributed to the enhancement of the absorption coefficient upon the addition of Cr2O3 NPs. This result is in agreement with a previous study [33].
Figure 6. Extinction coefficient for (PVA-PEG-Cr2O3) nanocomposites
The refractive index (n) of (PVA-PEG-Cr2O3) nanocomposites was calculated by [34]:
$n=\left(1+R^{1 / 2}\right) /\left(1-R^{1 / 2}\right)$ (4)
The refractive index of the (PVA-PEG-Cr2O3) nanocomposites versus wavelength is shown in Figure 7. As revealed in the figure, the refractive index tends to increase with increasing Cr2O3 NPs concentration in the PVA-PEG film. The reason for this is that the density of the nanocomposites increases along with the Cr2O3 concentration [35, 36].
Figure 7. Refraction index of (PVA-PEG-Cr2O3) nanocomposites versus wavelength
The following equations were used to calculate the real and imaginary parts (ε1 and ε2) of the dielectric constant [37]:
$\varepsilon_1=n^2-k^2$ (5)
$\varepsilon_2=2 n k$ (6)
The variation of ε1 versus wavelength is shown in Figure 8. Because of the low value of K^2, the real part of the dielectric constant mainly depends on n^2 and increases as the concentration of Cr2O3 nanoparticles increases. The change in ε2 versus wavelength is shown in Figure 9. Owing to the relationship between α and K, ε2 depends on the K values, which vary with the absorption coefficient. This result is in agreement with a previous study [38].
Figure 8. Variation of (ε1) versus wavelength
Optical conductivity (σ) was determined using the equation [39]:
$\sigma=\alpha n c / 4 \pi$ (7)
where c denotes the speed of light, n the refractive index, and α the absorption coefficient. Figure 10 shows the optical conductivity of the PVA-PEG-Cr2O3 nanocomposites versus wavelength. The optical conductivity (σ) of the PVA-PEG-Cr2O3 nanocomposite increases as the Cr2O3 content increases.
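Eqs. (3)-(7) can be combined into one small routine that maps a measured absorption coefficient, wavelength, and reflectance to K, n, ε1, ε2 and σ. The sketch below uses hypothetical inputs chosen only to illustrate the computation, not measured values from this work.

```python
import math

C_CM_PER_S = 3.0e10  # speed of light, approximate, in cm/s

def optical_constants(alpha_cm, wavelength_cm, reflectance):
    """Eqs. (3)-(7): extinction coefficient, refractive index,
    dielectric constants and optical conductivity."""
    k = alpha_cm * wavelength_cm / (4 * math.pi)        # Eq. (3)
    r = math.sqrt(reflectance)
    n = (1 + r) / (1 - r)                               # Eq. (4)
    eps1 = n * n - k * k                                # Eq. (5)
    eps2 = 2 * n * k                                    # Eq. (6)
    sigma = alpha_cm * n * C_CM_PER_S / (4 * math.pi)   # Eq. (7)
    return k, n, eps1, eps2, sigma

# hypothetical inputs: alpha = 100 cm^-1 at 500 nm with 4% reflectance
K, N, E1, E2, SIGMA = optical_constants(100.0, 500e-7, 0.04)
```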
Figure 10. Optical conductivity of PVA-PEG-Cr2O3 nanocomposites versus wavelength
4. Conclusions
This paper included the preparation of (PVA-PEG-Cr2O3) nanocomposites and the study of their optical properties. The obtained results indicated an improvement in the optical properties of the (PVA-PEG-Cr2O3) nanocomposites when different percentages of Cr2O3 NPs were added. Therefore, the (PVA-PEG-Cr2O3) nanocomposite can be used in different applications such as photodetectors and low-cost UV protection.
[1] Akamatsu, K., Takei, S., Mizuhata, M., Kajinami, A., Deki, S., Takeoka, S., Fujii, M., Hayashi, S., Yamamoto, K. (2000). Preparation and characterization of polymer thin films containing silver and silver sulfide nanoparticles. Thin Solid Films, 359(1): 55-60. http://dx.doi.org/10.1016/S0040-6090(99)00684-7
[2] Zeng, R., Rong, M.Z., Zhang, M.Q., Liang, H.C., Zeng, H.M. (2002). Laser ablation of polymer-based silver nanocomposites. Appl. Surf. Sci., 187(3-4): 239247. https://doi.org/10.1016/S0169-4332(01)00991-6
[3] Zhu, Y.J., Qian, Y.T., Li, X.J., Zhang, M.W. (1998). A nonaqueous solution route to synthesis of polyacrylamide-silver nanocomposites at room temperature. Nanostruct. Mater., 10(4): 673-678. https://doi.org/10.1016/S0965-9773(98)00096-8
[4] Chen, W.M., Yuan, Y., Yan, L.F. (2000). Preparation of organic/inorganic nanocomposites with polyacrylamide (PAM) hydrogel by 60 Co γ irradiation. Mater. Res. Bull., 35(5): 807-812. https://doi.org/10.1016/S0025-5408(00)00266-X
[5] Zhang, Z.P., Han, M.Y. (2003). One-step preparation of size-selected and well-dispersed silver nanocrystals in polyacrylonitrile by simultaneous reduction and polymerization. J. Mater. Chem. Commun., 13(4): 641. https://doi.org/10.1039/B212428A
[6] Godovsky, D.Y. (2000). Device Applications of Polymer-Nanocomposites. In: Biopolymers PVA Hydrogels, Anionic Polymerisation Nanocomposites. Advances in Polymer Science, vol 153. Springer, Berlin, Heidelberg. https://doi.org/10.1007/3-540-46414-X_4
[7] Qian, X.F., Yin, J., Huang, J.C., Yang, Y.F., Guo, X.X., Zhu, Z.K. (2001). The preparation and characterization of PVA/Ag2S nanocomposite. Mater. Chem. Phys., 68(1-3): 95-97. https://doi.org/10.1016/S0254-0584(00)00288-1
[8] Kumar, R.V., Elgamiel, R., Diamant, Y., Gedanken, A., Noewig, J. (2001). Sonochemical preparation and characterization of nanocrystalline copper oxide embedded in poly (vinyl alcohol) and its effect on crystal growth of copper oxide. Langmuir, 17(5): 1406-1410. https://doi.org/10.1021/la001331s
[9] Kumar, R.V., Palchik, O., Koltypin, Y., Diamant, Y., Gedanken, A. (2002). Sonochemical synthesis and characterization of Ag2S/PVA and CuS/PVA nanocomposite. Ultrason. Sonochem., 9(2): 65-70. https://doi.org/10.1016/S1350-4177(01)00100-6
[10] Al-Ramadhan, Z., Hashim, A., Kadham Algidsawi, A.J. (2011). The D.C electrical properties of (PVC-Al2O3) composites. AIP Conference Proceedings, 1400(1): 180. https://doi.org/10.1063/1.3663109
[11] Pacansky, J., Schneider, S. (1990). Electron beam chemistry in solid films of poly (vinyl alcohol): Exposures under vacuum and under nitrogen at atmospheric pressure; irradiation monitored using infrared spectroscopy. J. Phys. Chem., 94(7): 3166-3179. https://doi.org/10.1021/j100370a077
[12] Salim, O.M., Abdullah, H.J., Hamzah, M.Q., Tuama, A.N., Hasan, N.N., Roslan, M.S., Agam, M.A. (2019). Synthesis, characterization, and properties of polystyrene/SiO2 nanocomposite via sol-gel process. AIP Conference Proceedings, 2151(1): 020034. https://doi.org/10.1063/1.5124664
[13] Lei, R., Jie, X., Jun, X., Ruiyum, Z. (1994). Structure and properties of polyvinyl alcohol amidoxime chelate fiber. J. Appl. Polym. Sci., 53(3): 325-329. https://doi.org/10.1002/app.1994.070530309
[14] Falqi, F.H., Bin-Dahman, O.A., Hussain, M., Al-Harthi, M.A. (2018). Preparation of miscible PVA/PEG blends and effect of graphene concentration on thermal, crystallization, morphological, and mechanical properties of PVA/PEG (10wt %) blend. Int. J. Polym. Sci., 2018: 1-10. https://doi.org/10.1155/2018/8527693
[15] Rashid, F.L., Hadi, A., Al-Garah, N.H., Hashim, A. (2018). Novel phase change materials, MgO nanoparticles, and water based nanofluids for thermal energy storage and biomedical applications. International Journal of Pharmaceutical and Phytopharmacological Research, 8(1).
[16] Agool, I.R., Kadhim, K.J., Hashim, A. (2017). Synthesis of (PVA-PEG-PVP-ZrO2) nanocomposites for energy release and gamma shielding applications. International Journal of Plastics Technology, 21(2): 444-453. https://doi.org/10.1007/s12588-017-9196-1
[17] Agool, I.R., Kadhim, K.J., Hashim, A. (2016). Preparation of (polyvinyl alcohol–polyethylene glycol–polyvinyl pyrrolidinone–titanium oxide nanoparticles) nanocomposites: electrical properties for energy storage and release. International Journal of Plastics Technology, 20(1): 121-127. https://doi.org/10.1007/s12588-016-9144-5
[18] Kadhim, K.J., Agool, I.R., Hashim, A. (2016). Synthesis of (PVA-PEG-PVP-TiO2) nanocomposites for antibacterial application. Materials Focus, 5(5): 436-439. https://doi.org/10.1166/mat.2016.1371
\begin{document}
\title{Intrinsic Isometric Embeddings of Pro-Euclidean Spaces}
\author{B. Minemyer}
\address{Division of Mathematics, Alfred University, Alfred, New York 14802}
\email{[email protected]}
\date{November 27, 2013.}
\keywords{Differential geometry, Discrete geometry, Metric Geometry, Euclidean polyhedra, Polyhedral Space, intrinsic isometry, isometric embedding}
\begin{abstract}
In \cite{Petrunin1} Petrunin proves that a metric space $\mathcal{X}$ admits an intrinsic isometry into $\mathbb{E}^n$ if and only if $\mathcal{X}$ is a pro-Euclidean space of rank at most $n$. He then shows that either case implies that $\mathcal{X}$ has covering dimension $\leq \, n$. In this paper we extend this result to include embeddings. Namely, we first prove that any pro-Euclidean space of rank at most $n$ admits an intrinsic isometric embedding into $\mathbb{E}^{2n+1}$. We then discuss how Petrunin's result implies a partial converse to this result.
\end{abstract}
\maketitle
\section{Introduction}\label{Introduction}
In \cite{Petrunin1} Petrunin proves the following Theorem:
\begin{theorem}[Petrunin]\label{Petrunin}
A compact metric space $\mathcal{X}$ admits an intrinsic\footnote{For the definition of an intrinsic isometry, please see \cite{Petrunin1}} isometry into $\mathbb{E}^n$ if and only if $\mathcal{X}$ is a pro-Euclidean space of rank at most $n$. Either of these statements implies that dim($\mathcal{X}$) $\leq$ $n$ where \text{dim}($\mathcal{X}$) denotes the covering dimension of $\mathcal{X}$.
\end{theorem}
A metric space $\mathcal{X}$ is called a \emph{pro-Euclidean space of rank at most $n$} if it can be represented as an inverse limit of a sequence of $n$-dimensional Euclidean polyhedra $\{ \mathcal{P}_i \}$. In this definition, the inverse limit is taken in the category with objects metric spaces and morphisms short\footnote{1-Lipschitz} maps. So the limit $\displaystyle{\lim_{\longleftarrow}\mathcal{P}_i = \mathcal{X}}$ means that the sequence $\{ \mathcal{P}_i \}$ converges to $\mathcal{X}$ in both the topological and metric sense. The morphisms being short maps means that \emph{all} maps involved in the inverse system are short, including the projection maps.
In \cite{Minemyer2} the following Theorem is proved:
\begin{theorem}[M]\label{Minemyer}
Let $\mathcal{P}$ be an $n$-dimensional Euclidean polyhedron, let $f:\mathcal{P} \rightarrow \mathbb{E}^N$ be a short map, and let $\epsilon>0$ be arbitrary. Then there exists an intrinsic isometric embedding $h: \mathcal{P} \rightarrow \mathbb{E}^N$ which is an $\epsilon$-approximation of $f$, meaning $ |f(x) - h(x)| < \epsilon $ for all $x \in \mathcal{P}$, provided $N \geq 2n + 1$.
\end{theorem}
Combining Theorem \ref{Minemyer} with some methods used by Nash in \cite{Nash1} and Petrunin in \cite{Petrunin1} we can prove the following:
\begin{theorem}\label{Main Theorem}
Let $\mathcal{X}$ be a compact pro-Euclidean space of rank at most $n$. Then $\mathcal{X}$ admits an intrinsic isometric embedding into $\mathbb{E}^{2n+1}$.
\end{theorem}
Theorem \ref{Main Theorem} extends half of Theorem \ref{Petrunin} to the case of intrinsic isometric embeddings. What Theorem \ref{Main Theorem} does \emph{not} prove is that, if $\mathcal{X}$ admits an intrinsic isometric embedding into $\mathbb{E}^{2n+1}$, then $\mathcal{X}$ is a pro-Euclidean space of rank at most $n$. The (main) reason that Theorem \ref{Main Theorem} does not say this is because it is not true! If $\mathcal{X}$ is a pro-Euclidean space with rank at most $n$, then dim($\mathcal{X}$) $\leq \, n$. But there are many metric spaces with covering dimension greater than $n$ that admit intrinsic isometric embeddings into $\mathbb{E}^{2n+1}$ (a simple example is the $2n$-sphere).
A metric space $\mathcal{X}$ is a \emph{pro-Euclidean space of finite rank} if $\mathcal{X}$ can be written as an inverse limit of a sequence of Euclidean polyhedra $\{ \mathcal{P}_i \}$ and if there exists a natural number $N$ such that dim($\mathcal{P}_i$) $\leq \, N$ for all $i$. Again we require that the inverse limit take place in the category of metric spaces with short maps. Then what we can say by using Theorems \ref{Petrunin} and \ref{Main Theorem} is the following:
\begin{theorem}\label{Main Theorem 2}
A compact metric space $\mathcal{X}$ admits an intrinsic isometric embedding into $\mathbb{E}^N$ for some $N$ if and only if $\mathcal{X}$ is a pro-Euclidean space of finite rank.
\end{theorem}
\section{Proof of Theorem \ref{Main Theorem}}
\begin{proof}[Proof of Theorem \ref{Main Theorem}]
Let $\mathcal{X}$ be a pro-Euclidean space of rank at most $n$ and let $(\mathcal{P}_i, \varphi_{j, i})$ be the inverse system associated to $\mathcal{X}$ where $\mathcal{P}_i$ is an $n$-dimensional Euclidean polyhedron for all $i$. For each $i$ let $\psi_i: \mathcal{X} \rightarrow \mathcal{P}_i$ be the projection map. Remember that every map associated with this system is short.
Given $\epsilon_{i + 1} > 0$ and a pl intrinsic isometric embedding $f_i:\mathcal{P}_i \rightarrow \mathbb{E}^{2n + 1}$, by Theorem \ref{Minemyer} there exists a pl intrinsic isometric embedding $f_{i + 1}: \mathcal{P}_{i + 1} \rightarrow \mathbb{E}^{2n + 1}$ such that
$$ |f_{i + 1}(x) - (f_i \circ \varphi_{i + 1, i})(x)|_{\mathbb{E}^{2n + 1}} < \epsilon_{i + 1} $$
\noindent for all $x \in \mathcal{P}_{i + 1}$.
Then define $h_i:= f_i \circ \psi_i$ for all $i$. To keep track of all of the maps involved, please see Figure \ref{tenthfig}. What needs to be shown is that the values for $\epsilon_i$ can be chosen in such a way that the sequence $\{ h_i \}_{i = 0}^{\infty}$ converges uniformly to an intrinsic isometric embedding. The fact that the sequence $\{ \epsilon_i \}_{i = 1}^{\infty}$ can be chosen so that the sequence $\{ h_i \}_{i = 0}^{\infty}$ converges uniformly to an intrinsic isometry is identical to the proof by Petrunin in \cite{Petrunin1} and is omitted here.
To see that we can choose the sequence $\{ \epsilon_i \}_{i = 1}^{\infty}$ so that the sequence $\{ h_i \}_{i = 0}^{\infty}$ converges to an embedding just consider the collection of sets
$$ \Omega_i := \{ (x, x') \in \mathcal{X} \times \mathcal{X} \, | \, d_{\X} (x, x') \geq 2^{-i} \} . $$
Since $\displaystyle{\mathcal{X} = \lim_{\longleftarrow}\mathcal{P}_i}$ and because $\Omega_i$ is compact, for every $i \in \mathbb{N}$ there exists $i'$ such that $\psi_{i'}(x) \neq \psi_{i'}(x')$ for all $(x, x') \in \Omega_i$. For every $i$ choose $i' > (i - 1)'$ which satisfies the above. Thus $h_{i'}(x) \neq h_{i'}(x')$ for all $(x, x') \in \Omega_i$. Then we let
$$\delta_i := \text{inf} \{ |h_{i'}(x) - h_{i'}(x')|_{\mathbb{E}^{2n + 1}} \, | \, (x, x') \in \Omega_i \} > 0. $$
If we choose $\epsilon_i < \frac{1}{4} \text{min} \{ \delta_i , \epsilon_{i-1} \} $ then no point pair in $\Omega_i$ can come together in the limit. Eventually any pair of distinct points is contained in some $\Omega_i$, which completes the proof.
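This separation can be made quantitative. Assuming in addition that $\epsilon_{i'+1} \le \delta_i / 4$ (which the recursion permits, since $i'$ is known before $\epsilon_{i'+1}$ is chosen), the geometric decay $\epsilon_j < \epsilon_{j-1}/4$ gives
$$ \sum_{j > i'} \epsilon_j \, < \, \epsilon_{i'+1} \sum_{l \geq 0} 4^{-l} \, = \, \frac{4}{3} \, \epsilon_{i'+1} \, \leq \, \frac{\delta_i}{3}, $$
so for every $(x, x') \in \Omega_i$ and every $k > i'$,
$$ |h_k(x) - h_k(x')| \, \geq \, |h_{i'}(x) - h_{i'}(x')| - 2 \sum_{j > i'} \epsilon_j \, \geq \, \delta_i - \frac{2 \delta_i}{3} \, = \, \frac{\delta_i}{3} \, > \, 0, $$
and this lower bound persists in the limit.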
\begin{figure}
\caption{Diagram for the proof of Theorem \ref{Main Theorem}.}
\label{tenthfig}
\end{figure}
\end{proof}
\end{document}
\begin{document}
\newcommand{\begin{equation}}{\begin{equation}}
\newcommand{\end{equation}}{\end{equation}}
\newcommand{\beta}{\beta}
\newcommand{\alpha}{\alpha}
\newcommand{\lambda_\alpha}{\lambda_\alpha}
\newcommand{\lambda_\beta}{\lambda_\beta}
\newcommand{|\Omega|}{|\Omega|}
\newcommand{|D|}{|D|}
\newcommand{\Omega}{\Omega}
\newcommand{H^1_0(\Omega)}{H^1_0(\Omega)}
\newcommand{L^2(\Omega)}{L^2(\Omega)}
\newcommand{\lambda}{\lambda}
\newcommand{\varrho}{\varrho}
\newcommand{\chi_{D}}{\chi_{D}}
\newcommand{\chi_{D^c}}{\chi_{D^c}}
\def\mathop{\,\rlap{--}\!\!\int}\nolimits{\mathop{\,\rlap{--}\!\!\int}\nolimits}
\newtheorem{thm}{Theorem}[section]
\newtheorem{cor}[thm]{Corollary}
\newtheorem{lem}[thm]{Lemma}
\newtheorem{prop}[thm]{Proposition}
\newtheorem{assumption}{Assumption}
\theoremstyle{definition}
\newtheorem{defn}{Definition}[section]
\newtheorem{exam}{Example}[section]
\theoremstyle{remark}
\newtheorem{rem}{Remark}[section]
\numberwithin{equation}{section}
\renewcommand{\thesection.\arabic{equation}}{\thesection.\arabic{equation}}
\numberwithin{equation}{section}
\title[ Singularly perturbed elliptic systems ]{On a Class of Singularly Perturbed Elliptic Systems with Asymptotic Phase Segregation}
\author[Bozorgnia, Burger ]{Farid Bozorgnia, Martin Burger}
\address{Department of Mathematics, Instituto Superior T\'{e}cnico, Lisbon. } \email{[email protected]}
\address{Department Mathematik, Friedrich-Alexander Universit\"at Erlangen-Nürnberg, Erlangen (FAU) } \email{ [email protected] }
\thanks{The corresponding author, F. Bozorgnia was supported by the Portuguese National Science Foundation through FCT fellowships SFRH/BPD/33962/2009}
\date{\today}
\begin{abstract}
This work is devoted to the study of a class of singularly perturbed elliptic systems and their singular limit to a phase segregating system. We prove existence and uniqueness and study the asymptotic behaviour, with convergence to a limiting problem, as the interaction rate tends to infinity. The limiting problem is a free boundary problem such that at each point in the domain at least one of the components is zero, which implies that all components cannot coexist simultaneously. We present a novel method which provides an explicit solution of the limiting problem for a special choice of parameters. Moreover, we present some numerical simulations of the asymptotic problem.
\end{abstract}
\maketitle
\textbf{Keywords}: Singularly perturbed system, segregation, free boundary problems, numerical approximation. \\
2010 MSC: 58J37, 35R35, 34K10.
\section{Introduction and problem setting}
In order to model strong interaction between multiple components with reaction and diffusion, different models have been proposed. Among these, the adjacent segregation models have been extensively studied from different points of view; for theoretical aspects we refer to \cite{CL, CR, DancerDu, Ya}. Most of the works are related to the case of two components, while \cite{CL} considers an extension to multiple components with strict segregation. Here we consider a different extension to multiple components that is still consistent with the other models in the case of two components; the segregation behaviour is of a different type for multiple components, however.
Let $\Omega $ be a bounded domain with $C^{1, \alpha}$ smooth boundary. The model describes the steady state of $m$ species diffusing and interacting in $\Omega.$ Let $u_{i}(x)$ denote the population density of the $i^{\textrm{th}}$ component. We study the following singular elliptic system introduced in \cite{CR}, with unknowns $U^{\varepsilon}=(u_{1}^{\varepsilon}, \cdots, u_{m}^{\varepsilon})$ which satisfy
\begin{equation}\label{s0} \left \{ \begin{array}{llll} \Delta u_{i}^{\varepsilon}= \frac{ A_{i}(x) }{\varepsilon} F(u_{1}^{\varepsilon},\cdots, u_{m}^{\varepsilon}) & \textrm{ in } \Omega,\\ u_{i}^{\varepsilon} \ge 0 & \textrm{ in } \Omega,\\ u_{i} =\phi_{i} \, & \textrm{ on } \partial \Omega,\\
\end{array} \right. \end{equation}
for $i=1, \cdots, m$. Here the function $F:\mathbb{R}^m \rightarrow \mathbb{R}$ is given by \[ F(u_1, \cdots ,u_m)= \prod\limits_{j=1}^{m} u_{j}^{\alpha_j}, \]
for an $m$-tuple $ (\alpha_1, \cdots, \alpha_m )$ with $\alpha_i \ge 1.$
The main assumptions on boundary values and data are as below: \begin{assumption}
The boundary data $\phi_{i}$ are non-negative $C^{1, \alpha}$ functions with the following partial segregation property: \[ \prod_{i=1}^{m} \phi_{i}=0 \quad \textrm{on} \,\, \partial \Omega. \] \end{assumption}
\begin{assumption}
The functions $ A_{i}(x)$ are smooth, positive and satisfy
$$0< A_{i}(x) \le \sum_{j\neq i} A_{j}(x) \quad \textrm{in} \, \, \Omega.$$
\end{assumption}
The system (\ref{s0}) and the limiting system for $\epsilon \downarrow 0$ appear in the theory of flames and are related to a model called the Burke-Schumann approximation.
The main assumption in the Burke-Schumann model is that oxidizer and reactant mix on a thin sheet and the flame occurs precisely there. A way to justify the underlying assumption is to introduce a large parameter called the Damk\"{o}hler number, denoted by $D_a$, which measures the intensity of the reaction (see \cite{Will}). Then a chemical reaction is described by \[ \textrm{Oxidizer} +\textrm{ Fuel} \rightarrow \textrm{Products}. \] Let $Y_O$ and $Y_F$, respectively, denote the mass fractions of the oxidizer and the fuel; then they satisfy the following system \begin{equation*} \left \{ \begin{array}{llll} -\Delta Y_O + v(x)\cdot\nabla Y_O= D_a \, Y_O \, Y_F & \text{ in } \Omega,\\ -\Delta Y_F + v(x)\cdot\nabla Y_F= D_a \, Y_O \, Y_F & \text{ in } \Omega,
\end{array} \right. \end{equation*} with given incompressible velocity field $v$ and a Dirichlet boundary condition on $\partial \Omega$.
In \cite{CR} a general H\"{o}lder estimate for a class of singular perturbed elliptic system (\ref{s0}) is shown. The authors applied this estimate to the well-known Burke-Schumann approximation in flame theory. Also they study the classical cases i,e., equidiffusional case with high activation energy approximation, non- equidiffusional case, and to nonlinear diffusion models. The limiting problems are nonlinear elliptic equations; they have H\"{o}lder or Lipschitz maximal global regularity.
We point out that L. Caffarelli and F. Lin in \cite{CL} studied the following system with different coupling term
\begin{align}\label{f20} \begin{cases} \Delta u_{i}^{\varepsilon}= \frac{ 1 }{\varepsilon} u_{i}^{\varepsilon} \sum\limits_{j \neq i} u_{j}^{\varepsilon} (x)\qquad\qquad & \text{ in } \Omega,\\ u_{i}^{\varepsilon} \ge 0,\; & \text{ in } \Omega,\\ u^{\varepsilon}_{i}(x) =\phi_{i}(x) & \text{ on} \, \partial \Omega,\\
i=1,\cdots, m,
\end{cases} \end{align} where the boundary values satisfy \[ \phi_{i}(x) \cdot \phi_{j}(x)=0, \quad i \neq j \textrm{ on the boundary}. \]
\begin{rem}
In system (\ref{s0}) choosing $m=2$ and
\[
A_{i}(x)=1, \quad \alpha_{i}=1, \, i=1,2,
\]
we get system (\ref{f20}) for $m=2$, which has been studied extensively. Thus in (\ref{s0}) we are interested in the case $ m\ge 3. $
\end{rem}
To see different theoretical aspects of the system (\ref{f20}) we refer to \cite{CL, Ya, W} and references therein. In \cite{CL} the authors study the asymptotic limit as $\varepsilon $ tends to zero in system (\ref{f20}) and show that the limiting case yields pairwise segregation. Furthermore, it is shown that away from a closed subset of Hausdorff dimension at most $n-2 $ the free interfaces between the various components are, in fact, $C^{1,\alpha}$ smooth hypersurfaces.
For the numerical approximation of the system (\ref{f20}) we refer to \cite{BA,Bozorg1}. In \cite{BA} the authors propose a numerical scheme for a class of reaction-diffusion systems with $m$ densities having disjoint supports, governed by a minimization problem. The proposed numerical scheme is applied to the spatial segregation limit of diffusive Lotka-Volterra models in the presence of high competition and inhomogeneous Dirichlet boundary conditions. In \cite{AA} a proof of convergence of the finite difference scheme for a general class of spatial segregation reaction-diffusion systems is given.
This work is devoted to existence and uniqueness results for system (\ref{s0}), as well as to a study of the qualitative properties of solutions to (\ref{s0}) as $\varepsilon$ tends to zero. A particular novelty of the current work is to provide an explicit solution for an arbitrary number of components $m$ when the parameter $\varepsilon$ tends to zero in the following system
\begin{equation}\label{s1} \left \{ \begin{array}{lll} \Delta u_{i}^{\varepsilon}= \frac{A_{i}(x) }{\varepsilon} \prod\limits_{j=1}^{m} u_{j}^{\varepsilon} & \text{ in } \Omega,\\ u_{i}^{\varepsilon} \ge 0, & \text{ in } \Omega,\\ u_{i}(x) =\phi_{i}(x) \, & \text{ on } \partial \Omega,
\end{array} \right. \end{equation} in the cases where the functions $A_{i}(x)$ coincide or are constants.
The outline of this paper is as follows: Section 2 contains the proof of existence and uniqueness for system (\ref{s0}). Section 3 deals with the limiting case as $\varepsilon$ tends to zero. In Section 4 we give an explicit solution for the limiting case together with a rate of convergence. Section 5 provides some numerical simulations of the singular limit.
\section{ Analysis of the model for fixed $\varepsilon$}
In this section we prove existence and uniqueness of the solution of System (\ref{s0}) for fixed $\varepsilon$. The proof is constructive, and we implement it to obtain a numerical approximation of (\ref{s1}).
Consider the following related time dependent parabolic system \begin{equation}\label{P1} \left \{ \begin{array}{llll} \frac{ \partial u_{i}^{\varepsilon} }{\partial t} - \Delta u_{i}^{\varepsilon}= - \frac{ A_{i}(x) }{\varepsilon} F(u_{1}^{\varepsilon},\cdots, u_{m}^{\varepsilon}) & \text{ in } \Omega \times (0, T)\\ u_{i}^{\varepsilon} (\cdot, 0)= u_{i0} & \text{ in } \Omega,\\ u_{i}(x,t) =\phi_{i}(x) \, & \text{ on } \partial \Omega\times[0, T),\\
\end{array} \right. \end{equation}
where in (\ref{P1}) the initial values $ u_{i0}, \, i=1, \cdots, m$ are non-negative and compatible with the boundary data. Then by Theorem 2.1 in \cite{EE} we obtain
\[
u_{i}^{\varepsilon}(x, t) \ge 0, \quad t>0.
\]
Also it is straightforward to show that as $t$ tends to infinity
\[
u_{i}^{\varepsilon}(x, t) \rightarrow u_{i}^{\varepsilon}(x),
\]
with $ u_{i}^{\varepsilon}(x)$ being the solution of (\ref{s0}), see \cite{CDH, MS}.
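Before turning to the analysis, the relaxation (\ref{P1}) can be illustrated numerically. The following is a minimal sketch (not part of the argument), assuming $\Omega=(0,1)$, $A_i\equiv 1$, all $\alpha_i=1$, $m=3$, and a hypothetical choice of partially segregated boundary data; an explicit finite-difference scheme is run until the product $u_1\cdots u_m$ becomes small throughout $\Omega$, in line with the segregation behaviour studied below.

```python
def relax(m=3, n=51, eps=1e-3, steps=5000):
    # Explicit finite differences for the parabolic system (P1) on (0,1)
    # with A_i = 1 and F = u_1 * ... * u_m (all alpha_i = 1).
    # The boundary data below are hypothetical but partially segregated:
    # their product vanishes at both endpoints.
    h = 1.0 / (n - 1)
    dt = 0.2 * h * h                     # stable for diffusion and reaction
    phi0 = [1.0, 0.0, 0.5][:m]          # values at x = 0
    phi1 = [0.0, 1.0, 0.5][:m]          # values at x = 1
    # initial data: linear interpolation of the boundary values
    u = [[phi0[i] + (phi1[i] - phi0[i]) * k * h for k in range(n)]
         for i in range(m)]
    for _ in range(steps):
        prod = [1.0] * n                 # pointwise product u_1 ... u_m
        for ui in u:
            for k in range(n):
                prod[k] *= ui[k]
        for i in range(m):
            ui = u[i]
            new = ui[:]                  # boundary nodes stay fixed
            for k in range(1, n - 1):
                lap = (ui[k + 1] - 2.0 * ui[k] + ui[k - 1]) / (h * h)
                new[k] = max(ui[k] + dt * (lap - prod[k] / eps), 0.0)
            u[i] = new
    return u
```

With the parameters above the scheme is stable ($dt \le h^2/2$ and $dt/\varepsilon \ll 1$), and since boundary nodes are never updated the Dirichlet data are preserved exactly.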
Let $(u_{1}^{\varepsilon}, \cdots ,u_{m}^{\varepsilon})$ be a positive solution of the system (\ref{s1}); then \[
u_{i}^{\varepsilon} \le M, \quad i=1,\cdots ,m,
\]
where \[ M= \underset{i=1, \cdots, m}{\max}\,\underset{ x\in \partial \Omega}{\max}\, \phi_{i}(x). \]
We denote the harmonic extension of the boundary data $\phi_{i}$ by $ u_{i}^{0}$. We multiply the equation \[ \Delta (u_{i}^{\varepsilon}- u_{i}^{0}) = \frac{A_i(x) }{\varepsilon} F(u_{1}^{\varepsilon}, \cdots, u_{m}^{\varepsilon}) \] by $(u^{\varepsilon}_{i} - u_{i}^{0})^{+}$, where $u^{+}(x)=\max(u(x), 0).$ Then integrating by parts gives
\[
-\int_{\Omega} |\nabla( u_{i}^{\varepsilon}- u_{i}^{0} )^{+} |^{2} dx= \int_{\Omega}\frac{A_i(x) }{\varepsilon} ( u_{i}^{\varepsilon} - u_{i}^{0})^{+} F(u_{1}^{\varepsilon}, \cdots, u_{m}^{\varepsilon})\, dx. \]
Note that the integrand of the right-hand side is non-negative and
\[
u_{i}^{\varepsilon}- u^{0}_{i}= 0 \quad \text{on } \partial\Omega.
\]
Hence $ \int_{\Omega} |\nabla( u_{i}^{\varepsilon}- u_{i}^{0} )^{+} |^{2}\, dx= 0$, which implies
\[
u_{i}^{\varepsilon} \le u_{i}^ {0} \quad \textrm{in} \, \Omega.
\]
A standard maximum and nonnegativity principle for elliptic equations (cf. \cite{schaefer}) yields the following result, which we use in the sequel.
\begin{lem}\label{sysn1} Let $u \in H^1(\Omega)$ be a weak solution of the system
\begin{equation} \left \{ \begin{array}{lll} \Delta u = a \, u^{\alpha} & \text{ in } \Omega,\\ u =\phi \, & \text{ on } \partial \Omega,
\end{array} \right. \end{equation} with $a $ and $\phi$ bounded and nonnegative and $\alpha\geq 1$. Then \[ 0\le u \le M, \] where \[ M= \underset{ x\in \partial \Omega}{\max}\, \phi(x). \] \end{lem}
In Theorem \ref{sun0} below we show the existence of nonnegative solutions to the original system. The main idea of the proof is to construct sub- and supersolutions, decoupling the system in an iterative way, and to exploit the uniform $L^\infty$ bounds; see also the proof in \cite{W} of uniqueness of the solution for system (\ref{f20}).
\begin{thm}\label{sun0} For each $\varepsilon >0, $ there exists a unique nonnegative solution $$(u_{1}^{\varepsilon},\cdots, u_{m}^{\varepsilon}) \in H^1(\Omega)^m \cap L^\infty(\Omega)^m$$ of the system (\ref{s0}).
\end{thm}
\begin{proof}
Without loss of generality, in the proof we set $ \alpha_{i} =1$, i.e.,
\[ F(u_1, \cdots ,u_m)= \prod\limits_{j=1}^{m} u_{j}. \]
To start, consider the harmonic extension $u_{i}^{0}$ given by
\begin{equation}\label{sys2}
\left \{
\begin{array}{llll}
- \Delta u_{i}^{0} = 0 & \text{ in } \Omega,\\
u_{i}^{0} =\phi_{i} & \text{on } \partial \Omega.
\end{array}
\right.
\end{equation} Next, given $ u_{i}^{k}$ consider the solution of the following linear system
\begin{equation}\label{sy4}
\left \{
\begin{array}{lllll}
\Delta u_{i}^{k+1}=\frac{A_{i}(x)}{\varepsilon} \, \frac{
u_{1}^{k} \cdots u_{i-1}^{k}u_{i}^{k+1} u_{i+1}^{k} \cdots u_{m}^{k} \, + \, u_{1}^{k+1} \cdots u_{i-1}^{k+1} u_{i}^{k+1} u_{i+1}^{k} \cdots u_{m}^{k}}{2} & \text{ in } \Omega,\\
u_{i}^{k+1}(x) =\phi_{i}(x) & \text{ on } \partial \Omega.\\
\end{array}
\right.
\end{equation} Note that we can successively solve the equations for increasing $i$ due to the triangular structure and always obtain a problem of the form considered in Lemma \ref{sysn1}; hence the uniform bounds apply.
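For concreteness, one sweep of the scheme (\ref{sy4}) can be carried out numerically in one space dimension, where each linear problem is a two-point boundary value problem solvable by tridiagonal (Thomas) elimination. The sketch below is a minimal illustration, assuming $\Omega=(0,1)$, $A_i\equiv 1$, and hypothetical boundary data; the discrete operator is an M-matrix, so the iterates stay non-negative, and the first iterate satisfies $u_i^1 \le u_i^0$, matching the comparison argument used in this proof.

```python
def thomas(sub, diag, sup, rhs):
    # Tridiagonal elimination; sub[k] multiplies x[k-1], sup[k] multiplies x[k+1].
    n = len(diag)
    c, d = [0.0] * n, [0.0] * n
    c[0] = sup[0] / diag[0]
    d[0] = rhs[0] / diag[0]
    for k in range(1, n):
        piv = diag[k] - sub[k] * c[k - 1]
        c[k] = sup[k] / piv if k < n - 1 else 0.0
        d[k] = (rhs[k] - sub[k] * d[k - 1]) / piv
    x = [0.0] * n
    x[-1] = d[-1]
    for k in range(n - 2, -1, -1):
        x[k] = d[k] - c[k] * x[k + 1]
    return x

def bvp(c, a, b, h):
    # Centered differences for u'' = c(x) u on (0,1), u(0) = a, u(1) = b.
    n = len(c)
    m = n - 2                                  # number of interior nodes
    sub = [1.0 / h ** 2] * m
    sup = [1.0 / h ** 2] * m
    diag = [-2.0 / h ** 2 - c[k + 1] for k in range(m)]
    rhs = [0.0] * m
    rhs[0] -= a / h ** 2
    rhs[-1] -= b / h ** 2
    return [a] + thomas(sub, diag, sup, rhs) + [b]

def iterate(m=3, n=51, eps=1e-3, sweeps=10):
    # One sweep of (sy4): u_i^{k+1} solves a linear problem whose coefficient
    # averages products of old and already-updated iterates.
    h = 1.0 / (n - 1)
    phi0 = [1.0, 0.0, 0.5][:m]
    phi1 = [0.0, 1.0, 0.5][:m]
    u_old = [[phi0[i] + (phi1[i] - phi0[i]) * k * h for k in range(n)]
             for i in range(m)]                # u^0: harmonic extensions
    for _ in range(sweeps):
        u_new = []
        for i in range(m):
            c = []
            for k in range(n):
                p_old, p_mix = 1.0, 1.0
                for j in range(m):
                    if j != i:
                        p_old *= u_old[j][k]
                for j in range(i):
                    p_mix *= u_new[j][k]
                for j in range(i + 1, m):
                    p_mix *= u_old[j][k]
                c.append((p_old + p_mix) / (2.0 * eps))
            u_new.append(bvp(c, phi0[i], phi1[i], h))
        u_old = u_new
    return u_old
```

Repeating the sweep produces the alternating monotone sequences used in the proof; only the structure of one step is illustrated here.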
We show that the following inequalities hold:
\[
u_{i}^{0}\ge u_{i}^{2}\ge \cdots \ge u_{i}^{2k}\ge \cdots \ge u_{i}^{2k+1}\ge \cdots \ge u^{3}_{i}\ge u_{i}^{1}, \quad \textrm{in} \, \Omega.
\]
The first iteration for $u_{1}$ reads as
\[
\Delta u_{1}^{1}= \frac{A_{1}(x)}{\varepsilon} u_{1}^{1} u_{2}^{0}\cdots u_{m}^{0}.
\]
Note that since $u_{i}^{0} \ge 0 $ and the boundary conditions $\phi_{i}(x)$ are non-negative, the weak maximum principle (see appendix) implies that $ u_{1}^{1}\ge 0.$ The equation for $u_{2}^{1}$ in (\ref{sy4}) is given by \[
\Delta u_{2}^{1}= \frac{A_{2}(x)}{2\varepsilon}\, (\, u_{1}^{0} u_{2}^{1}\, u_{3}^{0}\cdots u_{m}^{0} \, + \, u_{1}^{1} u_{2}^{1}\, u_{3}^{0}\cdots u_{m}^{0} \, ).
\]
Repeating the same argument, we obtain that $u_{2}^{1}\ge 0$ and consequently \[
u_{i}^{1}\ge 0, \quad i=3, \cdots ,m.
\]
Now we have
\begin{equation}\label{sy44}
\left \{
\begin{array}{ll}
\Delta u_{i}^{1}\ge 0 & \text{ in } \Omega,\\
u_{i}^{1}(x) =u_{i}^{0}(x)= \phi_{i}(x) & \text{ on } \partial \Omega.\\
\end{array}
\right.
\end{equation} Thus the comparison principle implies that $ u_{i}^{1}\le u_{i}^{0} $. The same argument shows
\[
u_{i}^{0} \ge u_{i}^{2}.
\]
In the next step we verify that the following inequalities hold:
\[ u_{i}^{2} \ge u_{i}^{1}\quad i=1, \cdots,m.
\]
To do this, one first verifies that the inequality $u_{1}^{2} \ge u_{1}^{1}$ holds; this fact can then be used to prove the inequality for $i=2,3,\cdots,m$. The same arguments then show that
\[ u_{i}^{3} \ge u_{i}^{1}.
\]
To proceed by induction, assume that
\begin{equation}\label{inq2}
u_{i}^{0}\ge u_{i}^{2}\ge \cdots \ge u_{i}^{2k}\ge u_{i}^{2k+1}\ge \cdots \ge u^{3}_{i}\ge u_{i}^{1}.
\end{equation}
We show that
\[
u_{i}^{2k+1}\le u_{i}^{2k+2}.
\]
To show this, we first check the case $i=1$; the same argument can then be applied successively.
By (\ref{sy4}) and the assumption in (\ref{inq2}) we have
\begin{equation*}
\left \{
\begin{array}{ll}
\Delta u_{1}^{2k+2}= \frac{A_{1}(x) }{\varepsilon} u_{1}^{2k+2} \prod\limits_{j=2}^{m}u_{j}^{2k+1} \le \frac{ A_{1}(x) }{\varepsilon} u_{1}^{2k+2} \prod\limits_{j=2}^{m}u_{j}^{2k},\\\\
\Delta u_{1}^{2k+1}= \frac{A_{1}(x)}{\varepsilon} u_{1}^{2k+1} \prod_{j=2}^{m}u_{j}^{2k}.
\end{array}
\right.
\end{equation*}
Note that $ u_{1}^{2k+1} $ and $ u_{1}^{2k+2}$ have the same boundary value so by the comparison principle
\[
u_{1}^{2k+1}\le u_{1}^{2k+2}.
\]
Now we proceed for $i=2,\cdots,m$.
The same argument using the assumption $u_{i}^{2k+1} \ge u_{i}^{2k-1}$ shows that
\[
u_{i}^{2k+2}\le u_{i}^{2k}.
\]
For the next step, we use the fact $ u_{i}^{2k+2}\le u_{i}^{2k}$ from the previous step to verify
$ u_{i}^{2k+3} \ge u_{i}^{2k+1}. $
Now, since the even and odd subsequences are monotone and bounded, let $ \overline{u}_{i} $ and $ \underline{u}_{i} $ denote their limit functions, so that
\[
u_{i}^{2k} \rightarrow \overline{u}_{i} \quad \textrm{uniformly in } \Omega,
\]
\[
u_{i}^{2k+1} \rightarrow \underline{u}_{i} \quad \textrm{uniformly in } \Omega.
\] Taking the limit in (\ref{sy4}) yields that for $i=1,\cdots, m$ the following hold
\begin{equation}\label{s9}
\left \{
\begin{array}{llll}
\Delta \overline{u}_{i} = \frac{A_{i}(x)}{2\varepsilon}( \overline{u}_{1}\cdots \overline{u}_{i} \underline{u}_{i+1}\cdots \underline{u}_{m} +\underline{u}_{1}\cdots \underline{u}_{i-1} \overline{u}_{i} \underline{u}_{i+1} \cdots \underline{u}_{m}) & \text{ in } \Omega,\\
\Delta \underline{u}_{i} = \frac{ A_{i}(x) }{2\varepsilon}( \underline{u}_{1}\cdots \underline{u}_{i} \overline{u}_{i+1}\cdots \overline{u}_{m} + \overline{u}_{1}\cdots \overline{u}_{i-1}\underline{u}_{i} \overline{u}_{i+1}\cdots \overline{u}_{m} ) & \text{ in } \Omega.
\end{array}
\right.
\end{equation}
The inequality $ u_{i}^{2k} \ge u_{i}^{2k+1}$ implies that
\begin{equation}\label{in1}
\overline{u}_{i} \ge \underline{u}_{i}.
\end{equation}
We will show that in fact equality holds. To do this, first consider the equations for the $m^{\textrm{th}}$ component
\begin{equation}\label{s10}
\left \{
\begin{array}{llll}
\Delta \overline{u}_{m} = \frac{ A_{m}(x) }{2\varepsilon}\, \overline{u}_{m}\left( \overline{u}_{1}\cdots \overline{u}_{i} \, \overline{u}_{i+1} \cdots \overline{u}_{m-1} + \underline{u}_{1}\cdots \underline{u}_{i} \underline{u}_{i+1} \cdots \underline{u}_{m-1} \right) & \text{ in } \Omega,\\
\Delta \underline{u}_{m} = \frac{A_{m}(x) }{2\varepsilon}\,\underline{u}_{m} \left(\underline{u}_{1}\cdots \underline{u}_{i} \underline{u}_{i+1} \cdots \underline{u}_{m-1} \, + \, \overline{u}_{1}\cdots \overline{u}_{i} \, \overline{u}_{i+1} \cdots \overline{u}_{m-1} \right) & \text{ in } \Omega,\\ \overline{u}_{m}= \underline{u}_{m}= \phi_{m}(x) & \text{ on } \partial \Omega,
\end{array}
\right.
\end{equation}
which implies
\[
\overline{u}_{m} = \underline{u}_{m}.
\]
Now checking the equation for $i=m-1$ in (\ref{s9}) and using the previous fact
$\overline{u}_{m} = \underline{u}_{m}$ yields \[
\overline{u}_{m-1} = \underline{u}_{m-1},
\]
and the argument is repeated backwards, which shows equality for every $i$.
To show uniqueness, assume there exists another positive solution $(w_1,\cdots,w_m)$ of the system; then we show \[ u_{i}=w_{i}, \quad i=1, \cdots ,m. \] We will prove that the following inequalities hold:
\begin{equation}\label{ineq1} u_{i}^{2k+1} \le w_{i}\le u_{i}^{2k} , \quad \textrm{ for } \, k \ge 0.
\end{equation} To begin, we show that
\begin{equation}\label{ineq20} w_{i}\le u_{i}^{0}. \end{equation} This is a consequence of the fact that $w_{i}$ satisfies
\begin{equation*}
\left \{
\begin{array}{llll}
\Delta w_{i}\ge 0 & \text{ in } \Omega,\\
w_{i}=u^{0}_{i} & \text{on } \partial \Omega.
\end{array}
\right.
\end{equation*} Next we compare $w_{i}$ with $u_{i}^1$ and we show $ w_{i} \ge u_{i}^1$. As in existence part, first we check for $i=1$ in inequality follows from (\ref{ineq20}) and
\begin{equation*}
\left \{
\begin{array}{ll}
\Delta w_{1} =\frac{ A_1 w_1}{\varepsilon} \prod\limits_{j=2}^{m} w_j & \text{ in } \Omega,\\
\Delta u_{1}^1 =\frac{A_1 u_{1}^{1}}{\varepsilon} \prod\limits_{j=2}^{m} u_{j}^{0} & \text{ in } \Omega.
\end{array}
\right.
\end{equation*}
Now we proceed by induction and assume that the claim is true up to $2k+1$, which means that \[ u_{i}^{2k+1}\le w_{i} \le u_{i}^{2k}. \] Then we show \[ u_{i}^{2k+3}\le w_{i} \le u_{i}^{2k+2}. \] Comparing the equations for $w_i$ and $u_{i}^{2k+2} $ and using the assumption $ u_{i}^{2k+1} \le w_i$ yields the inequality \[ w_{i} \le u_{i}^{2k+2}. \]
The same reasoning shows the inequality $ u_{i}^{2k+3}\le w_{i}$. Now taking the limit in (\ref{ineq1}) shows that
\[
w_i=u_i, \quad i=1, \cdots,m.
\] \end{proof}
\section{Limiting problem}
In this section we study properties of the solutions of system (\ref{s0}) in order to provide estimates and compactness results that allow us to pass to the limit as $\varepsilon $ tends to zero.
As we have seen in the last section, for each fixed $\varepsilon,$ the system (\ref{s0}) has a unique solution. Let $ U^{\varepsilon}=(u_{1}^{\varepsilon}, \cdots ,u_{m}^{\varepsilon} )$ be the unique positive solution of system (\ref{s0}) for fixed $\varepsilon$; then $u_{i}^{\varepsilon} $ for $i=1, \cdots,m$ satisfies the following differential inequality: \[
-\Delta u_{i}^{\varepsilon} \le 0 \quad \text{ in } \quad \Omega.
\]
Also define $ \widehat{u}_{i}^{\varepsilon}$ as
\begin{equation}\label{hat}
\widehat{u}_{i}^{\varepsilon} := u_{i}^{\varepsilon} - \sum_{j\neq i}u_{j}^{\varepsilon},
\end{equation}
then, considering Assumption 2, it is easy to verify that
$$ - \Delta \widehat{u}_{i}^{\varepsilon} \ge 0. $$
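Indeed, using the equations in (\ref{s0}),
$$ - \Delta \widehat{u}_{i}^{\varepsilon} \,=\, - \Delta u_{i}^{\varepsilon} + \sum_{j\neq i} \Delta u_{j}^{\varepsilon} \,=\, \frac{1}{\varepsilon} \Big( \sum_{j\neq i} A_{j}(x) - A_{i}(x) \Big) F(u_{1}^{\varepsilon}, \cdots, u_{m}^{\varepsilon}) \,\ge\, 0, $$
where the last inequality follows from Assumption 2 and the non-negativity of the components.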
Let $h_{i}$ and $ H_{i}$ for $i=1, \cdots, m $ be harmonic with boundary values $\phi_{i} $ and
$\widehat{\phi}_{i}$, respectively, where \[
\widehat{\phi}_{i}=\phi_{i}- \sum_{j\neq i} \phi_{j},
\]
then we have
\[ H_{i} \le \widehat{u_i}^{ \varepsilon} \le u_{i}^{\varepsilon} \le h_{i}, \] which implies
\begin{equation}\label{EE} \frac{\partial h_{i}}{\partial \nu} \le
\frac{\partial u_{i}^{\varepsilon}}{\partial \nu} \quad \text{ on } \partial\Omega. \end{equation}
In this part we show that the solution $ u_{i}^{\varepsilon}$ of system (\ref{s0}) is bounded in $W^{1,2}(\Omega) $ independently of $\varepsilon.$ To do this, we prove several lemmas. \begin{lem}\label{sunf1} Assume $x_0\in \Omega $ and $ B_{2r}(x_0) \subset \Omega$. Let $u$ satisfy the following:
\begin{equation*} \left \{ \begin{array}{lll} \Delta u= f \ge 0 & \text{ in }\, B_{2r}(x_0),\\
0\le u \le M & \text{in } \, B_{2r}(x_0).
\end{array} \right. \end{equation*}
Then \[ \int_{B_{r}(x_0)} f(x) \, dx \le C_{0} M r^{n-2}, \]
for some $C_{0}$ that depends only on the dimension $n$.
\end{lem}
\begin{proof}
Without loss of generality, assume $x_0=0$. By Green's representation formula for the ball, one has
\[
0\le u(0)=\mathop{\,\rlap{--}\!\!\int}\nolimits_{\partial B_{2r}(0)} u(x) \, ds - \int_{B_{2r}(0)} ( \frac{\omega_n}{|x|^{n-2}}- \frac{\omega_n}{(2r)^{n-2}}) f(x) \, dx
\]
\[
\le M - C_{0} \int_{B_{r}}\frac{ \omega_n}{|x|^{n-2}}f(x)\, dx \le M -\frac{C_{0} }{r^{n-2}}\, \omega_n \int_{B_{r}(0)}f(x)\, dx.
\]
Next, rearranging terms proves the Lemma.
\end{proof} \begin{lem}\label{sunf2} Assume that $u_{i}$ satisfies \begin{equation}\label{sunf3} \left \{ \begin{array}{lll} \Delta u_{i}= \frac{1}{\varepsilon} \prod\limits_{j=1 }^{m} \, u_{j} & \text{ in }\ \Omega, \\
u_{i} \ge 0 & \text{ in } \, \Omega,\\
u_{i} =\phi_{i} & \text{ on } \ \partial\Omega. \end{array} \right. \end{equation} Then there exists a constant $C_0$, depending only on $\Omega, n, r$ and
$\|\phi_{i}\|_{C^{1, \alpha}(\partial \Omega)} $, such that \[ \fint_{B_{r}(x_0) \cap \Omega} \frac{1}{\varepsilon} \prod\limits_{j=1 }^{m} \, u_{j}\, dx \le C_{0} r^{n-2}. \] \end{lem} \begin{proof} The proof considers several cases. \begin{enumerate}
\item If $B_{2r}(x_0) \subset \Omega$, then the claim follows from Lemma \ref{sunf1}.
\item If there exists $k$ such that $ \phi_{k}=0 $ on $ \partial\Omega \cap B_{2r}(x_0)$, then we may extend $u_k$ to
\begin{equation*} \overline{u}_{k}= \left \{ \begin{array}{ll}
u_k & \text{ in }\ \Omega, \\
0 & \text{ in } \, \Omega^{c}.
\end{array} \right. \end{equation*}
and apply Lemma \ref{sunf1} to $\overline{u}_k.$
\item If none of the $\phi_{k}$ vanishes on $ \partial\Omega \cap B_{2r}(x_0)$ then, since the product of the boundary values is zero, there must be a $ \phi_{i}$ that vanishes at a point
$ y_1\in \partial\Omega \cap B_{2r}(x_0)$; we may assume that $ \phi_{1}(y_1)=0.$ Also, since
$ u_1 \ge 0 $ it follows that
\[ \frac{\partial u_{1}(y_1)}{\partial \nu} \le 0.
\] Now let $h_1$ solve \begin{equation}\label{sunf4} \left \{ \begin{array}{lll} \Delta h_1= 0 & \text{ in }\, B_{4r}(x_0) \cap \Omega, \\
h_1 =\phi_{1} & \text{ on } \ \partial\Omega \cap B_{4r}(x_0),\\
h_1= 0 & \text{ on} \, \Omega \cap \partial B_{4r}(x_0). \end{array} \right. \end{equation} Since $\partial \Omega$ and $ \phi_{1}$ are $C^{1, \alpha}$ it follows that \[
|\nabla h_1 | \le C_h \quad \text{ in } \, \Omega \cap B_{3r}(x_0). \] Now either $w=u_1- h_1$ satisfies \[ -C_h \le \frac{\partial w }{\partial \nu} \le C_{*}, \]
for some $C_{*}$ (to be chosen later), in which case we may apply Lemma \ref{sunf1} to $w$ since \[ \Delta w= \frac{1}{\varepsilon} \prod\limits_{j=1}^{m} \, u_{j} + \frac{\partial w }{\partial \nu} \mathcal{H}^{n-1}\lfloor_{\partial\Omega} \quad \textrm{ in} \, B_{3r}(x_0), \] or there is a point $y_2 \in \partial \Omega \cap B_{3r}(x_0)$ such that
\[ \frac{\partial w(y_2)}{\partial \nu} \ge C_{*}.
\]
Note that $ \phi_{1} (y_2) > 0$ since otherwise,
\[
0 \ge \frac{\partial u_{1}(y_2)}{\partial \nu}= \frac{\partial w(y_2)}{\partial \nu}+ \frac{\partial h_{1}(y_2)}{\partial \nu} \ge C_{*} - C_h > 0,
\]
provided $C_*$ is large enough. Next, since $ \phi_{1} (y_2)> 0$, there is another $ \phi_{k},$ say
$ \phi_{2}$, such that $ \phi_{2} (y_2) =0.$ Let $h_2$ solve
\begin{equation}\label{sunf5} \left \{ \begin{array}{lll} \Delta h_2= 0 & \text{ in }\, B_{4r}(x_0) \cap \Omega \\
h_2 =\phi_{2} & \text{ on } \ \partial\Omega \cap B_{4r}(x_0),\\
h_2= 0 & \text{ on} \, \Omega \cap \partial B_{4r}(x_0). \end{array} \right. \end{equation}
Then again $ |\nabla h_2 | \le C_h$ in $ B_{3r}(x_0)$ for some $C_h$ depending only on the domain $\Omega$ and $\|\phi_{2}\|_{C^{1, \alpha }}$.
Next let $ u_2= w + h_2+ g$ in $B_{4r}(x_0)\cap \Omega $ where
\begin{equation}\label{sunf6} \left \{ \begin{array}{lll} \Delta g= 0 & \text{ in }\, B_{4r}(x_0) \cap \Omega \\
g =u_2 -w & \text{ on } \ \partial\Omega \cap B_{4r}(x_0),\\
g= 0 & \text{ on} \, \partial B_{4r}(x_0) \cap\Omega. \end{array} \right. \end{equation}
Since $g$ is bounded, $|g|\le 3 M$ on $\partial B_{4r}(x_0) \cap\Omega,$ it follows that \[
|\nabla g| \le C_g \quad \text{ in } B_{3r}(x_0) \cap \Omega, \] where $C_g$ depends on the bound $ M$, on $r$, and on $\Omega$. This leads in particular to
\[
0 \ge \frac{\partial u_{2}(y_2)}{\partial \nu}= \frac{\partial w(y_2)}{\partial \nu}+ \frac{\partial h_{2}(y_2)}{\partial \nu} + \frac{\partial g(y_2)}{\partial \nu} \ge C_{*} - C_h -C_g > 0.
\] This is a contradiction if $C_{*}$ is large enough, which completes the proof. \end{enumerate} \end{proof} \begin{prop}\label{sunf16} Let $ u_{1} , \cdots ,u_m$ be as in the previous lemma. Then there exists a constant $C_0$ (independent of $\varepsilon$) such that \[
\| u_i\|_{W^{1,2}(\Omega)} \le C_0. \] \end{prop} \begin{proof}
Cover $\Omega$ by finitely many, say $N$, balls $B_{r}(x_k)$ and notice that
\[
\int_{\Omega} \frac{1}{\varepsilon} \prod\limits_{j=1}^{m} \, u_{j} \le \sum_{k=1}^{N} \int_{B_{r}(x_k) } \frac{1}{\varepsilon} \prod\limits_{j=1}^{m} \, u_{j} \le N\, C_{0} r^{n-2}.
\]
Next let $f=\frac{1}{\varepsilon} \prod\limits_{j=1 }^{m} \, u_{j} $ and define
\[
v= -\int_{\Omega} \frac{\omega_n}{|x-y|^{n-2} }f(y) \, dy.
\]
Then $v$ satisfies
\[
\Delta v= f \chi_{\Omega} \quad \text{in} \,\mathbb{R}^n,
\]
which implies
\begin{equation}\label{sunf7}
\int_{B_{R}(0)} |\nabla v|^{2}\, dx = \int_{B_{R}(0)} f(x) v(x) \, dx+ \int_{\partial B_{R}(0)} v\, \frac{\partial v}{\partial \nu}\, ds \le C,
\end{equation}
where $R$ is chosen so large that $\Omega \subset B_{R}(0).$ Now let $ u_i= H_i +v$ where
\begin{equation}\label{sunf8} \left \{ \begin{array}{lll} \Delta H_i= 0 & \text{ in }\, \Omega \\
H_i =\phi_i- v & \text{ on } \ \partial\Omega. \end{array} \right. \end{equation} Since $ \phi_i \in C^{1, \alpha} $ and $ v\in W^{1,2}(\Omega)$ by (\ref{sunf7}), it follows that
$ H_i \in W^{1,2}(\Omega) $ with bounds depending only on $ \| v\|_{W^{1,2}(\Omega)}, \| \phi_i\|_{C^{1,\alpha}(\partial\Omega)} $ and $\Omega. $ In particular, $ \| u_i\|_{W^{1,2}(\Omega)}$ is bounded independently of $\varepsilon $. \end{proof}
The above proposition shows that, up to a subsequence (still denoted by $u_{i}^{\varepsilon} $), we get
\[
u_{i}^{\varepsilon} \rightharpoonup u_{i} \quad \text{in} \quad H^{1}_{0}(\Omega).
\]
The main result of this section is Theorem \ref{lim}, which describes the asymptotic behaviour of system (\ref{s1})
as $\varepsilon$ tends to zero.
\begin{thm}\label{lim}
Let $U^{\varepsilon}=(u_{1}^{\varepsilon},\cdots ,u_{m}^{\varepsilon})$ be a solution of the system at fixed $\varepsilon$. As $ \varepsilon $ tends to zero, there exists $ U=(u_1,\cdots,u_m) \in (H^{1}(\Omega))^{m} \cap (L^\infty(\Omega))^m$ such that, for all $ i=1,\cdots,m$: \begin{enumerate}
\item $\Delta u_i \geq 0$ in the sense of distributions.
\item Up to a subsequence, $u_{i}^{\varepsilon}- u_{i} \rightarrow 0$ strongly in $H^{1}_{0} (\Omega)$.
\item $\prod\limits_{i=1}^{m} u_{i}=0$ a.e.\ in $ \Omega.$
\end{enumerate} \end{thm}
\begin{proof}
Proposition \ref{sunf16} shows the existence of a weak limit $u_{i}$ such that, up to a subsequence, \[ u_{i}^{\varepsilon}\rightharpoonup u_{i} \quad \textrm{in } \, H_{0}^{1}. \] The weak limits $u_{i}$, $i=1, \cdots, m$, satisfy the following differential inequalities
\[
-\Delta u_{i} \le 0, \quad - \Delta \widehat{u}_{i} \ge 0 \text{ in } \quad \Omega,
\]
since we can pass to the weak limit in the differential inequalities for $ u_{i}^{\varepsilon}$ and $ \widehat{u}_{i}^{\varepsilon}.$ To show the strong convergence, we show that
\[
\int_{\Omega} | \nabla u_{i}^{\varepsilon} |^{2} \, dx \rightarrow \int_{\Omega} | \nabla u_{i}|^{2}\, dx.
\]
By weak lower semicontinuity of the Dirichlet norm, it suffices to show
\[
\int_{\Omega} | \nabla u_{i}|^{2} \, dx \ge \limsup_{\varepsilon \to 0} \int_{\Omega} | \nabla u_{i}^{\varepsilon} |^{2} \, dx.
\] We multiply the inequality $-\Delta u_{i}^{\varepsilon} \le 0 $ by $u_{i}^{\varepsilon}$ and integrate by parts:
\[
\int_{\Omega} |\nabla u_{i}^{\varepsilon}|^{2} \, dx - \int_{\partial \Omega} u_{i}^{\varepsilon} \frac{\partial u_{i}^{\varepsilon}}{\partial n} \, ds \le 0.
\]
This implies
\begin{equation}\label{1E}
\int_{\partial \Omega} u_{i} \frac{\partial u_{i} }{\partial n} \, ds \ge \limsup_{\varepsilon \to 0} \int_{\Omega} |\nabla u_{i}^{\varepsilon}|^{2} \, dx.
\end{equation} Next we multiply the equation for $u_{i}^{\varepsilon} $ by $ u_{i}$ to obtain \[
- \int_{\Omega} \nabla u_{i}^{\varepsilon} \cdot \nabla u_{i} \, dx + \int_{\partial \Omega} u_{i} \frac{\partial u_{i}^{\varepsilon}}{\partial n} \, ds = \frac{1}{\varepsilon}\int_{\Omega} u_{i} \, \prod\limits_{j=1}^{m} u_{j}^{\varepsilon} \, dx.
\] Taking the limit as $\varepsilon$ tends to zero, and using the weak convergence of $u_{i}^{\varepsilon}$ together with the previous part, we obtain
\begin{equation}\label{2E}
- \int_{\Omega} | \nabla u_{i} |^{2} \, dx + \int_{\partial \Omega} u_{i} \frac{\partial u_{i}}{\partial n} \, ds = 0.
\end{equation} From (\ref{1E}) and (\ref{2E}) the result follows.
For the last part, fix a point $x_{0} \in \Omega$ and let the index $i$ be such that
\[
u^{\varepsilon}_{i}(x_0)=\underset{ 1\le k\le m} {\max} u^{\varepsilon}_{k}(x_{0}).
\]
Now assume that $u_{i}^{\varepsilon}(x_{0})=c > 0$; then by H\"{o}lder continuity there is $r$ such that
\[
|u_{i}^{\varepsilon}(x)- u_{i}^{\varepsilon}(x_{0})| \le \frac{c}{2}, \quad x \in B(x_{0}, r).
\]
Next we use the fact that the functions $u_{i}^{\varepsilon}$, $i=1, \cdots,m$, are subharmonic. By the mean value property for subharmonic functions (see the proof of Theorem 2.1 in \cite{GT}),
\begin{equation} \label{eq_sun}
\begin{split}
\mathop{\,\rlap{--}\!\!\int}\nolimits_{\partial B(x_{0},r)} | u_{i}^{\varepsilon}(x_{0}) - u_{i}^{\varepsilon}(y) | \, dy & = \int_{0}^{r} ( \int_{ B(x_{0},s)} \Delta u_{i}^{\varepsilon} ) \, \frac{ds}{s^{n-1}} \\
& \ge r^{2} \mathop{\,\rlap{--}\!\!\int}\nolimits_{B(x_{0},r)} \Delta u_{i}^{\varepsilon} \, dx.
\end{split} \end{equation}
From here the following holds
\begin{equation}\label{E1}
\mathop{\,\rlap{--}\!\!\int}\nolimits_{B(x_{0},r)} \Delta u_{i}^{\varepsilon} \, dx = \mathop{\,\rlap{--}\!\!\int}\nolimits_{B(x_{0},r)}\frac{u_{i}^{\varepsilon}}{\varepsilon} \prod\limits_{j\neq i}^{m} u_{j}^{\varepsilon} (x)\le \frac{c}{2r^2}.
\end{equation}
Note that in the ball $B(x_{0},r)$ we have $u_{i}^{\varepsilon} \ge \frac{c}{2}, $ so the previous estimate yields \begin{equation}\label{sunshine2}
\mathop{\,\rlap{--}\!\!\int}\nolimits_{B(x_{0},r)}\frac{1}{\varepsilon} \prod\limits_{j\neq i}^{m} u_{j}^{\varepsilon} (x)\, dx \le \frac{1}{r^2}. \end{equation} Next, letting $\varepsilon$ tend to zero in (\ref{sunshine2}) yields \[ \prod\limits_{j\neq i}^{m} u_{j}^{\varepsilon} (x) \rightarrow 0 \quad \text{in} \quad B(x_{0},r). \]
\end{proof}
Let $w_1$ be the first eigenfunction of the Laplace operator in $\Omega,$ i.e.,
\begin{equation*}
\left \{
\begin{array}{ll}
-\Delta w_{1}=\lambda_{1} w_1 & \text{ in } \Omega,\\
w_1=0 & \text{on } \partial \Omega.
\end{array}
\right.
\end{equation*}
The first eigenfunction does not change sign, and we may therefore take it to be positive and normalize it so that
$\| w_1\|_{L^\infty} =1.$ Multiplying the equation \[ \Delta u_{i}^{\varepsilon}= \frac{ A_i(x) }{\varepsilon} \prod\limits_{j=1}^{m} u_{j}^{\varepsilon} (x), \] by $w_1$ and integrating over $\Omega$ yields \[ \int_{\Omega} w_1\, \Delta u_{i}^{\varepsilon}\, dx = \int_{\Omega} \frac{ A_i(x) }{\varepsilon} \prod\limits_{j=1}^{m} u_{j}^{\varepsilon} (x) \, w_1 \, dx. \] Integrating by parts and using that $w_1$ vanishes on the boundary, we obtain \begin{equation*} \begin{split} \int_{\Omega} \frac{ A_i(x) }{\varepsilon} \prod\limits_{j=1}^{m} u_{j}^{\varepsilon} (x) \, w_1 \, dx & = \int_{\Omega} u_{i}^{\varepsilon}\, \Delta w_1 \, dx- \int_{ \partial \Omega} u_{i}^{\varepsilon} \, \frac{ \partial w_1 }{\partial n}\, ds \\
& = -\lambda_{1} \int_{\Omega} u_{i}^{\varepsilon}\, w_1 \, dx- \int_{ \partial \Omega} \phi_{i} \, \frac{ \partial w_1 }{\partial n}\, ds. \end{split} \end{equation*}
Now, from the bound on $u_{i}^{\varepsilon}$ and the fact that the normal derivative of the first eigenfunction is bounded on the boundary, we conclude
\[
\int_{\Omega} \frac{ A_i(x) }{\varepsilon} \prod\limits_{j=1}^{m} u_{j}^{\varepsilon} (x) \, w_1 \, dx \le C.
\]
We know that, for $i=1, \cdots ,m$, the solutions $u_{i}^{\varepsilon}$ are H\"{o}lder continuous:
\[
\| u_{i}^{\varepsilon}\|_{C^{\alpha}} \le C_i,
\]
where the constant $C_i$ is independent of $\varepsilon.$
Note that, since
\[
w_{1}(x) > 0 \quad \textrm{in the interior of } \Omega,
\]
the above inequality yields
\begin{equation}\label{bound1}
\int_{\Omega'} \Delta u_{i}^{\varepsilon} \le C \quad \text{ for compact subsets }\Omega' \subset \Omega,
\end{equation}
where the constant $C$ is independent of $ \varepsilon$ (but may depend on $\Omega'$). It remains to show that $\Delta u_{i}^{\varepsilon}$ stays bounded at points close enough
to the boundary. Let $C_{i}$ and $\beta_{i} $ denote the H\"{o}lder constant and H\"{o}lder exponent of $u^{\varepsilon}_{i}.$ Choose a strip around the boundary such that
\begin{equation}\label{dist}
\textrm{dist}(x, \partial \Omega) \le (\frac{\varepsilon}{C_{i}})^{1/ \beta_{i}} \quad \forall i=1, \cdots ,m.
\end{equation}
Let $y\in \partial \Omega$ be a point of minimal distance to $x$. Then, by the assumption on the boundary values, there is $k$ such that $u^{\varepsilon}_{k}(y)=0$ and
\[
\frac{ |u^{\varepsilon}_{k}(x)- u^{\varepsilon}_{k}(y) | }{|x-y |^{\beta_{k}}} \le C_k.
\]
The previous inequality and (\ref{dist}) imply that
\begin{equation}\label{bound2}
u^{\varepsilon}_{k}(x) \le \varepsilon.
\end{equation} Combining (\ref{bound1}) and (\ref{bound2}) yields that the Laplacian of $u_{i}^{\varepsilon}$ is bounded.
\begin{rem} The uniform bound on the normal derivative of $ u_{i}^{\varepsilon} $ yields estimates for the limiting problem as follows. Integrate the equation
\[
\Delta u_{i}^{\varepsilon}= \frac{A_i(x) }{\varepsilon} \prod\limits_{j=1}^{m} u_{j}^{\varepsilon} (x)
\]
to obtain \[ \int_{\partial\Omega} \frac{\partial u_{i}^{\varepsilon}}{\partial n}\, ds = \int_{\Omega} \frac{A_i(x) }{\varepsilon} \prod\limits_{j=1}^{m} u_{j}^{\varepsilon} (x)\, dx. \] From here we get
\[
\int_{\Omega} \frac{A_i(x) }{\varepsilon} \prod\limits_{j=1}^{m} u_{j}^{\varepsilon} (x)\, dx \le C,
\]
which shows
\[
\int_{\Omega} A_i(x) \prod\limits_{j=1}^{m} u_{j}^{\varepsilon} (x)\, dx \rightarrow 0 \quad \text{ as $ \varepsilon$ tends to zero. }
\] \end{rem} \begin{defn} Consider the non-empty sets $\Omega_{i}:={\{ x \in \Omega : u_{i}(x) =0 }\}$. Then the free boundaries (interfaces) are defined as \[ \Gamma_{i,j} = \partial\Omega_{i} \cap \partial \Omega_{j}\cap \Omega. \] \end{defn} In the next lemma we give the free boundary conditions for the case $A_{i} =1$.
\begin{lem}\label{F3}
The following conditions hold on the free boundary $\Gamma_{i,j}.$ \begin{enumerate}
\item $
\frac{\partial u_i}{\partial n}|_{\Omega_j} =- \frac{\partial u_j}{\partial n}|_{\Omega_i},
$
\item
$
\frac{\partial u_k}{\partial n}|_{\Omega_j} - \frac{\partial u_k}{\partial n}|_{\Omega_i}=\frac{\partial u_i}{\partial n}|_{\Omega_j} \quad k\neq i,j.
$
\end{enumerate}
\end{lem}
\begin{proof}
Let $x_0$ be a free boundary point in $\Gamma_{i,j}.$
Note that
\[
\Delta (u_k -u_j) =0 \quad \text{in } B_{r}(x_0) \setminus \Gamma_{i,j}.
\]
In the sense of distribution we have
\[
\Delta (u_k -u_j) = \frac{\partial (u_k - u_j)}{\partial n} H^{n-1}|_{\Gamma_{i,j}} \quad \text{in } \, B_r.
\]
Splitting $B_r= (B_{r} \cap \Omega_i) \cup (B_{r} \cap \Omega_j) $ and using the fact that in $\Omega_j $ we have $u_j = 0, $ the second relation is proved.
\end{proof}
\begin{rem}
In \cite{W}, uniqueness of the limiting solution of system (\ref{f20}) for an arbitrary number of components is shown. Consider the metric space $ \Sigma $ defined by \[ \Sigma = {\{ (u_1, u_2 , \cdots, u_m) \in \mathbb{R}^m : u_{i}\ge 0,\, u_{i}\cdot u_{j}=0 \quad \text{for} \quad i\neq j\}}.
\] In \cite{W} (see Theorem 1.6) it is shown that the limiting solution $(u_1, \cdots ,u_m)$ of (\ref{f20}) is a harmonic map into the space $ \Sigma.$ By definition, a harmonic map is a critical point of the following energy functional
\[
\int_{\Omega} \sum_{i=1}^{m} \frac{1}{2}| \nabla u_{i}|^{2} dx,
\]
among all nonnegative segregated states ($u_i \cdot u_j = 0$ a.e.) with the same boundary conditions.
Also, in \cite{AB} an alternative proof of uniqueness for the limiting case of system (\ref{f20}) is given, which is more direct and based on properties of the limiting solutions. Although some properties of the limiting solutions of systems (\ref{s1}) and (\ref{f20}) are similar, the proof of uniqueness for system (\ref{s1}) as $\varepsilon$ tends to zero remains a challenging problem. \end{rem} Define the energy associated with the $m$ densities by \begin{equation*}
E(U)=\int_{\Omega} \sum _{ i}|\nabla u_{i}(x)|^{2} dx. \end{equation*} Now consider the following problem \[ \min E(U), \] over the closed but non-convex set \begin{equation*}
S= \left\{ (u_1, \cdots u_m): u _{i} \in H^{1}(\Omega),\, u_{i} \geq 0, \, \prod\limits_{i=1}^{m} u_{i} (x)=0, \, u_{i}|_{\partial\Omega}=\phi_{i} \right\}. \end{equation*}
Existence of a minimizer is straightforward. The variation \[ v_{i} = (1 + \varepsilon \varphi_{i}) u_i, \, i=1,\cdots,m, \] with $ \varphi_{i} \in C^{\infty}_{c}(\Omega)$ yields \[ u_{i}\ge 0, \quad u_{i}\cdot \Delta u_{i} = 0, \quad \Pi_{j=1}^{m} u_j=0. \] This implies that each $u_i$ is harmonic in its support, which does not hold for our limiting solution. In fact, for system (\ref{s1}) we show in Theorem 3.2 that \[
\prod\limits_{i=1}^{m} u_{i} =0 \quad \textrm{and} \quad \Delta u_{i} \ \textrm{is bounded}.
\]
Figure 1 also shows that $u_1$ is not smooth in its support; the $ \Delta u_{i} $ are Dirac measures on the interfaces.
\section{Explicit solutions in the limiting case}\label{explicit}
In this section we give an explicit solution and the rate of convergence for the limiting solution of the following system \begin{equation}\label{s113} \left \{ \begin{array}{lll} \Delta u_{i}^{\varepsilon}= \frac{ A_{i} (x) }{\varepsilon} \prod\limits_{j=1}^{m} (u_{j}^{\varepsilon})^{\alpha_j} (x) & \text{ in } \Omega,\\ u_{i} =\phi_{i} & \text{ on } \partial \Omega,
\end{array} \right. \end{equation}
for the cases where the $A_{i}(x)$ are all equal or $A_i=C_i$ are constants.
\subsection{Construction of Solutions}
It is easy to check that for every $ \varepsilon$ \[
\Delta( u_{1}^{\varepsilon}- u_{i+1}^{\varepsilon}) =0, \quad i=1,\cdots ,m-1,
\]
which remains true as $\varepsilon$ tends to zero. First of all define \begin{equation}
w_i = u_1-u_{i+1}, \qquad i=1,\ldots,m-1, \end{equation} then $w_i$ is the harmonic extension of the Dirichlet value
$\phi_1 - \phi_{i+1}.$ This means that $w_i$ for $i=1, \cdots, m-1$ is the solution of
\begin{equation}\label{sunshine3} \left \{ \begin{array}{lll} \Delta w_{i}= 0 & \text{ in } \Omega,\\ w_{i} =\phi_{1}- \phi_{i+1} & \text{ on } \partial \Omega.
\end{array} \right. \end{equation}
Note that the nonnegativity of the $u_i$ is equivalent to $u_1 \geq w_i$. Thus, an obvious candidate solution is given by \begin{equation}\label{m1}
u_1(x) = \max \, \left( \underset{i=1,\ldots,m-1}{\max } w_i(x), 0 \right)\\ \end{equation} and \begin{equation}\label{m2}
u_i = u_1 - w_i, \qquad i=2,\ldots,m. \end{equation} Obviously, by this construction we have $u_i \geq 0$ and moreover
\[
u_1(x) u_2(x) \ldots u_m(x) = 0, \quad \text{ for\, all } \quad x \in \Omega.
\]
To see the latter, fix $x$ and let $j$ be such that $w_j(x) \geq w_i(x)$ for all $i$. If $w_j(x) \le 0$, then $u_1(x)=0$; otherwise
\[
u_{j+1}(x) = u_1(x) - w_j(x) = 0.
\]
We finally need to verify $\Delta u_i \geq 0$. For $u_{1}$ this follows from the fact that the maximum of harmonic functions is subharmonic; for the remaining $u_{i}$ it then follows from (\ref{m1}) and (\ref{m2}).
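The construction (\ref{m1})--(\ref{m2}) is easy to implement numerically. The following Python sketch (with hypothetical boundary data; in one dimension harmonic extensions are simply affine interpolants of the boundary values) illustrates the construction and checks nonnegativity and segregation:

```python
import numpy as np

# 1D sketch of the construction (m1)-(m2) on [0,1]: harmonic functions
# are affine, so the harmonic extension of phi_1 - phi_{i+1} is a line.
# Hypothetical boundary values (m = 3 components, each row = values at x=0, x=1):
phi = np.array([[1.0, 0.0],
                [0.0, 1.0],
                [0.0, 0.0]])
m = phi.shape[0]
x = np.linspace(0.0, 1.0, 101)

# w_i = harmonic (affine) extension of phi_1 - phi_{i+1}, i = 1,...,m-1
w = [(phi[0, 0] - phi[i, 0]) * (1 - x) + (phi[0, 1] - phi[i, 1]) * x
     for i in range(1, m)]

# u_1 = max( max_i w_i, 0 ) and u_{i+1} = u_1 - w_i
u = [np.maximum(np.max(np.vstack(w), axis=0), 0.0)]
for wi in w:
    u.append(u[0] - wi)

assert all((ui >= -1e-12).all() for ui in u)            # nonnegativity
assert np.max(np.prod(np.vstack(u), axis=0)) < 1e-12    # u_1 * ... * u_m = 0
```

For this data, $w_1 = 1-2x$ and $w_2 = 1-x$, so the construction returns $u_1 = 1-x$, $u_2 = x$, $u_3 = 0$, a segregated state matching the boundary values.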
\begin{rem}
Let $v_{i}$ be defined as follows: \[ v_i=u_{2}- u_{i}, \quad i \neq 2, \] then set \[
u_{2}= \max \left( \max_{i\neq 2} v_i(x), 0 \right).
\]
From this we can recover the other components by
\[
u_{i}= u_{2}- v_i, \quad i=1,3,\cdots, m. \] One can check that this choice gives the same solution as (\ref{m1}) and (\ref{m2}); for the case $m=3$ this is straightforward. \end{rem}
\subsection{Convergence Rate}
We now turn our attention to a rate of convergence of the solutions as $\varepsilon \rightarrow 0$. Note that \begin{equation}
w_i = u_1^\varepsilon-u_i^\varepsilon, \qquad i=1,\ldots,m \end{equation} is harmonic with Dirichlet data $\phi_1 - \phi_i$, hence coincides with the corresponding function of the previous section; in particular, it is independent of $\varepsilon$.
We thus have $$ \Delta u_{1}^{\varepsilon}= \frac{ 1 }{\varepsilon} \prod\limits_{i=1}^{m} u_{i}^{\varepsilon} = \frac{ 1 }{\varepsilon} \prod\limits_{i=1}^{m} (u_{1}^{\varepsilon}-w_i). $$ Now we have $0 \leq u_{1}^{\varepsilon}-w_i$ and $u_{1}^{\varepsilon}-w_i \geq u_1^\varepsilon - u_1$, hence
$$\varepsilon \Delta u_{1}^{\varepsilon} \geq |u_1^\varepsilon - u_1|^m, $$ and therefore
$$ \varepsilon \int_\Omega |\nabla (u_1^\varepsilon - u_1)|^2 ~dx + \int_\Omega |u_1^\varepsilon - u_1|^{m+1} ~dx \leq
- \varepsilon \int_\Omega \nabla (u_1^\varepsilon - u_1)\cdot \nabla u_1 ~dx .$$ Applying Young's inequality on the right-hand side we deduce $$ \Vert u_1^\varepsilon - u_1 \Vert_{L^{m+1}(\Omega)} \leq C \varepsilon^{1/(m+1)}. $$
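For completeness, the absorption step behind the last estimate can be spelled out as follows (this is the standard Young-inequality argument implicit in the text):

```latex
% Young's inequality on the right-hand side:
- \varepsilon \int_\Omega \nabla (u_1^\varepsilon - u_1)\cdot \nabla u_1 ~dx
\leq \frac{\varepsilon}{2} \int_\Omega |\nabla (u_1^\varepsilon - u_1)|^2 ~dx
   + \frac{\varepsilon}{2} \int_\Omega |\nabla u_1|^2 ~dx,
% absorbing the first term on the right into the left-hand side:
\frac{\varepsilon}{2} \int_\Omega |\nabla (u_1^\varepsilon - u_1)|^2 ~dx
+ \int_\Omega |u_1^\varepsilon - u_1|^{m+1} ~dx
\leq \frac{\varepsilon}{2} \int_\Omega |\nabla u_1|^2 ~dx \leq C\varepsilon .
```

The second line then gives the stated $L^{m+1}$ bound of order $\varepsilon^{1/(m+1)}$.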
\section{Numerical Study of the Limiting Problem}
This section provides some examples of numerical approximations to the limiting problem of the following
system. \begin{equation}\label{square1} \left \{ \begin{array}{lll} \Delta u_{i}= \frac{1}{\varepsilon} \prod\limits_{j=1}^{m} \, u_{j} & \text{ in }\ \Omega, \\
u_{i} =\phi_{i} & \text{ on } \ \partial\Omega. \end{array} \right. \end{equation} In our examples, we directly implemented the fixed-point technique from the existence proof of Theorem \ref{sun0} for a fixed small value of $\varepsilon$, as well as the method of Section \ref{explicit}; both approaches give essentially the same solutions as $\varepsilon$ goes to zero.
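To make this concrete, the following Python sketch (our own minimal one-dimensional variant, not the implementation used for the figures; the linearized per-component sweeps and all numerical parameters are assumptions) solves the $\varepsilon$-system by repeatedly solving, for each component, the linear problem obtained by freezing the other components at their current values:

```python
import numpy as np

# Hypothetical 1D analogue of the fixed-point scheme: solve
#   u_i'' = (1/eps) * prod_j u_j  on (0,1),  u_i = phi_i on {0,1},
# by sweeping over the components and, for each one, solving the
# linear problem obtained by freezing the other components.
def solve_system(phi_left, phi_right, eps=1e-3, n=101, sweeps=500):
    x = np.linspace(0.0, 1.0, n)
    h = x[1] - x[0]
    m = len(phi_left)
    # start from the harmonic (affine) extensions of the boundary data
    u = [pl * (1 - x) + pr * x for pl, pr in zip(phi_left, phi_right)]
    # second-difference matrix on the interior nodes
    A = (np.diag(-2.0 * np.ones(n - 2)) + np.diag(np.ones(n - 3), 1)
         + np.diag(np.ones(n - 3), -1)) / h**2
    for _ in range(sweeps):
        for i in range(m):
            other = np.prod([u[j] for j in range(m) if j != i], axis=0)
            # linearized equation: u_i'' - (other/eps) * u_i = 0
            M = A - np.diag(other[1:-1]) / eps
            b = np.zeros(n - 2)
            b[0] -= u[i][0] / h**2    # boundary contributions
            b[-1] -= u[i][-1] / h**2
            u[i][1:-1] = np.linalg.solve(M, b)
    return x, u

# two components with boundary data phi_1 = (1,0), phi_2 = (0,1):
x, u = solve_system([1.0, 0.0], [0.0, 1.0])
print(np.max(u[0] * u[1]))  # the components segregate as eps -> 0
```

With this boundary data, $w=u_1^\varepsilon-u_2^\varepsilon$ is the affine harmonic function $1-2x$, so for small $\varepsilon$ the computed components approach $\max(1-2x,0)$ and $\max(2x-1,0)$.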
\begin{exam}\label{ex:E1}
Let $\Omega =B_{1}, m=3.$ The boundary values $ \phi_{i}$ for $ i=1,2,3$ are defined by \begin{equation*} \phi_{1}(1,\Theta)= \left \{ \begin{array}{ll}
|\sin(\frac{3}{2}\Theta)| & 0 \leq\Theta \leq \frac{4\pi}{3},\\ 0 & \text{ elsewhere,} \end{array} \right.
\hspace{0.1in}
\phi_{2}(1,\Theta)= \left \{ \begin{array}{ll}
|\sin(\frac{3}{2}\Theta)| & \frac{2\pi}{3} \leq\Theta \leq 2\pi,\\ 0 & \text{elsewhere.} \end{array} \right. \end{equation*}
\begin{equation*} \phi_{3}(1,\Theta)= \left \{ \begin{array}{lr}
|\sin(\frac{3}{2}\Theta)| & \frac{4\pi}{3} \leq\Theta \leq 2\pi + \frac{2\pi}{3},\\ 0 & \text{elsewhere.} \end{array} \right. \end{equation*} Here the boundary conditions satisfy \[ \phi_{1}\cdot \phi_{2} \cdot \phi_{3}= 0. \]
The surface of $u_1$ is depicted in Figure 1. One can also check the jump in the gradient of $u_1$ along $ \Gamma_{2,3}, $
which was shown in part 2 of Lemma \ref{F3}. Figure 2 shows the surface of $u_1 + u_2 + u_3$.
\begin{figure}
\caption{ surface of $u_1$ }
\label{fig:E}
\end{figure}
\begin{figure}
\caption{surface of $u_1 + u_2 + u_3$.}
\label{fig:E2}
\end{figure}
\end{exam}
\begin{exam}\label{farid20} Let $\Omega =[-1,1]\times [-1, 1]$ and $m=4.$ The boundary values $\phi_{i},$ (i=1,2,3,4) are given as follows: \begin{equation*} \phi_{1}= \left \{ \begin{array}{lr} 1-x^{2} & x\in[-1, 1] \ {\&} \ y=1 ,\\ 0 & \ \ \text {elsewhere.} \end{array} \right. \hspace{0.1in}
\phi_{2}= \left \{ \begin{array}{lr} 2(1-y^{2}) & y \in [-1, 1] \ {\&} \ x=1 ,\\ 0 & \ \ \text {elsewhere.} \end{array} \right. \end{equation*}
\begin{equation*} \phi_{3}= \left \{ \begin{array}{lr} 3(1-x^{2}) & x\in [-1 ,1] \ {\&} \ y=-1 ,\\ 0 & \ \ \text {elsewhere.} \end{array} \right.
\hspace{0.1in}
\phi_{4}= \left \{ \begin{array}{lr} 4(1-y^{2}) & y \in [-1 , 1] \ {\&} \ x=-1 ,\\ 0 & \ \ \text {elsewhere.} \end{array} \right. \end{equation*} We implemented the iterative scheme given by Lemma 2.2 with $ \varepsilon=10^{-8}$, as well as the method given by (\ref{m1}) and (\ref{m2}). The obtained solutions are the same, and the surface of $u_1$ is shown in Figure \ref{fig:EL}. \begin{figure}
\caption{surface of $u_1$.}
\label{fig:EL}
\end{figure} The interfaces are shown in Figure \ref{fig:D}.
\begin{figure}
\caption{Free boundary and supports of the components.}
\label{fig:D}
\end{figure}
In Figure \ref{fig6} we plot the Laplacian of $u_1$ on the interfaces. Since the Laplacian of $u_1$ is a Dirac measure along the interfaces, we scaled $\Delta u_1$ by multiplying by the mesh size.
\begin{figure}
\caption{Laplacian of $u_1$ as a (scaled) measure on the interfaces. The mesh size is $\triangle x=\triangle y=10^{-3}.$}
\label{fig6}
\end{figure} \end{exam}
\begin{exam}\label{farid22} Next, we change boundary values as below. \begin{equation*} \phi_{1}= \left \{ \begin{array}{lll} 1-x^{2} & x\in[-1, 1] \ {\&} \ y=1 ,\\ 1-y^{2} & y\in[-1, 1] \ {\&} \ x=1 ,\\ 0 & \ \ \text {elsewhere.} \end{array} \right. \hspace{0.1in}
\phi_{2}= \left \{ \begin{array}{lll} 2(1-y^{2}) & y \in [-1, 1] \ {\&} \ x=-1 ,\\ 2(1-x^{2}) & x \in [-1, 1] \ {\&} \ y=1 ,\\ 0 & \ \ \text {elsewhere.} \end{array} \right. \end{equation*}
\begin{equation*} \phi_{3}= \left \{ \begin{array}{lll} 3(1-x^{2}) & x\in [-1 ,1] \ {\&} \ y=1 ,\\ 3(1-y^{2}) & y\in [-1 ,1] \ {\&} \ x=1 ,\\ 0 & \ \ \text {elsewhere.} \end{array} \right.
\hspace{0.1in}
\phi_{4}= \left \{ \begin{array}{lll} 4(1-x^{2}) & -1\le x \le 1 \ {\&} \ y=-1 ,\\ 4(1-y^{2}) & y \in [-1 , 1] \ {\&} \ x=-1 ,\\ 0 & \ \ \text {elsewhere.} \end{array} \right. \end{equation*}
The following figure shows the interfaces.
\begin{figure}
\caption{Free boundaries.}
\label{fig:EL2}
\end{figure}
\end{exam}
\renewcommand{\refname}{REFERENCES}
\end{document}
Having introduced the problem, the related work, and challenges, Section 2 will discuss in further detail the related non-probabilistic methods and their general solution steps. Section 3 provides an in-depth treatment of the NINOS2 feature and the resulting novel NOD method. The experimental evaluation is shown in Section 4 comparing NOD results with the NINOS2 and LSF features. Finally, Section 5 presents the conclusion and hints for future work.
The majority of non-data-driven NOD methods follow a general scheme that could be divided into three steps, see Fig. 1: pre-processing, reduction, and peak picking [3]. In the pre-processing step, the signal is processed in order to emphasize features related to onsets or to remove noise, which makes the detection easier. The second and most important step is the reduction step in which the ODF, sometimes alternatively named novelty function, is computed based on extracted signal features. The resulting ODF is then run through the peak-picking step, in which distinct points marking the estimated onset times are selected from the ODF. This latter step is sometimes referred to as the Onset Selection Function (OSF), and its accuracy highly depends on the quality of the ODF in terms of the presence of noise and artifacts and in terms of the presence and waveform shape of ODF peaks at onset locations [3, 14]. Several OSF approaches have been proposed, either with fixed or adaptive peak picking thresholds, and their impact on the overall onset detection performance has been experimentally assessed [29].
Solution scheme for NOD. An example of the output of the different steps is shown when NOD is applied on an input music excerpt
As stated previously, the baseline non-data-driven methods operate on spectral features of the input signal in order to detect spectral dissimilarities at note onsets. Their respective ODFs are calculated as the first-order difference of the spectral magnitude over successive frames, with some enhancements like adding phase information or adaptive whitening. The SF ODF is defined as follows,
$$ \text{SF} \left(n \right) \ = \ \sum_{k=1}^{\frac{N}{2} - 1} \left\lbrace H \left(\lvert X_{k} \left(n\right) \rvert - \lvert X_{k} \left(n-1\right) \rvert \right) \right\rbrace, $$
where H is the half-wave rectification operator keeping track only of positive changes and neglecting the negative ones representing offsets,
$$ H\left(x \right) \ = \ \frac{\left(x+ \lvert x \rvert \right)}{2}, $$
Xk(n) is the N-point short-time Fourier transform (STFT) at frame index n and frequency k of the windowed time-domain input signal x(n),
$$ X_{k}(n) \ = \ \sum_{m=-\frac{N}{2}}^{\frac{N}{2}-1} w(m)x(n+m)e^{-\frac{2j\pi mk}{N}}, $$
and w(m) is a length-N window function. In order to make the ODF invariant to signal scaling, in [30], the first-order difference was applied to the log-magnitude spectrum which is equivalent to the logarithm of the magnitude spectrum ratio between successive frames. In other words, instead of using the STFT magnitude coefficients |Xk(n)| in (1), the STFT log-magnitude coefficients were used, defined as,
$$ Y_{k} \left(n\right) = \log(\lambda \lvert X_{k} \left(n\right) \rvert + 1)\ , $$
where λ is a compression parameter and a constant value of 1 is added to ensure a positive value for the logarithm. The resulting LSF ODF is defined as
$$ \textrm{LSF} \left(n \right) \ = \ \sum_{k=1}^{\frac{N}{2} - 1} \left\lbrace H \left(Y_{k} \left(n\right) - Y_{k} \left(n-1\right) \right) \right\rbrace. $$
This ODF generally gives better results compared to the SF ODF. We use the LSF ODF for comparison with the proposed method as it is generally considered the baseline non-data-driven method [20]. Optionally, a filterbank according to the Western music scale can be applied as a pre-processing step before the LSF ODF calculation [20].
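As a concrete illustration, the baseline LSF ODF in (1)–(5) can be sketched in a few lines of NumPy. This is a minimal sketch, not the reference implementation of [20]: the Hann window, the frame layout, and the function names (`stft_logmag`, `lsf_odf`) are our own illustrative choices.

```python
import numpy as np

def stft_logmag(x, N, hop, lam=1.0):
    """STFT log-magnitude coefficients Y_k(n) = log(lam*|X_k(n)| + 1).

    Frame layout and Hann window are illustrative choices.
    Returns an array of shape (n_frames, N/2 - 1), keeping bins
    k = 1 .. N/2 - 1 as in the summation limits of (1) and (5).
    """
    w = np.hanning(N)
    n_frames = 1 + (len(x) - N) // hop
    Y = np.empty((n_frames, N // 2 - 1))
    for n in range(n_frames):
        frame = w * x[n * hop : n * hop + N]
        X = np.fft.rfft(frame)
        Y[n] = np.log(lam * np.abs(X[1 : N // 2]) + 1.0)
    return Y

def lsf_odf(Y):
    """LSF(n): half-wave rectified first-order difference of Y, summed over k."""
    d = Y[1:] - Y[:-1]
    return np.sum((d + np.abs(d)) / 2.0, axis=1)  # H(x) = (x + |x|)/2
```

Running this on a signal that switches from silence to a steady tone yields an ODF that is exactly zero during the silent part and peaks at the frames containing the transition.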
Proposed note onset detection method
Idea: transient vs. steady-state components
A musical note can be considered as a signal consisting of a transient component followed by a steady-state component, where each component can be expressed as a sum of sinusoids. From a sinusoidal modeling perspective, the key difference between the transient and steady-state components of a note is the number of sinusoids needed to represent each. In fact, a transient requires a much larger number of sinusoids than a steady-state component to be accurately approximated. For some percussive instruments, the attack is nearly an impulse signal which conceptually corresponds to the sum of an infinite number of sinusoids. On the other hand, for non-percussive instruments, the transient is stretched over a longer time interval in which it typically also exhibits a broadband behavior which again leads to a representation requiring many sinusoids. This observation also follows from the definition of transients as short time intervals in which the statistical and energy properties of the signal change rapidly [2]. Consequently, the magnitude spectrogram of a musical note shows transients (attacks) that are spectrally less sparse than the subsequent steady-state (tonal) part. Spectral sparsity is considered here along the vertical (frequency) dimension of a magnitude spectrogram and indicates if few or many sinusoids are needed to represent the signal in that particular time frame.
In the following three subsections, we will explain how this idea of spectral sparsity can be realized within the framework of the three-step non-data-driven NOD scheme introduced earlier. At this point, let us start by defining the input to the first step of this scheme. First, the analyzed music signal is divided into overlapping windowed frames and the STFT is computed as in (3). The STFT log-magnitude coefficients Yk(n) are then calculated using (4), i.e., in the same manner as for the LSF ODF.
Pre-processing: coefficients subset
Before calculating the NINOS2 ODF, we apply a pre-processing step to maximize the differentiation between the STFT log-magnitude spectra of frames containing onsets and other frames in terms of spectral sparsity. Instead of measuring the spectral sparsity of the entire STFT log-magnitude spectrum of each frame, only a subset of the STFT log-magnitude coefficients will be used. This is motivated as follows.
For the majority of pitched musical instruments, when looking at the magnitude spectrogram of an isolated note recording, it can be observed that the frequency components corresponding to the note's fundamental frequency and harmonics exhibit high energy during transients followed by a slow energy decrease after the transient. On the other hand, the remaining frequency components, i.e., those not related to the note's fundamental frequency or harmonics, show a remarkable increase in energy during transients followed by a relatively fast energy decay, i.e., faster than the post-transient energy decay of the harmonics.
To illustrate this observation, Figs. 2–4 compare the time variation over 600 signal frames of the energy per frequency bin, after splitting the frequency bins for each time frame into two subsets: one subset containing the frequency bins with low energy (LE Bins, upper subplots in Figs. 2–4) and another subset containing the frequency bins with high energy (HE Bins, middle subplots in Figs. 2–4). This is shown in Figs. 2–4 for three different instruments: electric guitar (Fig. 2), cello (Fig. 3) and trumpet (Fig. 4). It can be clearly observed that the energy variation in the LE Bins exhibits more pronounced peaks in frames containing onsets (marked by the vertical green lines in Figs. 2–4), compared to the energy variation in HE Bins. While the energy in HE Bins does not always increase when reaching an onset, the energy in LE Bins does. Apart from this, it can also be observed that the shape of the energy decay curve after an onset seems to be characteristic for the type of instrument. For instance, the LE Bin energy curves for the electric guitar and the cello do not show an explicit sustain component for their notes while the same curves for the trumpet do show a clear distinction between the transient (attack and decay) and steady-state (sustain and release) components. We can also notice, again from the LE Bin energy curves, that the electric guitar has a faster attack and decay differentiating it from the other two instruments shown here. Finally, the lower subplots for each instrument in Figs. 2–4 show the time variation of the ℓ1-norm of the vector of STFT log-magnitude coefficients, again separated into the subsets of HE Bins and LE Bins. Indeed, the ℓ1-norm is one of the simplest sparsity measures [31] and illustrates in these examples that spectral sparsity may be used to detect onsets when applied to the subset of LE Bins.
Temporal variation of signal energy per frequency bin and of spectral sparsity for electric guitar (major seventh stopped) excerpt. Note onsets are indicated by vertical lines. (Top and middle) Low- and high-energy log-magnitude spectrograms. (Bottom) Low- and high-energy STFT log-magnitude coefficient vector ℓ1-norm variation
Temporal variation of signal energy per frequency bin and of spectral sparsity for cello (non-vibrato) excerpt. Note onsets are indicated by vertical lines. (Top and middle) Low- and high-energy log-magnitude spectrograms. (Bottom) Low- and high-energy STFT log-magnitude coefficient vector ℓ1-norm variation
In summary, removing high-energy STFT log-magnitude coefficients before calculating a spectral-sparsity-based ODF, thus neglecting coefficients corresponding to fundamental frequencies and harmonics, will enhance the discriminative power of the ODF. It is important to emphasize how fundamentally different this pre-processing step is from existing NOD methods. Indeed, pre-processing often emphasizes signal components in HE Bins, considering components in LE Bins to be less relevant for NOD.
The frequency bin subset selection is implemented as follows: the N/2−1 STFT log-magnitude coefficients Yk(n), k=1,…,N/2−1 for frame n are sorted in ascending order and only the first J out of N/2−1 coefficients are used afterwards, with
$$ J = \left\lfloor \frac{\gamma}{100} \left(\frac{N}{2}-1\right) \right\rfloor, $$
where ⌊·⌋ denotes the floor function and γ represents the percentage of (one-sided) STFT frequency bins to be contained in the LE Bins subset. These J coefficients are collected in a length-J vector y(n) and their frequency bin indices are collected in the set \(\mathcal {I}_{\text {LE}}(n)\), i.e.,
$$ \mathbf{y}(n) = \left[ Y_{k}(n) \right],\ k \in \mathcal{I}_{\text{LE}}(n). $$
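The LE-bin selection in (6)–(7) amounts to a sort-and-truncate operation per frame. The following sketch (the function name `le_bins` is ours) assumes the input is the length-(N/2−1) vector of STFT log-magnitude coefficients of one frame:

```python
import numpy as np

def le_bins(Y_n, gamma):
    """Select the gamma% lowest-energy STFT log-magnitude coefficients (6)-(7).

    Y_n  : length-(N/2 - 1) vector of coefficients Y_k(n), k = 1 .. N/2-1
    gamma: percentage of one-sided STFT bins to keep in the LE subset
    Returns (y, idx): the length-J vector y(n) and the bin index set I_LE(n).
    """
    J = int(np.floor(gamma / 100.0 * len(Y_n)))
    order = np.argsort(Y_n, kind="stable")   # ascending coefficient values
    idx = order[:J] + 1                      # +1 because bins start at k = 1
    return Y_n[order[:J]], idx
```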
ODF: spectral sparsity feature
We now provide a detailed derivation of the NINOS2 feature and its resulting ODF, starting from the idea of spectral sparsity introduced in Section 3.1. First, after having collected the STFT log-magnitude coefficients of the LE Bins subset during pre-processing, a spectral sparsity feature denoted in [26] as Identifying Note Onsets based on Spectral Sparsity (INOS2) is introduced. The INOS2 feature is actually an inverse-sparsity measure, designed to yield large values for non-sparse frames and thus to highlight possible onset locations in time. Considering the INOS2 feature per frame n, the INOS2 ODF is obtained,
$$ \Upsilon_{\ell_{2}\ell_{4}}(n) = \frac{\lVert \mathbf{y}(n) \rVert^{2}_{2} }{\lVert \mathbf{y}(n) \rVert_{4}} = \frac{\sum_{k \in \mathcal{I}_{\text{LE}}(n)} Y_{k}^{2}(n)}{\left(\sum_{k \in \mathcal{I}_{\text{LE}}(n)} Y_{k}^{4}(n)\right)^{\frac{1}{4}}}. $$
We will first argue why the INOS2 feature can indeed be interpreted as an inverse sparsity measure, before considering how it could be further improved for the purpose of NOD. Even though sparsity is usually defined as the number of non-zero elements in a vector, which corresponds to its ℓ0-pseudonorm, there is no universal consensus for defining and measuring sparsity [31]. When measuring sparsity of signal vectors that contain noise or other artifacts, the ℓ0-pseudonorm becomes useless since it cannot discriminate between small ("near-to-zero") and large ("far-from-zero") elements in the vector. Two basic conditions that should be satisfied by any sparsity measure have been stated in [31]: the most sparse signal is the one having all its energy concentrated in one sample while the least sparse is the one having its energy uniformly distributed over all its samples. For example, according to this definition, a measure S that satisfies the following inequalities for vectors of equal length would indeed be considered a sparsity measure,
$$\begin{array}{*{20}l} S_{max} &= S([0,0,0,0,1])> \dots >S([0,0,1,1,1]) \\ & > \dots >S([1,1,1,1,1]) = S_{min}. \end{array} $$
Using this dummy example, it can be verified that the INOS2 feature in (8) is indeed an inverse sparsity measure. However, this observation only holds if the vectors in comparison have similar energies. For instance, according to the INOS2 feature, the vector v=[0,0,100,100,100] is less sparse than the vector [1,1,1,1,1], which clearly contradicts the basic conditions for a sparsity measure given above. The explanation for this behavior is that the INOS2 feature is in fact a joint energy and inverse sparsity measure. This becomes clear when rewriting (8) as follows,
$$ \Upsilon_{\ell_{2}\ell_{4}}(n) = \lVert \mathbf{y}(n) \rVert_{2} \cdot \frac{\lVert \mathbf{y}(n) \rVert_{2} }{\lVert \mathbf{y}(n) \rVert_{4}}. $$
The first factor ∥y(n)∥2 is the vector ℓ2-norm which directly relates to the signal frame's energy. As onsets are usually accompanied by an increase in energy, see, e.g., Figs. 2–4, this property of the INOS2 feature is actually desirable for NOD. In fact, the use of a pure energy measure was proposed as one of the earliest features for NOD, resulting in an ODF known as the envelope follower [1]. The second factor in (10), i.e., the ratio between the signal frame's ℓ2-norm and ℓ4-norm, can be understood to measure inverse sparsity by applying the unit-ball concept [2] as shown in Fig. 5.
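The counter-example from the text is easy to verify numerically. The sketch below (the helper name `inos2` is ours) evaluates (8) for the two vectors and confirms that the high-energy vector receives the larger (i.e., "less sparse") INOS2 value, while a unit impulse, the sparsest unit-energy vector, evaluates to 1.

```python
import numpy as np

def inos2(y):
    """INOS2 feature (8): squared l2-norm divided by the l4-norm."""
    y = np.asarray(y, dtype=float)
    return np.linalg.norm(y, 2) ** 2 / np.linalg.norm(y, 4)

# Scaling counter-example from the text: despite being "sparser" in shape,
# the high-energy vector gets the larger INOS2 value than the uniform one.
v_hi = [0, 0, 100, 100, 100]
v_lo = [1, 1, 1, 1, 1]
```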
Temporal variation of signal energy per frequency bin and of spectral sparsity for trumpet (Bach) excerpt. Note onsets are indicated by vertical lines. (Top and middle) Low- and high-energy log-magnitude spectrograms. (Bottom) Low- and high-energy STFT log-magnitude coefficient vector ℓ1-norm variation
2-D unit-ball illustration of the relation between the ℓ2-norm and ℓ4-norm
The figure shows a 2-D coordinate system where each point is representing a vector \(\mathbf {v} \in \mathbb {R}^{2}\). Focusing on the first quadrant, it can be noticed that points near the X or Y axis represent sparse vectors; conversely, while moving away from the axes towards the 45∘ line (e.g., moving away from the X axis in the direction of the (solid) red arrow), vectors get less sparse. By applying the basic conditions for a sparsity measure given above, the most sparse vectors correspond to points lying on the axes, e.g., [0,1] and [1,0], while points lying on the ±45∘ lines correspond to the least sparse vectors, e.g., \([ \sqrt {0.5},\sqrt {0.5}]\) lying on the unit circle. Now, let us consider the relation between ℓp-norms for different p in this 2-D coordinate system. The unit circle represents the ℓ2-norm unit ball, i.e., the set of points representing vectors having an ℓ2-norm equal to 1. For each of the points on the ℓ2-norm unit ball, we can graphically evaluate the ℓ4-norm by looking at the ℓ4-norm unit ball and its scaled versions each containing one of the points. We observe that the scaled ℓ4-norm ball gets smaller, shrinking along the dashed red arrow, while considering a point moving along the (solid) red arrow. This means the ratio of the ℓ2-norm and the ℓ4-norm increases as the vector becomes less sparse.
Apart from the two basic conditions discussed above, a number of additional desirable properties of a sparsity measure have been proposed in [31]. These properties and their interpretation for a sparsity-based NOD feature are briefly summarized here:
Robin Hood: Taking from the richest (highest-energy signal samples) and giving to the poor (lowest-energy signal samples) decreases the sparsity of a signal. In the analogy with spectral sparsity in the NOD context, the transient component of a note represents a scenario with many poor among whom energy is evenly distributed, after which the energy shifts to the harmonics in the steady-state component, which represents a scenario with few rich.
Scaling: Sparsity is independent of signal scaling, e.g., S([0,0,1,1,1])=S([0,0,100,100,100]). This is an important property in terms of decoupling an NOD feature into an energy measure and sparsity measure, as discussed above when factorizing the INOS2 feature as in (10).
Rising Tide: Adding a constant value to each of the signal samples decreases sparsity.
Cloning: Sparsity measures should preferably assign the same value to cloned signals, i.e., \(S([0, 1]) = S([0, 0, 1, 1]) = S([0, 0, 0, 1, 1, 1]) = \dots\). As will be discussed below, this property is relevant to reduce the amount of computations needed for the NOD feature calculation and to make the resulting ODF insensitive to the use of frames with different frame lengths.
Bill Gates: As an individual gets richer, sparsity increases. In the context of NOD, the more the steady-state component of a note shows a pronounced tonal behavior, i.e., the more its energy is concentrated in a few frequency components, the easier it will be to discriminate the transient from the steady-state component using a spectral sparsity measure.
Babies: Appending zeros to a signal increases its sparsity.
The ratio of a signal vector's ℓ4-norm and ℓ2-norm, i.e., the inverse of the second factor of the INOS2 feature in (10), can be understood to satisfy all of the above properties except for the cloning property. In other words, this ratio is sensitive to changes in the vector length, i.e., J in the context of the INOS2 feature. This is an undesirable property for an NOD feature for several reasons. Firstly, it makes the choice of a detection threshold and peak-picking parameters dependent on the frame size N and pre-processing parameter γ. Secondly, it makes the feature incompatible with a processing strategy in which successive signal frames may have different lengths to achieve detections with different time resolutions (which is however outside the scope of this paper). Finally, it would yield different feature values for the cases that the full frequency grid or only the positive frequency grid is used in the feature calculation, as briefly touched upon earlier [1].
For these reasons, the INOS2 feature can be improved by normalizing the inverse-sparsity factor in (10) such that it yields a value ∈[0,1] independently of the length J of the vector y(n). Let us denote the inverse-sparsity factor in (10) as \(\mathcal {S}(n)= \lVert \mathbf {y}(n) \rVert _{2} / \lVert \mathbf {y}(n) \rVert _{4}\) and its (theoretically achievable) minimum and maximum value as \(\mathcal {S}_{\min }\) and \(\mathcal {S}_{\max }\), respectively. A normalized version of \(\mathcal {S}(n)\), i.e., \(\mathcal {\bar S}(n) \in [0,1]\) can then be obtained as follows,
$$ \mathcal{\bar S}(n) = \frac{\mathcal{S}(n) - \mathcal{S}_{\min}}{\mathcal{S}_{\max}-\mathcal{S}_{\min}}. $$
The values \(\mathcal {S}_{\min }\) and \(\mathcal {S}_{\max }\) can be found by considering two extreme cases of an arbitrarily scaled length-J vector: the sparsest possible vector \(\left [ a,0,0,\dots,0\right ]\) having \(\mathcal {S}(n) = 1 \triangleq \mathcal {S}_{\min }\) and the least sparse vector \(\left [ a,a,a,\dots,a\right ]\) having \(\mathcal {S}(n) = \sqrt [4]{J} \triangleq \mathcal {S}_{\max }\). By substituting these values in (11) we obtain that
$$ \mathcal{\bar S}(n) = \frac{\mathcal{S}(n) -1}{\sqrt[4]{J} -1} = \frac{\frac{\lVert \mathbf{y}(n) \rVert_{2}}{\lVert \mathbf{y}(n) \rVert_{4}} -1}{\sqrt[4]{J} -1}. $$
Combining this normalized inverse-sparsity factor with the energy factor in (10) finally results in the normalized version of the INOS2 feature denoted as NINOS2. Considering the NINOS2 feature per frame n, the NINOS2 ODF is obtained,
$$ \aleph_{\ell_{2}\ell_{4}}(n) \ = \frac{\lVert \mathbf{y}(n) \rVert_{2}}{\sqrt[4]{J} -1} \left(\frac{\lVert \mathbf{y}(n) \rVert_{2}}{\lVert \mathbf{y}(n) \rVert_{4}} -1 \right). $$
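A per-frame evaluation of the NINOS2 ODF (13) can be sketched as follows; the function name and the convention of returning 0 for an all-zero frame are our own choices.

```python
import numpy as np

def ninos2(y):
    """NINOS2 ODF value (13) for one frame's LE-bin vector y(n)."""
    y = np.asarray(y, dtype=float)
    J = len(y)
    l2 = np.linalg.norm(y, 2)
    l4 = np.linalg.norm(y, 4)
    if l4 == 0.0:                 # all-zero frame: define the ODF as 0
        return 0.0
    # energy factor times the normalized inverse-sparsity factor of (12)
    return l2 / (J ** 0.25 - 1.0) * (l2 / l4 - 1.0)
```

As a sanity check, for a uniform vector the inverse-sparsity factor normalizes to 1, so the ODF reduces to the frame's ℓ2-norm, while for the sparsest possible vector it is 0.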
This definition of the NINOS2 ODF slightly differs from the original definition in [26] because of the different normalization procedure. The two definitions deviate in particular for relatively small values of J, and only the definition proposed here in (13) guarantees that the inverse-sparsity factor \(\mathcal {\bar S}(n) \in [0,1]\).
Finally, in this paper, we also propose a new sparsity-based NOD feature, starting from the observation that the above unit-ball demonstration can be applied to any ratio of norms, ∥y(n)∥p/∥y(n)∥q, where p<q. A particularly interesting choice is p=1,q=2, since in this case the energy factor ∥y(n)∥2 cancels out with the denominator of the inverse sparsity factor ∥y(n)∥1/∥y(n)∥2, and the joint energy and inverse sparsity measure reduces to the ℓ1-norm, i.e., (compare to (10)),
$$ \Upsilon_{\ell_{1}}(n) = \lVert \mathbf{y}(n) \rVert_{2} \cdot \frac{\lVert \mathbf{y}(n) \rVert_{1} }{\lVert \mathbf{y}(n) \rVert_{2}} = \lVert \mathbf{y}(n) \rVert_{1}. $$
When applying the same normalization strategy as above, the ℓ1-norm version of the NINOS2 ODF is readily obtained,
$$ \aleph_{\ell_{1}}(n) \ = \frac{\lVert \mathbf{y}(n) \rVert_{2}}{\sqrt{J} -1} \left(\frac{\lVert \mathbf{y}(n) \rVert_{1}}{\lVert \mathbf{y}(n) \rVert_{2}} -1 \right). $$
However, by comparing the expressions in (14) and (15), it is clear that the non-normalized expression is much cheaper to compute due to the cancellation of the ℓ2-norm in (14). Therefore, we will further consider the INOS2 (ℓ1) ODF in (14) as a computationally appealing alternative to the NINOS2 (ℓ2ℓ4) ODF in (13).
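For completeness, here are sketches of the INOS2 (ℓ1) ODF (14) and its normalized variant (15); the function names are illustrative. Note how (14) indeed reduces to a single ℓ1-norm, which is what makes it computationally appealing.

```python
import numpy as np

def inos2_l1(y):
    """INOS2 (l1) ODF (14): the joint measure collapses to the l1-norm."""
    return float(np.sum(np.abs(y)))

def ninos2_l1(y):
    """Normalized l1 variant (15), shown for reference; 0 for all-zero frames."""
    y = np.asarray(y, dtype=float)
    J = len(y)
    l1 = np.sum(np.abs(y))
    l2 = np.linalg.norm(y, 2)
    if l2 == 0.0:
        return 0.0
    return l2 / (J ** 0.5 - 1.0) * (l1 / l2 - 1.0)
```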
Peak-picking
To allow for a fair comparison of the proposed ODF with the state of the art, the same peak-picking procedure as used with the baseline non-data-driven NOD method in [21] is adopted here. According to this procedure (shown here for the NINOS2 (ℓ2ℓ4) ODF and similar for the other ODFs), an onset is detected in the nth signal frame if all of the following conditions are satisfied:
$$ (1)\quad \aleph(n) = \max_{l}\ \aleph(n+l), \quad \text{with} \quad l = -\alpha,\dots,+\beta, $$
$$ (2)\quad \aleph(n) \geq \frac{1}{a+b+1} \sum_{l=-a}^{+b} \aleph(n+l) + \delta, $$
$$ (3)\quad n-p > \Theta, $$
where α, β, a, b, and Θ are the peak-picking parameters, referred to as before maximum, after maximum, before average, after average, and combination width, respectively, all counted in frame units, and p is the frame index of the previously detected onset. The interpretation of the above conditions is that (1) an onset should correspond to the highest ODF amplitude in the neighborhood of the frame index n, (2) an onset's ODF amplitude should be an amplitude offset δ above its neighborhood average ODF amplitude, and (3) an onset should be more than Θ frames apart from the closest preceding onset.
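The three peak-picking conditions can be sketched as a direct loop over ODF frames. This is an illustrative implementation, not the code of [21]; in particular, the handling of border frames (skipping frames whose neighborhoods do not fit inside the ODF) is our own choice.

```python
import numpy as np

def pick_onsets(odf, alpha, beta, a, b, delta, theta):
    """Peak picking with conditions (1)-(3): local maximum over
    [n-alpha, n+beta], amplitude at least delta above the local mean over
    [n-a, n+b], and more than theta frames after the previous detection.
    Returns the frame indices of the detected onsets."""
    onsets = []
    p = -theta - 1                               # last detected onset frame
    lo, hi = max(alpha, a), len(odf) - max(beta, b)
    for n in range(lo, hi):
        c1 = odf[n] == np.max(odf[n - alpha : n + beta + 1])
        c2 = odf[n] >= np.mean(odf[n - a : n + b + 1]) + delta
        c3 = n - p > theta
        if c1 and c2 and c3:
            onsets.append(n)
            p = n
    return onsets
```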
While the peak-picking parameter values are kept the same as in [21], the combination width Θ is set equal to the detection window length which is the maximum number of frames in which a single ground-truth onset could occur. This value depends on the frame overlap and is calculated using the following relations:
$$\begin{array}{*{20}l} h &= \lfloor (1-q) N \rceil, \end{array} $$
$$\begin{array}{*{20}l} r &= f_{s} / h, \end{array} $$
$$\begin{array}{*{20}l} \Theta &= \lceil r N / f_{s} \rceil, \end{array} $$
where ⌊·⌉ and ⌈·⌉ denote the nearest integer and ceiling functions, h is the frame hop size in samples, q∈[0,1] is the frame overlap factor, N is the frame size in samples, r is the frame rate, fs is the sampling frequency, and Θ as defined above is the number of frames to be skipped after one onset detection before aiming to detect a new onset. It is advisable to choose the value of Θ greater than the value given in (18) in case of instruments with very long attacks.
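Relations (16)–(18) can be coded directly; the function name is ours. Note that rN/fs simplifies to N/h, so Θ is essentially the rounded-up number of hops spanned by one frame length.

```python
import math

def combination_width(N, q, fs):
    """Hop size h, frame rate r, and combination width Theta from (16)-(18)."""
    h = round((1 - q) * N)          # (16): nearest-integer hop size in samples
    r = fs / h                      # (17): frame rate
    theta = math.ceil(r * N / fs)   # (18): frames spanned by one detection window
    return h, r, theta
```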
Onset detection parameters
For a complete understanding of the new non-data-driven NOD method based on the (N)INOS2 feature and of the performance measures explained further, the most important parameters impacting the method's NOD performance are summarized here:
Processing frame size (N): It should be larger than a single period of the signal [1] yet small enough to capture transients.
Detection resolution: It depends on the frame rate r which is inversely proportional to the hop size h.
Processing mode: The detection algorithm could be run in either offline or online mode. In the latter, the peak-picking parameters β and b are set to zero.
Evaluation window: It enlarges the detection window implementing the onset ground truth, in order to handle the lack of precision inherent in the onset annotation process, i.e., onsets may occur slightly before or after the annotated ground-truth onset.
Experiment setup and evaluation
As mentioned earlier, the majority of NOD results found in literature are obtained by testing on manually annotated datasets. A manual annotation process consists in asking music experts to listen to a number of music excerpts and to inspect their waveforms and spectrograms in order to find the onset times. The final annotation is obtained by averaging the different experts' onset times. Some NOD results reported in literature are instead obtained by testing on automatically annotated datasets [26, 32]. Various ways of achieving automatic annotation have been proposed [33, 34], e.g., by using a musical instrument equipped with electromechanical sensors or by synthesizing music signals starting from isolated note recordings. The choice to test NOD methods with either manually or automatically annotated datasets depends on several factors. The main advantage of manual annotation over automatic annotation is that manually annotated onsets represent perceived note onsets rather than onsets based on some non-perceptual detection threshold. This advantage however only holds to some extent, as manual annotation is often based on audiovisual rather than purely auditive inspection of an audio file. Arguments in favor of automatic annotation are its ease of deployment in generating large annotated datasets, as manual annotation is time-consuming and labor-intensive, and its potential to yield objective and systematic results independent of manual annotation errors or subjective differences across human annotators.
With these arguments in mind, in this paper, we use automatically-annotated semi-synthetic datasets. To this end, a Matlab tool called "MixNotes" has been developed by the authors to generate music excerpts from isolated note recordings together with their respective ground-truth onset annotation. The tool loads isolated note and/or chord recordings from a database and then automatically determines the ground-truth note onsets (and offsets) based on a short-term energy measure. This can indeed be done easily and accurately when notes are individually recorded and avoids the dependency of the annotation on the musical note context. Here, we used a simple energy measure: the onset is chosen to be the earliest point in time n_o at which the absolute value of the signal amplitude |x(n_o)| satisfies that
$$ \lvert x(n_{o}) \rvert \geq \frac{\rho}{100} \cdot \max_{n} \ |x(n)|, \quad 0 <\rho \ll 100, $$
where ρ is a percentage of the note's highest amplitude and should define the amplitude threshold of a just audible sound. The tool then generates a melody by automatically mixing the selected note recordings while imposing a specified minimum time spacing between successive onsets. An annotation file for the generated melody is automatically created using the isolated note annotations with the timing information from the mixing process.
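The annotation rule (19) reduces to a first-crossing search on the absolute amplitude. A minimal sketch (the function name `annotate_onset` is ours, not the actual MixNotes code):

```python
import numpy as np

def annotate_onset(x, rho):
    """Ground-truth onset per (19): earliest sample index whose absolute
    amplitude reaches rho% of the note's peak absolute amplitude."""
    x = np.asarray(x, dtype=float)
    thr = rho / 100.0 * np.max(np.abs(x))
    return int(np.argmax(np.abs(x) >= thr))   # argmax returns the first True
```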
For the evaluation of the NOD methods, in this paper, we have created an automatically annotated semi-synthetic dataset using the MixNotes tool with isolated note recordings of various instruments and playing styles, starting from the recordings in the McGill University Master Samples (MUMS) library [35]. Before annotating the isolated note recordings and generating the semi-synthetic music excerpts, all files in the MUMS library were checked for compatibility with the NOD problem. The files were manually checked and retained only when containing just one instance of the respective note. In summary, the harp was excluded as its recordings were mostly composed of many notes, e.g., a glissando recording, in one file. All the percussions' rolling recordings and the library folder "Percussions patterns and dance" were excluded for the same reason. Moreover, the skins and metals percussions folders were each divided into 3 folders—respecting the alphabetical order of file names—as these contain many more note recordings than other instruments. This resulted in 138 folders containing different instruments with different playing styles. For a more structured performance evaluation, the folders were grouped using a similar grouping as used in the MIREX NOD dataset [19]. The grouping scheme is shown in Table 1 using similar naming as the MUMS folder names. The bowed vibraphone instrument is excluded from the grouping as it does not fit in any of the groups.
Table 1 Instrument grouping
The proposed and baseline non-data-driven NOD methods are evaluated on these 138 folders covering a large number of instruments (electric, acoustic, wind, brass, strings, percussions, etc.) and playing styles (notes, chords, pizzicato, sul ponticello, bowed, screams, flutter, etc.). For each instrument and playing style, i.e., each separate folder, three test melodies are generated, each consisting of a sequence of 50 randomly selected notes. The first melody is used for tuning the NOD algorithm parameters whereas the other two are used for performance evaluation. The time spacing between two notes in each generated melody is randomly chosen from a uniform distribution on the interval [5250,44100] samples which, at a sampling frequency fs=44.1 kHz, corresponds to [0.12,1] s. These values were chosen to limit the onsets per minute to 500, which is slightly above the maximum onsets per minute of an average pop song with a tempo of 120 beats per minute and 4 notes per beat, i.e., 16th notes.
The above dataset generation procedure is repeated four times to produce four datasets with distinct properties relevant to NOD performance evaluation:
M dataset: monophonic mixing.
MR8 dataset: monophonic mixing with forced repeated notes where every note is repeated 8 times.
P dataset: polyphonic mixing.
PR8 dataset: polyphonic mixing with forced repeated notes where every note is repeated 8 times.
The M and P datasets (and equivalently the MR8 and PR8 datasets) are composed of the same note sequences with the same onset times and time spacings. However, in the M and MR8 datasets, an exponential amplitude decay is applied to the sustain part of the notes, such that each note has faded out before the next note starts, effectively forcing a monophonic music excerpt. In the P and PR8 datasets, successive notes are allowed to overlap in time. The use of repeated notes in MR8 and PR8 is mainly to assess the algorithms' performance in detecting onsets of successive notes that are sharing a considerable number of harmonics with insufficient increase in magnitude at their onset. These latter datasets are expected to be challenging for the baseline algorithm based on spectral differences.
An important issue when assessing detection performance is to decide how true positives and negatives are counted. Here, we adopt the same approach as used in the evaluation of the state-of-the-art method [22], where two onsets detected within one detection window are counted as one true and one false positive, and a detected onset can count only for one detection window.
The most common way to compare NOD methods is by evaluating their relative F1-scores. The F1-score is defined as the harmonic mean of the precision (P) and the recall (R) given by:
$$ F1 = \frac{2 P R}{P+R}, $$
with the precision being the ratio of correctly detected onsets ("true positives") to the total number of points detected as onsets, and the recall comparing the number of correctly detected onsets to the total number of ground-truth onsets under test.
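For concreteness, the F1-score can be computed from raw detection counts as follows (a minimal sketch; the function name and the example counts are illustrative, not taken from the paper):

```python
def f1_score(true_pos, false_pos, false_neg):
    """F1 as the harmonic mean of precision and recall."""
    precision = true_pos / (true_pos + false_pos)  # correct detections among all detections
    recall = true_pos / (true_pos + false_neg)     # correct detections among all ground-truth onsets
    return 2 * precision * recall / (precision + recall)

# e.g. 40 correctly detected onsets, 10 spurious detections, 10 missed onsets
print(f1_score(40, 10, 10))  # 0.8
```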
Since a true positive could occur anywhere within the detection window, a new measure is developed to determine how large the detection window should be in order to achieve the reported F1-score. For each true positive in the music excerpt under test, the time difference of the ground-truth onset frame to the first frame that is covered by the detection window is saved, and then the Detections standard deviation σd, i.e., the standard deviation of the time difference of the ground-truth onset to the start of the detection window, is computed for each music excerpt used in the evaluation. When σd is small, the algorithm will be detecting onsets in the same relative position to the start of the detection window. This is an important measure that reflects how well the algorithm detects the different onset times relative to each other.
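The detections standard deviation described above can be sketched as follows (the helper name and input layout are assumptions for illustration; the paper does not publish this code):

```python
import statistics

def detections_std(onset_times, window_start_times):
    """Std. dev. of (ground-truth onset - detection-window start) over all true positives."""
    diffs = [o - w for o, w in zip(onset_times, window_start_times)]
    return statistics.pstdev(diffs)

# If every true positive lands at the same offset inside its detection window,
# sigma_d is (numerically) close to zero: the algorithm is consistent in time.
print(detections_std([1.00, 2.00, 3.00], [0.98, 1.98, 2.98]))
```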
Computational complexity and memory usage
The running time of a non-data-driven NOD method is comparable when using the LSF or NINOS2 ODFs and considerably shorter when using the INOS2 (ℓ1) ODF. Whereas the computation of the NINOS2 feature is slightly more expensive than the LSF feature, mainly due to the required pre-processing, the opposite is true for the memory usage as the NINOS2 feature does not require any information from the previous frame. Here, we compare the type and number of operations and the amount of memory needed for the computation of the three features, omitting the calculation of the STFT log-magnitude coefficients and the peak picking which are shared by all methods. Note that for all features we only use half minus one, i.e., \(\frac {N}{2}-1\), of the FFT coefficients.
The number of operations and memory positions needed for the LSF feature calculation is
$$\begin{array}{*{20}l} \mathcal{C}_{\textrm{LSF}} &= \left(\frac{N}{2}-1\right) Sub + \left(\frac{N}{2}-2\right) Add \\ & + \left(\frac{N}{2}-1\right) Comp\\ \mathcal{M}_{\textrm{LSF}} &= N-2 \end{array} $$
with Sub, Add, and Comp representing scalar subtractions, additions, and comparisons respectively. Here, the comparisons are needed for the half-wave rectification. The number of operations and memory positions needed for the (N)INOS2 feature calculation is (with \(J < \frac {N}{2}-1\), see (6)),
$$\begin{array}{*{20}l} \mathcal{C}_{{\text{NINOS}_{2}}(\ell_{2}\ell_{4})} &= (2J+1) Mult + 2(J-1) Add + 3\ Sqrt\\ & + 1\ Sub + 1\ Div \\ & + \left(\frac{N}{2}-1\right) \log\left(\frac{N}{2}-1\right) Comp\\ \mathcal{C}_{{\textrm{INOS}_{2}}(\ell_{1})} &= (J-1) Add \\ & + \left(\frac{N}{2}-1\right) \log\left(\frac{N}{2}-1\right) Comp\\ \mathcal{M}_{{\textrm{(N)INOS}_{2}}} &= \frac{N}{2}-1 \end{array} $$
with Mult, Sqrt, and Div representing scalar multiplications, square roots, and divisions, respectively. Here, the comparisons are needed for the frequency-bin sorting in the pre-processing. Note that the computation of the ℓ1-norm in (14) does not require the calculation of absolute values since the elements of y(n) are positive by construction, see (4).
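The operation counts above can be tallied programmatically; a sketch follows (the log base is assumed to be 2, as is typical for sort comparison counts, since the text leaves it unspecified):

```python
import math

def lsf_ops(N):
    """Operation counts for the LSF feature, per frame."""
    half = N // 2 - 1
    return {"sub": half, "add": half - 1, "comp": half}

def inos2_l1_ops(N, J):
    """Operation counts for the INOS2 (l1) feature, per frame (J < N/2 - 1)."""
    half = N // 2 - 1
    # comparisons come from sorting the frequency bins in the pre-processing
    return {"add": J - 1, "comp": half * math.log2(half)}

print(lsf_ops(2048))  # {'sub': 1023, 'add': 1022, 'comp': 1023}
```

With N=2048, the LSF feature needs roughly a thousand each of subtractions, additions, and comparisons per frame, while INOS2 (l1) trades most of those for the sorting comparisons shared with NINOS2.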
The non-data-driven NOD methods presented here are dependent on a number of parameters: input preparation, pre-processing, reduction, and peak picking parameters. Finding the optimal combination of parameter values is not an easy task. In this paper, the different parameters are tuned in a sequential manner; in other words, the parameters are selected one after another. We will now summarize how the different parameters have been tuned, making use of the tuning dataset introduced in Section 4.1.
The input test melodies are sampled with a sampling frequency fs=44.1 kHz. Each melody is divided into overlapping frames of N=2048 samples (46.4 ms). This frame size value was chosen by looking at the resulting spectrograms for three different frame sizes: 1024, 2048, and 4096. These are all powers of 2 to benefit from using the fast Fourier transform algorithm. By repeating this for various instruments, N=2048 was found the best trade-off between frequency and time resolution in (visually) emphasizing onsets in the spectrogram. By maximizing the F1-score over the tuning dataset, we found the Hanning window offering a better performance than the Hamming window, and the Discrete Fourier Transform outperformed the Discrete Cosine Transform in computing the STFT coefficients. The compression parameter λ was set to 1 as in the Madmom implementation [36]. A frame overlap of 90%, i.e., q=0.9, is used similarly to the baseline method, resulting in a frame rate of r=215 frames per second or a 4.6 ms detection resolution which is better than the temporal hearing resolution (≈10 ms). The evaluation window is set to ±25 ms around the ground-truth onset.
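The frame rate quoted above follows directly from these parameters, assuming a hop size of N(1-q) samples between frame starts:

```python
fs = 44100   # sampling frequency (Hz)
N = 2048     # frame size (samples)
q = 0.9      # frame overlap (90%)

hop = N * (1 - q)         # 204.8 samples between frame starts
r = fs / hop              # ~215.3 frames per second
resolution_ms = 1000 / r  # ~4.6 ms detection resolution
print(round(r, 1), round(resolution_ms, 1))
```

This confirms the stated r=215 frames per second and the 4.6 ms resolution, comfortably below the approximately 10 ms temporal hearing resolution.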
Due to the fact that the NINOS2 feature satisfies the cloning property (D4 in [31]), using only 50% of the STFT coefficients will give the same results. Note that this does not hold for the non-normalized INOS2 feature. The percentage γ of STFT frequency bins used to compute the (N)INOS2 feature is set to γ=95.5%, which was found to maximize the F1-score over the tuning dataset for γ values between 90% and 99%.
The peak-picking parameters α=30 ms and a=100 ms are kept as in [21] as these values resulted in better F1-scores for all the algorithms compared to the values found in [36]. The parameters β and b are set to zero to allow the NOD methods to operate online. The parameter Θ is set equal to the detection window length as stated earlier. Finally, the amplitude offset δ—sometimes referred to as the detection threshold—is tuned per feature to maximize their performance on the tuning dataset.
In this section, the performance of three non-data-driven NOD methods, using the proposed NINOS2 (ℓ2ℓ4) ODF in (13), the proposed INOS2 (ℓ1) ODF in (14), and the state-of-the art LSF ODF in (5), is compared. The pre-processing introduced in Section 3.2 is only applied for the proposed methods NINOS2 and INOS2, but not to LSF. Before considering more quantitative results, we first look at a few examples of the different ODFs and the onset detections resulting from the peak picking. This is illustrated in Figs. 6–8 for three different instruments belonging to different instrument groups: the electric guitar (Fig. 6), the cello (Fig. 7), and the trumpet (Fig. 8). The examples chosen here are excerpts taken from the P dataset.
Comparison of ODFs and peak-picking results for electric guitar (major seventh stopped) excerpt. Vertical lines represent onset detection windows indicating ground-truth onsets. Circles are used to mark true positives in the peak-picking results, while false positives are marked with crosses
Comparison of ODFs and peak-picking results for cello (non-vibrato) excerpt. Vertical lines represent onset detection windows indicating ground-truth onsets. Circles are used to mark true positives in the peak-picking results, while false positives are marked with crosses
Onset detection windows indicating ground-truth onsets are marked with vertical black lines. While circles are used to mark true positives in the peak-picking results, false positives are marked with crosses. Finally, the false negatives are easily noted by unmarked detection windows. By visually analyzing these figures, it can be observed that the (N)INOS2 ODF is remarkably smoother than the LSF ODF and presents higher amplitudes at onset times for the different instruments. The lack of smoothness in the LSF ODF as compared to the (N)INOS2 ODF illustrates the sensitivity of the LSF ODF to nonstationary additive noise, which is partly due to the log-magnitude operation in (4). Indeed, despite the studio recording quality of the MUMS library, very-low-level nonstationary background noise can be observed in the signal spectrograms shown in Figs. 6–8, exhibiting the same temporal variation pattern seen in the LSF ODF. When moving from the easier percussive instrument in Fig. 6 to the more complicated non-percussive instruments in Figs. 7 and 8, it can be seen that the LSF ODF fails to show a clear amplitude rise at onset times. Both a high amplitude and a fast-rising amplitude at onset times are two beneficial ODF properties to have onsets successfully detected in the peak-picking step. As a consequence, the LSF ODF in Figs. 6–8 yields a higher number of false positives and false negatives after peak picking, compared to the (N)INOS2 ODF. A final remark that clearly stands out in the guitar example is that true positives are detected in the (N)INOS2 ODF on the rising edge of an ODF amplitude peak rather than at the maximum of the amplitude peak. This is a consequence of the smoothness of the ODF, resulting in an earlier onset detection which is a useful property for online NOD algorithms.
Comparison of ODFs and peak-picking results for trumpet (Bach) excerpt. Vertical lines represent onset detection windows indicating ground-truth onsets. Circles are used to mark true positives in the peak-picking results, while false positives are marked with crosses
After this qualitative comparison, let us take a more quantitative approach by evaluating the previously described performance measures F1-score and detections standard deviation σd. The values of these measures are shown in Tables 2, 3, 4 and 5, summarizing the results obtained for the different datasets introduced earlier (M, P, MR8, PR8). In each table, while the upper rows show the results for the different instruments organized by instrument groups, the lower rows show the results for a subset of the instruments organized into two groups depending on whether they are played with or without vibrato. The instruments included in this subset are cello, viola, violin, flute, flute alto, flute bass, and piccolo. Each table is vertically split into two parts. While the first part compares the F1-scores for the different methods, the second part compares the detections standard deviation σd. Finally, the Total row shows the weighted average of the results for all instrument groups where the weights are determined as the relative number of instruments per group.
Table 2 NOD performance measures for M dataset
Table 3 NOD performance measures for P dataset
Table 4 NOD performance measures for MR8 dataset
Table 5 NOD performance measures for PR8 dataset
Considering both the F1-score and the detections standard deviation σd results, a few trends can be observed. The baseline LSF ODF generally performs better on monophonic music excerpts, whereas the (N)INOS2 ODF is more suited for polyphonic music excerpts. Performance varies strongly across instrument groups, and for the most challenging group of sustained-strings instruments, which exhibit non-percussive and slow-attack onsets, the (N)INOS2 ODF outperforms the LSF ODF in all datasets. For datasets involving repeated notes, the performance gap between the different methods is slightly reduced, but the same observations can be made regarding monophonic vs. polyphonic performance for the F1-score. The detections standard deviation σd for repeated notes is however smaller with the (N)INOS2 ODF for both monophonic and polyphonic excerpts. For the most challenging dataset in which polyphonic excerpts with repeated notes are considered (PR8), the (N)INOS2 ODF outperforms the LSF ODF for all instrument groups except percussions. Moreover, the (N)INOS2 ODF seems more robust to vibrato than the LSF ODF, which can be observed in particular in the monophonic datasets (M and MR8), where the performance improvement of the (N)INOS2 ODF over the LSF ODF is clearly larger for the vibrato group than for the non-vibrato group. Finally, the performance of the NINOS2 (ℓ2ℓ4) ODF and the INOS2 (ℓ1) ODF is fairly similar, with a consistent trend of the NINOS2 (ℓ2ℓ4) ODF to perform slightly better for non-percussive instruments and the INOS2 (ℓ1) ODF for percussive instruments.
In order to widen the scope of the evaluation presented in this paper, the NOD performance obtained with the proposed and baseline non-data-driven ODFs is also assessed using three publicly available and differently annotated datasets, and a performance comparison with a state-of-the-art data-driven NOD method is included. The results are shown in Table 6. The MAPS CL and MAPS AM datasets are part of the MAPS dataset [33], and more specifically correspond to the music pieces portion of ENSTDkCl and ENSTDkAm. The suffixes "CL" and "AM" refer to a close-miked and ambient recording technique. Both MAPS datasets were recorded using a Yamaha Disklavier piano in which annotations are obtained directly from the instrument's MIDI output. The MDS dataset [18] is a manually annotated dataset containing audio excerpts from various sources. It was used to train and evaluate the state-of-the-art data-driven NOD method based on a convolutional neural network (CNN) [18], which is also included in the performance comparison shown in Table 6. A 20% portion of this dataset is used for training the network and the remaining 80% is used for testing. In addition, the same CNN is also tested on the two other (MAPS CL and MAPS AM) datasets. From the results in Table 6, we first observe that the variation of the F1-score across the compared methods is larger for the MDS dataset than for the two MAPS datasets. This could either be attributed to the difficulty of the NOD problem in the MDS dataset or to the higher variability of the manual annotations in the MDS dataset, or both. We also observe that while the NOD performance with the proposed (N)INOS2 ODF falls behind that of the other ODFs for the MDS dataset, the (N)INOS2 ODF outperforms both the LSF ODF and CNN for the MAPS datasets.
Note that, as shown in [32], the poor performance of the CNN when evaluated on the MAPS datasets can be mitigated by aligning the training and evaluation dataset annotations by means of annotations time shifting.
Table 6 NOD performance measures for publicly available datasets
Conclusion and future work
In this paper, we have proposed two new variants as well as a thorough analysis of the (N)INOS2 spectral-sparsity-based NOD feature introduced earlier in [26]. Instead of focusing on the fundamental frequency and harmonic components of a musical note, as traditional non-data-driven NOD methods do, the (N)INOS2 feature uses the subset of low-energy frequency components which are found to contain valuable information on note onsets. By investigating how measures for energy and sparsity can be combined and normalized, two new spectral-sparsity-based NOD features are introduced, as well as their related ODFs describing the feature variation over time. These can be combined with a baseline peak-picking procedure to obtain a novel non-data-driven NOD method.
Simulation results are reported for a newly developed automatically annotated semi-synthetic dataset covering a large and diverse set of instruments, playing styles, and melody mixing options, in terms of the F1-score and a newly introduced measure, the detections standard deviation σd. In terms of the F1-score, the NOD performance using the LSF ODF is better for monophonic music excerpts, whereas the (N)INOS2 ODF performs better for polyphonic music excerpts. In terms of the detections standard deviation σd, the benefit of the LSF ODF over the (N)INOS2 ODF for monophonic excerpts disappears for scenarios with repeated notes. Both performance measures illustrate that the (N)INOS2 ODF is more suitable than the LSF ODF for the most challenging instrument group, i.e., sustained-strings instruments, and playing style, i.e., vibrato performance.
As the proposed INOS2 (ℓ1) ODF is considerably cheaper to implement than the proposed NINOS2 (ℓ2ℓ4) ODF and the baseline LSF ODF, both in terms of computational complexity and memory usage, it seems to be the preferred feature to use for non-data-driven NOD.
Future work could include a deeper performance comparison on publicly available manually annotated NOD datasets, showing results per instrument groups. Also, in future experiments, datasets should preferably be equilibrated in terms of instruments, i.e., the different instrument groups should be populated similarly, which was not the case for the results presented in this paper. Finally, an instrument-dependent parameter tuning of the different methods and its impact on the resulting performance is worth investigating.
The data that support the findings of this study are available from McGill University [35] but restrictions apply to the availability of these data, which were used under license for the current study, and so are not publicly available. Data are however available from the authors upon reasonable request and with permission of McGill University [35].
In some publications, the SF ODF is defined by squaring the output of the half-wave rectification operator in (1) before summing over all frequency bins. Also, the lower summation index in (1) and (5) is sometimes chosen as k=−N/2 which results in a scaling of the resulting ODF with a factor of 2, due to the symmetry of the magnitude spectrum and the absence of a DC component.
Code and metadata available at https://gitlab.esat.kuleuven.be/dsp-public/mix-notes-mina-mounir
Midi Aligned Piano Sounds dataset - freely available under Creative Commons license
MDS excerpts partition in Train/Test folds can be retrieved from: https://gitlab.esat.kuleuven.be/dsp-public/mix-notes-mina-mounir/-/tree/master/Datasets_meta
P. Masri, Computer modelling of sound for transformation and synthesis of musical signals. PhD thesis (University of Bristol, UK, 1996).
M. Mounir, Note onset detection using sparse over-complete representation of musical signals. Master's thesis. University of Lugano, Advanced Learning and Research Institute, Lugano, Switzerland (2013). ftp://ftp.esat.kuleuven.be/stadius/mshehata/mscthesis/mshehatamsc.pdf.
J. P. Bello, L. Daudet, S. Abdallah, C. Duxbury, M. Davies, M. B. Sandler, A tutorial on onset detection in music signals. IEEE Trans. Speech Audio Process. 13(5), 1035–1047 (2005).
P. Leveau, L. Daudet, in Proc. 5th Int. Symp. on Music Information Retrieval (ISMIR '04). Methodology and tools for the evaluation of automatic onset detection algorithms in music (International Society for Music Information Retrieval (ISMIR), Barcelona, 2004), pp. 72–75.
E. Benetos, S. Dixon, in Proc. 2011 IEEE Int. Conf. Acoust., Speech, Signal Process. (ICASSP '11). Polyphonic music transcription using note onset and offset detection (Institute of Electrical and Electronics Engineers (IEEE), Prague, 2011), pp. 37–40.
E. Benetos, S. Dixon, Z. Duan, S. Ewert, Automatic music transcription: an overview. IEEE Signal Process. Mag. 36(1), 20–30 (2019).
T. Cheng, M. Mauch, E. Benetos, S. Dixon, in Proc. 17th Int. Symp. on Music Information Retrieval (ISMIR '16). An attack/decay model for piano transcription (International Society for Music Information Retrieval (ISMIR), New York, 2016).
M. F. McKinney, D. Moelants, M. E. P. Davies, A. Klapuri, Evaluation of audio beat tracking and music tempo extraction algorithms. J. New Music Res. 36(1), 1–16 (2007).
B. D. Giorgi, M. Zanoni, S. Böck, A. Sarti, Multipath beat tracking. J. Audio Eng. Soc. 64(7/8), 493–502 (2016).
P. Masri, A. Bateman, in Proc. Int. Computer Music Conf. (ICMC '96). Improved modelling of attack transients in music analysis-resynthesis (International Computer Music Association (ICMA), Hong Kong, 1996), pp. 100–103.
V. Verfaille, U. Zölzer, D. Arfib, Adaptive digital audio effects (a-DAFx): a new class of sound transformations. IEEE Trans. Audio Speech Lang. Process. 14(5), 1817–1831 (2006).
O. Celma, Music recommendation and discovery (Springer, Berlin, 2010).
N. Degara, M. E. P. Davies, A. Pena, M. D. Plumbley, Onset event decoding exploiting the rhythmic structure of polyphonic music. IEEE J. Sel. Top. Signal Process. 5(6), 1228–1239 (2011).
S. Böck, A. Arzt, F. Krebs, in Proc. 15th Int. Conf. Digital Audio Effects (DAFx '12). Online real-time onset detection with recurrent neural networks (Digital Audio Effects (DAFx), York, 2012).
M. Mounir, P. Karsmakers, T. van Waterschoot, in Proc. 28th European Signal Process. Conf. (EUSIPCO '20). CNN-based note onset detection using synthetic data augmentation (2021), pp. 171–175.
M. Lin, Y. Feng, in Proc. 7th Conf. Sound Music Technol. (CSMT '20). A post-processing of onset detection based on verification with neural network (Springer, Singapore, 2020), pp. 67–80.
F. Cong, S. Liu, L. Guo, G. A. Wiggins, in Proc. 2018 IEEE Int. Conf. Acoust., Speech, Signal Process. (ICASSP '18). A parallel fusion approach to piano music transcription based on convolutional neural network (Institute of Electrical and Electronics Engineers (IEEE), Calgary, 2018), pp. 391–395.
J. Schluter, S. Bock, in Proc. 2014 IEEE Int. Conf. Acoust., Speech, Signal Process. (ICASSP '14). Improved musical onset detection with convolutional neural networks (Institute of Electrical and Electronics Engineers (IEEE), Florence, 2014), pp. 6979–6983.
MIREX, 2015 onset detection results (2015). http://nema.lis.illinois.edu/nema_out/mirex2015/results/aod/.
S. Böck, F. Krebs, M. Schedl, in Proc. 13th Int. Symp. on Music Information Retrieval (ISMIR '12). Evaluating the online capabilities of onset detection methods (International Society for Music Information Retrieval (ISMIR), Porto, 2012), pp. 49–54.
S. Böck, G. Widmer, in Proc. 14th Int. Symp. on Music Information Retrieval (ISMIR '13). Local group delay based vibrato and tremolo suppression for onset detection (International Society for Music Information Retrieval (ISMIR), Curitiba, 2013), pp. 361–366.
S. Böck, G. Widmer, in Proc. 16th Int. Conf. Digital Audio Effects (DAFx '13). Maximum filter vibrato suppression for onset detection (Digital Audio Effects (DAFx), Maynooth, 2013).
D. Stowell, M. Plumbley, in Proc. Int. Computer Music Conf. (ICMC '07). Adaptive whitening for improved real-time audio onset detection (International Computer Music Association (ICMA), Copenhagen, 2007), pp. 312–319.
C. Liang, L. Su, Y. Yang, Musical onset detection using constrained linear reconstruction. IEEE Signal Process. Lett. 22(11), 2142–2146 (2015).
N. Kroher, E. Gómez, Automatic transcription of flamenco singing from polyphonic music recordings. IEEE Trans. Audio Speech Lang. Process. 24(5), 901–913 (2016).
M. Mounir, P. Karsmakers, T. van Waterschoot, in Proc. 24th European Signal Process. Conf. (EUSIPCO '16). Guitar note onset detection based on a spectral sparsity measure (European Association for Signal Processing (EURASIP), Budapest, 2016), pp. 978–982.
T. S. Verma, T. H. Y. Meng, in Proc. 1999 IEEE Int. Conf. Acoust., Speech, Signal Process. (ICASSP '99). Sinusoidal modeling using frame-based perceptually weighted matching pursuits (Institute of Electrical and Electronics Engineers (IEEE), Phoenix, 1999), pp. 981–984.
X. Shao, W. Gui, C. Xu, Note onset detection based on sparse decomposition. Multimed. Tools Appl. 75, 2613–2631 (2016).
J. J. Valero-Mas, J. M. Iñesta, in Proc. 14th Sound Music Comput. Conf. (SMC '17). Experimental assessment of descriptive statistics and adaptive methodologies for threshold establishment in onset selection functions (Sound and Music Computing (SMC) Network, Espoo, 2017), pp. 117–124.
A. Klapuri, in Proc. 1999 IEEE Int. Conf. Acoust., Speech, Signal Process. (ICASSP '99), 6. Sound onset detection by applying psychoacoustic knowledge (Institute of Electrical and Electronics Engineers (IEEE), Phoenix, 1999), pp. 3089–3092.
N. Hurley, S. Rickard, Comparing measures of sparsity. IEEE Trans. Inf. Theory 55(10), 4723–4741 (2009).
M. Mounir, P. Karsmakers, T. van Waterschoot, in Proc. 2019 IEEE Workshop Appls. Signal Process. Audio Acoust. (WASPAA '19). Annotations time shift: a key parameter in evaluating musical note onset detection algorithms (Institute of Electrical and Electronics Engineers (IEEE), New Paltz, 2019), pp. 21–25.
V. Emiya, N. Bertin, B. David, R. Badeau, MAPS – a piano database for multipitch estimation and automatic transcription of music. Technical Report inria-00544155. Télécom ParisTech, France (2010). https://hal.inria.fr/inria-00544155/document.
A. Ycart, E. Benetos, in ISMIR 2018 Late Breaking and Demo Papers. A-MAPS: augmented MAPS dataset with rhythm and key annotations (International Society for Music Information Retrieval (ISMIR), Paris, 2018).
F. Opolko, J. Wapnick, McGill University Master Samples, DVD edition (McGill University, Montreal, QC, Canada, 2006).
S. Böck, F. Korzeniowski, J. Schlüter, F. Krebs, G. Widmer, in Proc. 24th ACM Int. Conf. Multimedia (MM '16). Madmom: a new Python audio and music signal processing library (Association for Computing Machinery (ACM), Amsterdam, 2016), pp. 1174–1178.
This research work was carried out at the ESAT Laboratory of KU Leuven. The research leading to these results has received funding from the KU Leuven Internal Funds C2-16-00449, IMP/14/037, and VES/19/004, from the Flemish Government under the "Onderzoeksprogramma Artificiële Intelligentie (AI) Vlaanderen" program, and from the European Research Council under the European Union's Horizon 2020 research and innovation program/ERC Consolidator Grant: SONORA (no. 773268). This paper reflects only the authors' views, and the Union is not liable for any use that may be made of the contained information.
KU Leuven, Department of Electrical Engineering (ESAT), STADIUS Center for Dynamical Systems, Signal Processing, and Data Analytics, Kasteelpark Arenberg 10, Leuven, 3001, Belgium
Mina Mounir & Toon van Waterschoot
KU Leuven, Department of Computer Science, Declarative Languages and Artificial Intelligence (DTAI), Geel Campus, Kleinhoefstraat 4, Geel, 2440, Belgium
Peter Karsmakers
MM invented the concept of using a spectral sparsity measure for musical note onset detection. MM, PK, and TVW jointly developed the research methodology to turn this concept into a usable and effective signal processing algorithm. MM, PK, and TVW jointly designed and interpreted the computer simulations. MM implemented the computer simulations and managed the data set used. MM and TVW contributed in writing the manuscript. All authors read and approved the final manuscript.
Correspondence to Toon van Waterschoot.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
Mounir, M., Karsmakers, P. & van Waterschoot, T. Musical note onset detection based on a spectral sparsity measure. J AUDIO SPEECH MUSIC PROC. 2021, 30 (2021). https://doi.org/10.1186/s13636-021-00214-7
Keywords: Note onset detection, Music signal analysis, Sparsity
# PHP syntax and best practices
2. Variables and data types in PHP
3. Arrays and their use in PHP
4. Conditional statements: if, else, elseif
5. Loops: for, while, do-while
6. Functions: creation, arguments, return values
7. Creating a web application with PHP
8. Working with databases in PHP
9. Handling user input and form validation
10. Security considerations in PHP
11. PHP frameworks and libraries
## Exercise
Instructions:
Write a PHP script that takes a user's name as input and displays a personalized greeting.
### Solution
```php
<!DOCTYPE html>
<html>
<head>
<title>Personalized Greeting</title>
</head>
<body>
<form method="post" action="greeting.php">
<label for="name">Enter your name:</label>
<input type="text" name="name" id="name">
<input type="submit" value="Submit">
</form>
</body>
</html>
```
```php
// greeting.php
<?php
$name = $_POST['name'];
echo "Hello, " . $name . "!";
?>
```
## Exercise
Instructions:
Write a PHP script that takes two numbers as input and calculates their sum.
### Solution
```php
<!DOCTYPE html>
<html>
<head>
<title>Sum Calculator</title>
</head>
<body>
<form method="post" action="sum.php">
<label for="num1">Enter the first number:</label>
<input type="number" name="num1" id="num1">
<br>
<label for="num2">Enter the second number:</label>
<input type="number" name="num2" id="num2">
<br>
<input type="submit" value="Calculate Sum">
</form>
</body>
</html>
```
```php
// sum.php
<?php
$num1 = $_POST['num1'];
$num2 = $_POST['num2'];
$sum = $num1 + $num2;
echo "The sum of " . $num1 . " and " . $num2 . " is " . $sum . ".";
?>
```
# Variables and data types in PHP
To declare a variable in PHP, you use the `$` symbol followed by the variable name. You can assign a value to the variable using the `=` operator. For example:
```php
$name = "John Doe";
$age = 30;
$isStudent = true;
```
You can also use the `var_dump()` function to display the data type and value of a variable. For example:
```php
$name = "John Doe";
var_dump($name);
```
This will output:
```
string(8) "John Doe"
```
In PHP, you can also use the `gettype()` function to determine the data type of a variable. For example:
```php
$name = "John Doe";
echo gettype($name);
```
This will output:
```
string
```
PHP also supports variable variables, which allow you to change the name of a variable dynamically. For example:
```php
$varname = "name";
$$varname = "John Doe";
echo $name;
```
This will output:
```
John Doe
```
## Exercise
Instructions:
Create a PHP script that declares and initializes three variables: `$firstName`, `$lastName`, and `$age`. Then, use the `var_dump()` function to display the data types and values of these variables.
### Solution
```php
<?php
$firstName = "John";
$lastName = "Doe";
$age = 30;
var_dump($firstName, $lastName, $age);
?>
```
This will output:
```
string(4) "John"
string(3) "Doe"
int(30)
```
# Arrays and their use in PHP
To create an array in PHP, you can use the `array()` function or the shorthand syntax `[]`. For example:
```php
$fruits = array("apple", "banana", "orange");
$fruits = ["apple", "banana", "orange"];
```
You can access array elements using their index (starting from 0). For example:
```php
$fruits = ["apple", "banana", "orange"];
echo $fruits[0];
```
This will output:
```
apple
```
You can also use the `count()` function to determine the number of elements in an array. For example:
```php
$fruits = ["apple", "banana", "orange"];
echo count($fruits);
```
This will output:
```
3
```
PHP provides several functions for working with arrays, such as `array_push()`, `array_pop()`, and `array_merge()`. For example:
```php
$fruits = ["apple", "banana", "orange"];
array_push($fruits, "grape");
echo count($fruits);
```
This will output:
```
4
```
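The `array_pop()` and `array_merge()` functions mentioned above work as follows: `array_pop()` removes and returns the last element, while `array_merge()` combines arrays into a new one. A short sketch:

```php
<?php
// array_pop() removes and returns the last element of an array.
$fruits = ["apple", "banana", "orange"];
$last = array_pop($fruits);
echo $last;           // orange
echo count($fruits);  // 2

// array_merge() combines two arrays into a new array.
$citrus = ["lemon", "lime"];
$all = array_merge($fruits, $citrus);
echo count($all);     // 4
?>
```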
## Exercise
Instructions:
Create a PHP script that declares an array of fruits and appends a new fruit to the array. Then, use the `count()` function to display the number of elements in the array.
### Solution
```php
<?php
$fruits = ["apple", "banana", "orange"];
array_push($fruits, "grape");
echo count($fruits);
?>
```
This will output:
```
4
```
# Conditional statements: if, else, elseif
The if statement is used to execute a block of code if a condition is true. For example:
```php
$age = 18;
if ($age >= 18) {
    echo "You are an adult.";
}
```
The else statement is used to execute a block of code if the condition in the if statement is false. For example:
```php
$age = 17;
if ($age >= 18) {
    echo "You are an adult.";
} else {
    echo "You are not an adult.";
}
```
The elseif statement is used to check multiple conditions. For example:
```php
$age = 17;
if ($age >= 65) {
    echo "You are a senior citizen.";
} elseif ($age >= 18) {
    echo "You are an adult.";
} else {
    echo "You are not an adult.";
}
```
The ternary operator is a shorthand way to write an if-else statement. For example:
```php
$age = 18;
$status = ($age >= 18) ? "adult" : "not adult";
echo "You are " . $status;
```
This will output:
```
You are adult
```
## Exercise
Instructions:
Create a PHP script that takes a number as input and prints whether it is positive, negative, or zero.
### Solution
```php
<!DOCTYPE html>
<html>
<head>
<title>Number Classification</title>
</head>
<body>
<form method="post" action="classify.php">
<label for="number">Enter a number:</label>
<input type="number" name="number" id="number">
<input type="submit" value="Classify">
</form>
</body>
</html>
```
```php
// classify.php
<?php
$number = $_POST['number'];
if ($number > 0) {
    echo "The number is positive.";
} elseif ($number < 0) {
    echo "The number is negative.";
} else {
    echo "The number is zero.";
}
?>
```
# Loops: for, while, do-while
The for loop is used to iterate over a block of code a specific number of times. For example:
```php
for ($i = 0; $i < 5; $i++) {
    echo $i;
}
```
This will output:
```
01234
```
The while loop is used to iterate over a block of code as long as a condition is true. For example:
```php
$i = 0;
while ($i < 5) {
    echo $i;
    $i++;
}
```
This will output:
```
01234
```
The do-while loop is similar to the while loop, but the condition is checked after the block of code is executed. For example:
```php
$i = 0;
do {
    echo $i;
    $i++;
} while ($i < 5);
```
This will output:
```
01234
```
## Exercise
Instructions:
Create a PHP script that prints the numbers from 1 to 10 using a for loop.
### Solution
```php
<?php
for ($i = 1; $i <= 10; $i++) {
    echo $i;
}
?>
```
This will output:
```
12345678910
```
# Functions: creation, arguments, return values
To create a function in PHP, you use the `function` keyword followed by the function name and a pair of parentheses. For example:
```php
function greet($name) {
    echo "Hello, " . $name . "!";
}
```
You can pass arguments to a function by including them in the parentheses. For example:
```php
function add($num1, $num2) {
    return $num1 + $num2;
}
```
You can return a value from a function using the `return` keyword. For example:
```php
function add($num1, $num2) {
    return $num1 + $num2;
}
$sum = add(5, 3);
echo $sum;
```
This will output:
```
8
```
## Exercise
Instructions:
Create a PHP script that defines a function called `multiply` that takes two arguments and returns their product. Then, call the `multiply` function with two numbers and display the result.
### Solution
```php
<?php
function multiply($num1, $num2) {
    return $num1 * $num2;
}
$product = multiply(4, 3);
echo $product;
?>
```
This will output:
```
12
```
# Creating a web application with PHP
To connect to a database in PHP, you can use the `mysqli` or `PDO` extension. For example, using `mysqli`:
```php
$servername = "localhost";
$username = "username";
$password = "password";
$dbname = "myDB";
$conn = new mysqli($servername, $username, $password, $dbname);
if ($conn->connect_error) {
    die("Connection failed: " . $conn->connect_error);
}
```
To handle user input in PHP, you can use the `$_GET`, `$_POST`, and `$_REQUEST` superglobals. For example:
```php
$name = $_GET['name'];
$age = $_POST['age'];
$email = $_REQUEST['email'];
```
To secure your PHP application, you should follow best practices such as validating user input, sanitizing data, and using prepared statements when interacting with a database.
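Prepared statements, mentioned above, bind user input as parameters instead of splicing it directly into the SQL string, which prevents SQL injection. A minimal sketch using `mysqli` (the `$conn` connection and the `users` table with `name` and `age` columns are assumptions carried over from the earlier examples):

```php
<?php
// Assumes an open mysqli connection $conn and a `users` table
// with `name` (string) and `age` (integer) columns.
$name = $_POST['name'];
$age = (int) $_POST['age'];

// The ? placeholders keep user input out of the SQL string itself.
$stmt = $conn->prepare("INSERT INTO users (name, age) VALUES (?, ?)");

// "si" declares the parameter types: string, then integer.
$stmt->bind_param("si", $name, $age);
$stmt->execute();
$stmt->close();
?>
```

Even if `$name` contains SQL syntax such as quotes, it is treated as plain data rather than as part of the query.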
## Exercise
Instructions:
Create a PHP script that connects to a database, retrieves data from a table, and displays the data in a table format.
### Solution
```php
<?php
// Database connection
$servername = "localhost";
$username = "username";
$password = "password";
$dbname = "myDB";
$conn = new mysqli($servername, $username, $password, $dbname);
if ($conn->connect_error) {
    die("Connection failed: " . $conn->connect_error);
}
// Retrieve data from table
$sql = "SELECT id, name, age FROM users";
$result = $conn->query($sql);
// Display data in table format
echo "<table>";
echo "<tr><th>ID</th><th>Name</th><th>Age</th></tr>";
if ($result->num_rows > 0) {
    while ($row = $result->fetch_assoc()) {
        echo "<tr><td>" . $row["id"] . "</td><td>" . $row["name"] . "</td><td>" . $row["age"] . "</td></tr>";
    }
} else {
    echo "<tr><td colspan='3'>No results</td></tr>";
}
echo "</table>";
$conn->close();
?>
```
# Working with databases in PHP
To connect to a database in PHP, you can use the `mysqli` or `PDO` extension. For example, using `mysqli`:
```php
$servername = "localhost";
$username = "username";
$password = "password";
$dbname = "myDB";
$conn = new mysqli($servername, $username, $password, $dbname);
if ($conn->connect_error) {
    die("Connection failed: " . $conn->connect_error);
}
```
To perform CRUD operations (Create, Read, Update, Delete) on a database, you can use SQL statements such as `INSERT`, `SELECT`, `UPDATE`, and `DELETE`. For example:
```php
// Create a new user
$sql = "INSERT INTO users (name, age) VALUES ('John Doe', 30)";
$conn->query($sql);
// Read all users
$sql = "SELECT id, name, age FROM users";
$result = $conn->query($sql);
// Update a user
$sql = "UPDATE users SET age = 31 WHERE id = 1";
$conn->query($sql);
// Delete a user
$sql = "DELETE FROM users WHERE id = 1";
$conn->query($sql);
```
To query data from a database, you can use the `mysqli` or `PDO` extension's functions such as `query()`, `fetch_assoc()`, and `num_rows`. For example:
```php
$sql = "SELECT id, name, age FROM users";
$result = $conn->query($sql);
if ($result->num_rows > 0) {
    while ($row = $result->fetch_assoc()) {
        echo "ID: " . $row["id"] . " - Name: " . $row["name"] . " - Age: " . $row["age"] . "<br>";
    }
} else {
    echo "No results";
}
```
## Exercise
Instructions:
Create a PHP script that connects to a database, inserts a new user, and retrieves all users from the database.
### Solution
```php
<?php
// Database connection
$servername = "localhost";
$username = "username";
$password = "password";
$dbname = "myDB";
$conn = new mysqli($servername, $username, $password, $dbname);
if ($conn->connect_error) {
    die("Connection failed: " . $conn->connect_error);
}
// Insert a new user
$sql = "INSERT INTO users (name, age) VALUES ('John Doe', 30)";
$conn->query($sql);
// Retrieve all users
$sql = "SELECT id, name, age FROM users";
$result = $conn->query($sql);
if ($result->num_rows > 0) {
    while ($row = $result->fetch_assoc()) {
        echo "ID: " . $row["id"] . " - Name: " . $row["name"] . " - Age: " . $row["age"] . "<br>";
    }
} else {
    echo "No results";
}
$conn->close();
?>
```
# Handling user input and form validation
To handle user input in PHP, you can use the `$_GET`, `$_POST`, and `$_REQUEST` superglobals. For example:
```php
$name = $_GET['name'];
$age = $_POST['age'];
$email = $_REQUEST['email'];
```
To sanitize user input, you can use the `filter_input()` function. Note that the `FILTER_SANITIZE_STRING` filter used below is deprecated as of PHP 8.1; escaping output with `htmlspecialchars()` is the recommended modern alternative. For example:
```php
$name = filter_input(INPUT_GET, 'name', FILTER_SANITIZE_STRING);
```
To validate user input, you can use the `filter_input()` function with custom filters. For example:
```php
$age = filter_input(INPUT_POST, 'age', FILTER_VALIDATE_INT, array('options' => array('min_range' => 1, 'max_range' => 120)));
```
## Exercise
Instructions:
Create a PHP script that takes a user's name and age as input from a form and validates the input. If the input is valid, display a personalized greeting.
### Solution
```php
<!DOCTYPE html>
<html>
<head>
<title>User Input Validation</title>
</head>
<body>
<form method="post" action="validate.php">
<label for="name">Enter your name:</label>
<input type="text" name="name" id="name">
<br>
<label for="age">Enter your age:</label>
<input type="number" name="age" id="age">
<br>
<input type="submit" value="Submit">
</form>
</body>
</html>
```
```php
// validate.php
<?php
$name = filter_input(INPUT_POST, 'name', FILTER_SANITIZE_STRING);
$age = filter_input(INPUT_POST, 'age', FILTER_VALIDATE_INT, array('options' => array('min_range' => 1, 'max_range' => 120)));
if ($name && $age) {
    echo "Hello, " . $name . "! You are " . $age . " years old.";
} else {
    echo "Invalid input.";
}
?>
```
# Security considerations in PHP
To secure your PHP application, you should follow best practices such as validating user input, sanitizing data, and using prepared statements when interacting with a database.
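One concrete piece of the sanitizing advice above is escaping output with `htmlspecialchars()`, which converts HTML special characters into entities before they reach the browser and helps prevent cross-site scripting (XSS):

```php
<?php
// Malicious input that would execute as a script if echoed raw.
$input = '<script>alert("XSS")</script>';

// Convert special characters (including quotes) to HTML entities.
echo htmlspecialchars($input, ENT_QUOTES, 'UTF-8');
// Output: &lt;script&gt;alert(&quot;XSS&quot;)&lt;/script&gt;
?>
```

The browser then displays the input as text instead of executing it.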
## Exercise
Instructions:
Create a PHP script that connects to a database, retrieves data from a table, and displays the data in a table format. Ensure that you follow best practices for securing the script.
### Solution
```php
<?php
// Database connection
$servername = "localhost";
$username = "username";
$password = "password";
$dbname = "myDB";
$conn = new mysqli($servername, $username, $password, $dbname);
if ($conn->connect_error) {
    die("Connection failed: " . $conn->connect_error);
}
// Retrieve data from table
$sql = "SELECT id, name, age FROM users";
$result = $conn->query($sql);
// Display data in table format
echo "<table>";
echo "<tr><th>ID</th><th>Name</th><th>Age</th></tr>";
if ($result->num_rows > 0) {
    while ($row = $result->fetch_assoc()) {
        echo "<tr><td>" . $row["id"] . "</td><td>" . $row["name"] . "</td><td>" . $row["age"] . "</td></tr>";
    }
} else {
    echo "<tr><td colspan='3'>No results</td></tr>";
}
echo "</table>";
$conn->close();
?>
```
# PHP frameworks and libraries
Some popular PHP frameworks include Laravel, Symfony, and CodeIgniter. These frameworks provide a set of tools and conventions to make it easier to build web applications.
Some popular PHP libraries include PHPUnit for testing, Composer for dependency management, and Guzzle for making HTTP requests. These libraries can help you write efficient and maintainable code.
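To give a flavor of PHPUnit, mentioned above, a test case is a class extending `PHPUnit\Framework\TestCase` whose `test*` methods make assertions. A minimal sketch (assuming PHPUnit has been installed via Composer; run it with `vendor/bin/phpunit`):

```php
<?php
use PHPUnit\Framework\TestCase;

// A minimal PHPUnit test case for the add() function from earlier.
class MathTest extends TestCase
{
    public function testAddition(): void
    {
        // assertSame checks both value and type.
        $this->assertSame(8, 5 + 3);
    }
}
```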
## Exercise
Instructions:
Create a simple PHP application using the Laravel framework.
### Solution
1. Install Laravel using Composer:
```
composer global require laravel/installer
```
2. Create a new Laravel project:
```
laravel new my-app
```
3. Change into the project directory:
```
cd my-app
```
4. Run the development server:
```
php artisan serve
```
5. Open your browser and navigate to `http://localhost:8000` to see your Laravel application.